50+ ETL Jobs in India



About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research in the USA for over 45 years. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission-critical data services to global customers, powered by Data Axle's proprietary cloud-based technology platform and its proprietary business and consumer databases. Data Axle is headquartered in Dallas, TX, USA.
Roles and Responsibilities:
- Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
- Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage.
- Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
- Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
- Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
- Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
- Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
- Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
- Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
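For readers less familiar with the GCP-native pipeline work listed above, here is a minimal, hedged sketch (not Data Axle's actual implementation) of a streaming Dataflow job built with the Apache Beam Python SDK that reads messages from Pub/Sub and appends rows to BigQuery; the project, subscription, table, and schema names are illustrative assumptions.

```python
# Minimal sketch of a streaming Dataflow (Apache Beam) pipeline: Pub/Sub -> BigQuery.
# All resource names (project, subscription, table) are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_message(message: bytes) -> dict:
    """Decode a Pub/Sub message payload into a BigQuery-compatible row."""
    record = json.loads(message.decode("utf-8"))
    return {
        "event_id": record["id"],
        "event_type": record["type"],
        "payload": json.dumps(record),
    }


def run() -> None:
    options = PipelineOptions(streaming=True)  # Dataflow runner flags would be added here
    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")
            | "ParseJSON" >> beam.Map(parse_message)
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                table="my-project:analytics.events",
                schema="event_id:STRING,event_type:STRING,payload:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

The same pattern extends to batch workloads by swapping the Pub/Sub read for a bounded source and dropping the streaming flag.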
Basic Qualifications
- 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
- Strong proficiency in SQL and experience with BigQuery for data warehousing.
- Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
- Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
- Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
- Knowledge of data governance, security best practices, and compliance requirements in cloud environments.
Preferred Qualifications:
- Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
- Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
- Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
- Proficiency in Python, Java, or Scala for data engineering and pipeline development.
- Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
- Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
- Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.
We’re looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows.
- Collaborate with teams to gather requirements and build scalable solutions.
- Ensure data governance, security, and optimal performance of systems.
- Mentor junior engineers and drive end-to-end project delivery.
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects.
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms.
- Expertise in big data tools (e.g., Apache Spark, Kafka).
- Excellent communication skills and leadership abilities.
Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
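As a rough illustration of the workflow orchestration called out in the preferred skills above, the sketch below shows a minimal Apache Airflow 2.x DAG chaining a placeholder extract task into a load task; the DAG id, schedule, and task logic are assumptions, not part of any specific role's stack.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> load chain with placeholder logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull rows from a source system (API, database, file share).
    return [{"id": 1, "amount": 42.0}]


def load(**context):
    # Placeholder: push the extracted rows into the warehouse.
    rows = context["ti"].xcom_pull(task_ids="extract")
    print(f"Loading {len(rows)} rows into the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```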

As a Solution Architect, you will collaborate with our sales, presales and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You possess strong written and verbal communication skills, enabling you to convey complex technical concepts clearly and effectively. You are a team player: customer-focused, self-motivated, and responsible, and you can work under pressure with a positive attitude. You must have experience in managing RFPs/RFIs, delivering client demos and presentations, and converting opportunities into winning bids. You bring a strong work ethic, a positive attitude, and enthusiasm for new challenges. You can multi-task and prioritize (good time management skills) and are willing to keep learning. You should be able to work independently with little or no supervision, be process-oriented, take a methodical approach, and demonstrate a quality-first mindset.
The ability to convert a client's business challenges and priorities into a winning proposal or bid through excellence in technical solutioning will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new-generation analytics platforms such as Microsoft Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate data services (e.g., Microsoft Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes.
- Apply a strong understanding of conceptual, logical, and physical data modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
What you will Bring
- 10+ years of experience in data analytics and AI technologies from consulting, implementation, and design perspectives
- Certifications in data engineering, analytics, cloud, or AI will be a distinct advantage
- A bachelor's degree in engineering/technology or an MCA from a reputed college is a must
- Prior experience working as a solution architect during the presales cycle will be an advantage
Soft Skills
- Communication Skills
- Presentation Skills
- Flexible and Hard-working
Technical Skills
- Knowledge of Presales Processes
- Basic understanding of business analytics and AI
- High IQ and EQ
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Job Summary:
As a Data Engineering Lead, your role will involve designing, developing, and implementing interactive dashboards and reports using data engineering tools. You will work closely with stakeholders to gather requirements and translate them into effective data visualizations that provide valuable insights. Additionally, you will be responsible for extracting, transforming, and loading data from multiple sources into Power BI, ensuring its accuracy and integrity. Your expertise in Power BI and data analytics will contribute to informed decision-making and support the organization in driving data-centric strategies and initiatives.
We are looking for you!
As an ideal candidate for the Data Engineering Lead position, you embody the qualities of a team player with a relentless get-it-done attitude. Your intellectual curiosity and customer focus drive you to continuously seek new ways to add value to your job accomplishments.
You thrive under pressure, maintaining a positive attitude and understanding that your career is a journey. You are willing to make the right choices to support your growth. In addition to your excellent communication skills, both written and verbal, you have a proven ability to create visually compelling designs using tools like Power BI and Tableau that effectively communicate our core values.
You build high-performing, scalable, enterprise-grade applications and teams. Your creativity and proactive nature enable you to think differently, find innovative solutions, deliver high-quality outputs, and ensure customers remain referenceable. With over eight years of experience in data engineering, you possess a strong sense of self-motivation and take ownership of your responsibilities. You prefer to work independently with little to no supervision.
You are process-oriented, adopt a methodical approach, and demonstrate a quality-first mindset. You have led mid to large-size teams and accounts, consistently using constructive feedback mechanisms to improve productivity, accountability, and performance within the team. Your track record showcases your results-driven approach, as you have consistently delivered successful projects with customer case studies published on public platforms. Overall, you possess a unique combination of skills, qualities, and experiences that make you an ideal fit to lead our data engineering team(s).
You value inclusivity and want to join a culture that empowers you to show up as your authentic self.
You know that success hinges on commitment, our differences make us stronger, and the finish line is always sweeter when the whole team crosses together. In your role, you should be driving the team using data, data, and more data. You will manage multiple teams, oversee agile stories and their statuses, handle escalations and mitigations, plan ahead, identify hiring needs, collaborate with recruitment teams for hiring, enable sales with pre-sales teams, and work closely with development managers/leads for solutioning and delivery statuses, as well as architects for technology research and solutions.
What You Will Do:
- Analyze Business Requirements.
- Analyze the data model and perform gap analysis against business requirements and Power BI; design and model the Power BI schema.
- Transform data in Power BI, SQL, or an ETL tool.
- Create DAX formulas, reports, and dashboards.
- Write SQL queries and stored procedures.
- Design effective Power BI solutions based on business requirements.
- Manage a team of Power BI developers and guide their work.
- Integrate data from various sources into Power BI for analysis.
- Optimize performance of reports and dashboards for smooth usage.
- Collaborate with stakeholders to align Power BI projects with goals.
- Knowledge of data warehousing (must); data engineering is a plus
What we need?
- B.Tech in Computer Science or equivalent
- Minimum of 5 years of relevant experience
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Job Description:
We are looking for an experienced PL/SQL Developer to join our team. The ideal candidate should have a strong background in database development and optimization, with hands-on experience in writing complex PL/SQL code and working with large-scale databases.
Key Responsibilities:
- Design, develop, and optimize PL/SQL procedures, functions, packages, and triggers.
- Analyze business requirements and translate them into technical specifications.
- Write efficient SQL queries and improve existing code for performance.
- Perform data analysis and troubleshooting for production issues.
- Collaborate with application developers and business analysts to integrate database logic into applications.
- Ensure database security, integrity, and backup procedures are followed.
Required Skills:
- Strong experience in Oracle PL/SQL development.
- Expertise in writing complex SQL queries, stored procedures, and performance tuning.
- Good understanding of database design and data modeling.
- Experience with version control and deployment tools.
- Familiarity with ETL processes and tools is a plus.
- Strong problem-solving and analytical skills.
- Good communication and collaboration skills.
Preferred Qualifications:
- Experience working in Agile/Scrum environments.
- Exposure to cloud databases or migration projects is an advantage.
- Minimum 3 years of experience as a Data Engineer is required.
- Database knowledge: Experience with time-series and graph databases is a must, along with SQL databases (PostgreSQL, MySQL) or NoSQL databases such as Firestore and MongoDB.
- Data pipelines: Understanding of data pipeline processes such as ETL, ELT, and streaming pipelines, with tools like AWS Glue, Google Dataflow, Apache Airflow, and Apache NiFi.
- Data modeling: Knowledge of snowflake schemas and fact and dimension tables.
- Data warehousing tools: Experience with Google BigQuery, Snowflake, and Databricks.
- Performance optimization: Indexing, partitioning, caching, and query optimization techniques.
- Python or SQL scripting: Ability to write scripts for data processing and automation.
Founded in 2002, Zafin offers a SaaS product and pricing platform that simplifies core modernization for top banks worldwide. Our platform enables business users to work collaboratively to design and manage pricing, products, and packages, while technologists streamline core banking systems.
With Zafin, banks accelerate time to market for new products and offers while lowering the cost of change and achieving tangible business and risk outcomes. The Zafin platform increases business agility while enabling personalized pricing and dynamic responses to evolving customer and market needs.
Zafin is headquartered in Vancouver, Canada, with offices and customers around the globe including ING, CIBC, HSBC, Wells Fargo, PNC, and ANZ. Zafin is proud to be recognized as a top employer and certified Great Place to Work® in Canada, India and the UK.
Job Summary:
We are looking for a highly skilled and detail-oriented Data & Visualisation Specialist to join our team. The ideal candidate will have a strong background in Business Intelligence (BI), data analysis, and visualisation, with advanced technical expertise in Azure Data Factory (ADF), SQL, Azure Analysis Services, and Power BI. In this role, you will be responsible for performing ETL operations, designing interactive dashboards, and delivering actionable insights to support strategic decision-making.
Key Responsibilities:
· Azure Data Factory: Design, build, and manage ETL pipelines in Azure Data Factory to facilitate seamless data integration across systems.
· SQL & Data Management: Develop and optimize SQL queries for extracting, transforming, and loading data while ensuring data quality and accuracy.
· Data Transformation & Modelling: Build and maintain data models using Azure Analysis Services (AAS), optimizing for performance and usability.
· Power BI Development: Create, maintain, and enhance complex Power BI reports and dashboards tailored to business requirements.
· DAX Expertise: Write and optimize advanced DAX queries and calculations to deliver dynamic and insightful reports.
· Collaboration: Work closely with stakeholders to gather requirements, deliver insights, and help drive data-informed decision-making across the organization.
· Attention to Detail: Ensure data consistency and accuracy through rigorous validation and testing processes.
· Presentation & Reporting: Effectively communicate insights and updates to stakeholders, delivering clear and concise documentation.
Skills and Qualifications:
Technical Expertise:
· Proficient in Azure Data Factory for building ETL pipelines and managing data flows.
· Strong experience with SQL, including query optimization and data transformation.
· Knowledge of Azure Analysis Services for data modelling.
· Advanced Power BI skills, including DAX, report development, and data modelling.
· Familiarity with Microsoft Fabric and Azure Analytics (a plus).
· Analytical Thinking: Ability to work with complex datasets, identify trends, and tackle ambiguous challenges effectively.
Communication Skills:
· Excellent verbal and written communication skills, with the ability to convey complex technical information to non-technical stakeholders.
· Educational Qualification: Minimum of a Bachelor's degree, preferably in a quantitative field such as Mathematics, Statistics, Computer Science, Engineering, or a related discipline
What’s in it for you
Joining our team means being part of a culture that values diversity, teamwork, and high-quality work. We offer competitive salaries, annual bonus potential, generous paid time off, paid volunteering days, wellness benefits, and robust opportunities for professional growth and career advancement.

Experience: 5-8 Years
Work Mode: Remote
Job Type: Full-time
Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elasticsearch, and AWS.
Role Overview:
We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture
Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing technologies.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines.
- A minimum of 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elastic Search and its application in data pipelines.
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
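To ground the Snowflake, S3, and Python skills listed above, here is a minimal sketch (under assumed resource names, not this team's real pipeline) that uses the snowflake-connector-python package to load files already staged in S3 into a Snowflake table with a COPY INTO statement.

```python
# Minimal sketch: load staged S3 files into Snowflake with COPY INTO.
# Connection parameters, stage, and table names are hypothetical placeholders.
import os

import snowflake.connector


def load_events_from_s3() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # Assumes an external stage (e.g. @events_stage) already points at the S3 bucket.
        cur.execute(
            """
            COPY INTO raw_events
            FROM @events_stage/events/
            FILE_FORMAT = (TYPE = 'JSON')
            ON_ERROR = 'ABORT_STATEMENT'
            """
        )
        print(cur.fetchall())  # per-file load results returned by COPY INTO
    finally:
        conn.close()


if __name__ == "__main__":
    load_events_from_s3()
```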
Job Description :
As a Data & Analytics Architect, you will lead key data initiatives, including cloud transformation, data governance, and AI projects. You'll define cloud architectures, guide data science teams in model development, and ensure alignment with data architecture principles across complex solutions. Additionally, you will create and govern architectural blueprints, ensuring standards are met and promoting best practices for data integration and consumption.
Responsibilities :
- Play a key role in driving a number of data and analytics initiatives including cloud data transformation, data governance, data quality, data standards, CRM, MDM, Generative AI and data science.
- Define cloud reference architectures to promote reusable patterns and promote best practices for data integration and consumption.
- Guide the data science team in implementing data models and analytics models.
- Serve as a data science architect delivering technology and architecture services to the data science community.
- In addition, you will also guide application development teams in the data design of complex solutions, in a large data eco-system, and ensure that teams are in alignment with the data architecture principles, standards, strategies, and target states.
- Create, maintain, and govern architectural views and blueprints depicting the Business and IT landscape in its current, transitional, and future state.
- Define and maintain standards for artifacts containing architectural content within the operating model.
Requirements :
- Strong cloud data architecture knowledge (preference for Microsoft Azure)
- 8-10+ years of experience in data architecture, with proven experience in cloud data transformation, MDM, data governance, and data science capabilities.
- Design reusable data architecture and best practices to support batch/streaming ingestion; efficient batch, real-time, and near real-time integration/ETL; integration of quality rules; and structuring of data for analytic consumption by end users.
- Ability to lead software evaluations, including RFP development, capabilities assessment, formal scoring models, and delivery of executive presentations supporting a final recommendation.
- Well versed in the Data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Standards, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, non-traditional data and multi-media, ETL, ESB).
- Experience with cloud data technologies such as Azure Data Factory, Microsoft Fabric, Azure Storage, Azure Data Lake Storage, Azure Databricks, Azure AD, Azure ML, etc.
- Experience with big data technologies such as Cloudera, Spark, Sqoop, Hive, HDFS, Flume, Storm, and Kafka.
We are seeking skilled Data Engineers with prior experience on Azure data engineering services and SQL server to join our team. As a Data Engineer, you will be responsible for designing and implementing robust data infrastructure, building scalable data pipelines, and ensuring efficient data integration and storage.
Experience : 6 - 10 years
Notice : Immediate to 30 days
Responsibilities :
- Design, develop, and maintain scalable data pipelines using Azure Data Factory and Azure Stream Analytics.
- Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
- Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
- Implement data governance and security best practices to ensure compliance and data integrity.
- Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
- Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Qualifications :
- Proven experience as a Data Engineer or in a similar role.
- Experience in designing and hands-on development in cloud-based (AWS/Azure) analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, Azure App Service, Azure Databricks, Azure IoT, Azure HDInsight with Spark, and Azure Stream Analytics.
- Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential.
- Experience with SQL Server and stored procedures.
- Thorough understanding of Azure infrastructure offerings.
- Strong experience with common data warehouse modelling principles, including Kimball and Inmon.
- Familiarity with additional modern database technologies and terminology.
- Working knowledge of Python, Java, or Scala is desirable.
- Strong knowledge of data modelling, ETL processes, and database technologies.
- Experience with big data processing frameworks (Hadoop, Spark) and data pipeline orchestration tools (Airflow).
- Solid understanding of data governance, data security, and data quality best practices.
- Strong analytical and problem-solving skills, with attention to detail.
Role & Responsibilities
About the Role:
We are seeking a highly skilled Senior Data Engineer with 5-7 years of experience to join our dynamic team. The ideal candidate will have a strong background in data engineering, with expertise in data warehouse architecture, data modeling, ETL processes, and building both batch and streaming pipelines. The candidate should also possess advanced proficiency in Spark, Databricks, Kafka, Python, SQL, and Change Data Capture (CDC) methodologies.
Key responsibilities:
Design, develop, and maintain robust data warehouse solutions to support the organization's analytical and reporting needs.
Implement efficient data modeling techniques to optimize performance and scalability of data systems.
Build and manage data lakehouse infrastructure, ensuring reliability, availability, and security of data assets.
Develop and maintain ETL pipelines to ingest, transform, and load data from various sources into the data warehouse and data lakehouse.
Utilize Spark and Databricks to process large-scale datasets efficiently and in real-time.
Implement Kafka for building real-time streaming pipelines and ensure data consistency and reliability.
Design and develop batch pipelines for scheduled data processing tasks.
Collaborate with cross-functional teams to gather requirements, understand data needs, and deliver effective data solutions.
Perform data analysis and troubleshooting to identify and resolve data quality issues and performance bottlenecks.
Stay updated with the latest technologies and industry trends in data engineering and contribute to continuous improvement initiatives.
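As a hedged sketch of the Spark-plus-Kafka streaming work described in the responsibilities above (not the team's actual code), the snippet below uses PySpark Structured Streaming to read a Kafka topic, parse JSON change events, and append them to a Delta table; the broker address, topic, schema, and paths are assumptions, and Delta Lake availability (e.g. on Databricks) is assumed as well.

```python
# Minimal PySpark Structured Streaming sketch: Kafka topic -> parsed JSON -> Delta table.
# Broker addresses, topic, schema, and output paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

order_schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "orders_cdc")
    .option("startingOffsets", "latest")
    .load()
)

parsed = (
    raw.select(from_json(col("value").cast("string"), order_schema).alias("order"))
    .select("order.*")
)

query = (
    parsed.writeStream.format("delta")  # assumes Delta Lake is available (e.g. on Databricks)
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .outputMode("append")
    .start("/tmp/tables/orders")
)
query.awaitTermination()
```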

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Able to lead and guide a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
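The pipeline responsibilities above can be illustrated with a deliberately small Python sketch: extract records from a REST API, apply a light transformation with pandas, and land the result as a Parquet file ready for a warehouse load. The endpoint, column names, and data-quality rule are hypothetical.

```python
# Minimal ETL sketch: REST API -> pandas transform -> Parquet file for warehouse loading.
# The API URL and field names are hypothetical placeholders.
import pandas as pd
import requests


def extract(url: str) -> list[dict]:
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


def transform(records: list[dict]) -> pd.DataFrame:
    df = pd.DataFrame(records)
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # standardize types
    df = df.dropna(subset=["customer_id"])                       # basic data-quality rule
    return df


def load(df: pd.DataFrame, path: str) -> None:
    # to_parquet assumes pyarrow (or fastparquet) is installed.
    df.to_parquet(path, index=False)  # staged file a warehouse COPY/LOAD job can pick up


if __name__ == "__main__":
    rows = extract("https://api.example.com/v1/orders")
    load(transform(rows), "orders.parquet")
```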
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
Profile: Product Support Engineer
🔴 Experience: 1 year as Product Support Engineer.
🔴 Location: Mumbai (Andheri).
🔴 5 days of working from office.
Skills Required:
🔷 Experience in providing support for ETL or data warehousing is preferred.
🔷 Good understanding of Unix and database concepts.
🔷 Experience working with SQL and NoSQL databases and writing simple queries to get data for debugging issues.
🔷 Able to creatively come up with solutions for various problems and implement them.
🔷 Experience working with REST APIs and debugging requests and responses using tools like Postman.
🔷 Quick troubleshooting and diagnosing skills.
🔷 Knowledge of customer success processes.
🔷 Experience in document creation.
🔷 High availability for fast response to customers.
🔷 Language knowledge required in one of Node.js, Python, or Java.
🔷 Background in AWS, Docker, Kubernetes, and networking is an advantage.
🔷 Experience in SaaS B2B software companies is an advantage.
🔷 Ability to join the dots around multiple events occurring concurrently and spot patterns.
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of data structures
● Good at SQL query/optimization
● Strong fundamentals of OOPs programming
● Good understanding of AWS Cloud, Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB
Job Description for QA Engineer:
- 6-10 years of experience in ETL testing, Snowflake, and DWH concepts.
- Strong SQL knowledge and debugging skills are a must.
- Experience with Azure and Snowflake testing is a plus.
- Experience with Qlik Replicate and Qlik Compose (Change Data Capture) tools is considered a plus.
- Strong data warehousing concepts and experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle.
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to financial domain knowledge is considered a plus.
- Testing data readiness (data quality) and addressing code or data issues.
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrate strong collaborative experience across regions (APAC, EMEA and NA) to effectively and efficiently identify root cause of code/data issues and come up with a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
Key Attributes include:
- Team player with professional and positive approach
- Creative, innovative and able to think outside of the box
- Strong attention to detail during root cause analysis and defect issue resolution
- Self-motivated & self-sufficient
- Effective communicator both written and verbal
- Brings a high level of energy with enthusiasm to generate excitement and motivate the team
- Able to work under pressure with tight deadlines and/or multiple projects
- Experience in negotiation and conflict resolution
Senior ETL Developer in SAS
We are seeking a skilled and experienced ETL Developer with strong SAS expertise to join our growing Data Management team in Kolkata. The ideal candidate will be responsible for designing, developing, implementing, and maintaining ETL processes to extract, transform, and load data from various source systems into a banking data warehouse and other BFSI data repositories. This role requires a strong understanding of banking data warehousing concepts, ETL methodologies, and proficiency in SAS programming for data manipulation and analysis.
Responsibilities:
• Design, develop, and implement ETL solutions using industry best practices and tools, with a strong focus on SAS.
• Develop and maintain SAS programs for data extraction, transformation, and loading.
• Work with source system owners and data analysts to understand data requirements and translate them into ETL specifications.
• Build and maintain data pipelines for the banking database to ensure data quality, integrity, and consistency.
• Perform data profiling, data cleansing, and data validation to ensure accuracy and reliability of data.
• Troubleshoot and resolve the bank's ETL-related issues, including data quality problems and performance bottlenecks.
• Optimize ETL processes for performance and scalability.
• Document ETL processes, data flows, and technical specifications.
• Collaborate with other team members, including data architects, data analysts, and business users.
• Stay up-to-date with the latest SAS-related ETL technologies and best practices, particularly within the banking and financial services domain.
• Ensure compliance with data governance policies and security standards.
Qualifications:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Proven experience as an ETL Developer, preferably within the banking or financial services industry.
• Strong proficiency in SAS programming for data manipulation and ETL processes.
• Experience with other ETL tools (e.g., Informatica PowerCenter, DataStage, Talend) is a plus.
• Solid understanding of data warehousing concepts, including dimensional modeling (star schema, snowflake schema).
• Experience working with relational databases (e.g., Oracle, SQL Server) and SQL.
• Familiarity with data quality principles and practices.
• Excellent analytical and problem-solving skills.
• Strong communication and interpersonal skills.
• Ability to work independently and as part of a team.
• Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
• Understanding of regulatory requirements in the banking sector (e.g., RBI guidelines) is an advantage.
Preferred Skills:
• Experience with cloud-based data warehousing solutions (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Knowledge of big data technologies (e.g., Hadoop, Spark).
• Experience with agile development methodologies.
• Relevant certifications (e.g., SAS Certified Professional).
What We Offer:
• Competitive salary and benefits package.
• Opportunity to work with cutting-edge technologies in a dynamic environment.
• Exposure to the banking and financial services domain.
• Professional development and growth opportunities.
• A collaborative and supportive work culture.
Overview:
We are seeking a talented and experienced GCP Data Engineer with strong expertise in Teradata, ETL, and Data Warehousing to join our team. As a key member of our Data Engineering team, you will play a critical role in developing and maintaining data pipelines, optimizing ETL processes, and managing large-scale data warehouses on the Google Cloud Platform (GCP).
Responsibilities:
- Design, implement, and maintain scalable ETL pipelines on GCP (Google Cloud Platform).
- Develop and manage data warehouse solutions using Teradata and cloud-based technologies (BigQuery, Cloud Storage, etc.).
- Build and optimize high-performance data pipelines for real-time and batch data processing.
- Integrate, transform, and load large datasets into GCP-based data lakes and data warehouses.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Write efficient, clean, and reusable code for ETL processes and data workflows.
- Ensure data quality, consistency, and integrity across all pipelines and storage solutions.
- Implement data governance practices and ensure security and compliance of data processes.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Participate in the design and implementation of scalable data architectures using GCP services like BigQuery, Cloud Dataflow, and Cloud Pub/Sub.
- Optimize and automate data workflows for continuous improvement.
- Maintain up-to-date documentation of data pipeline architectures and processes.
Requirements:
Technical Skills:
- Google Cloud Platform (GCP): Extensive experience with BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Composer.
- ETL Tools: Expertise in building ETL pipelines using tools such as Apache NiFi, Apache Beam, or custom Python-based scripts.
- Data Warehousing: Strong experience working with Teradata for data warehousing, including data modeling, schema design, and performance tuning.
- SQL: Advanced proficiency in SQL and relational databases, particularly in the context of Teradata and GCP environments.
- Programming: Proficient in Python, Java, or Scala for building and automating data processes.
- Data Architecture: Knowledge of best practices in designing scalable data architectures for both structured and unstructured data.
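As a small, hedged illustration of the Python-plus-BigQuery scripting listed above, the sketch below runs a parameterized query with the google-cloud-bigquery client; the project, dataset, and table names are assumptions.

```python
# Minimal sketch: run a parameterized BigQuery query with the google-cloud-bigquery client.
# Project, dataset, and table names are hypothetical placeholders.
import datetime

from google.cloud import bigquery


def daily_order_counts(min_date: datetime.date) -> list[dict]:
    client = bigquery.Client()  # uses application-default credentials
    query = """
        SELECT order_date, COUNT(*) AS orders
        FROM `my-project.sales.orders`
        WHERE order_date >= @min_date
        GROUP BY order_date
        ORDER BY order_date
    """
    job_config = bigquery.QueryJobConfig(
        query_parameters=[bigquery.ScalarQueryParameter("min_date", "DATE", min_date)]
    )
    rows = client.query(query, job_config=job_config).result()
    return [dict(row) for row in rows]


if __name__ == "__main__":
    for row in daily_order_counts(datetime.date(2024, 1, 1)):
        print(row)
```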
Experience:
- Proven experience as a Data Engineer, with a focus on building and managing ETL pipelines and data warehouse solutions.
- Hands-on experience in data modeling and working with complex, high-volume data in a cloud-based environment.
- Experience with data migration from on-premises to cloud environments (Teradata to GCP).
- Familiarity with Data Lake concepts and technologies.
- Experience with version control systems like Git and working in Agile environments.
- Knowledge of CI/CD and automation processes in data engineering.
Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills, both verbal and written, for interacting with technical and non-technical teams.
- Ability to work collaboratively in a fast-paced, cross-functional team environment.
- Strong attention to detail and ability to prioritize tasks.
Preferred Qualifications:
- Experience with other GCP tools such as Dataproc, Bigtable, Cloud Functions.
- Knowledge of Terraform or similar infrastructure-as-code tools for managing cloud resources.
- Familiarity with data governance frameworks and data privacy regulations.
- Certifications in Google Cloud or Teradata are a plus.
Benefits:
- Competitive salary and performance-based bonuses.
- Health, dental, and vision insurance.
- 401(k) with company matching.
- Paid time off and flexible work schedules.
- Opportunities for professional growth and development.
Job Title: Data Analytics Engineer
Experience: 3 to 6 years
Location: Gurgaon (Hybrid)
Employment Type: Full-time
Job Description:
We are seeking a highly skilled Data Analytics Engineer with expertise in Qlik Replicate, Qlik Compose, and Data Warehousing to build and maintain robust data pipelines. The ideal candidate will have hands-on experience with Change Data Capture (CDC) pipelines from various sources, an understanding of Bronze, Silver, and Gold data layers, SQL querying for data warehouses like Amazon Redshift, and experience with Data Lakes using S3. A foundational understanding of Apache Parquet and Python is also desirable.
Key Responsibilities:
1. Data Pipeline Development & Maintenance
- Design, develop, and maintain ETL/ELT pipelines using Qlik Replicate and Qlik Compose.
- Ensure seamless data replication and transformation across multiple systems.
- Implement and optimize CDC-based data pipelines from various source systems.
2. Data Layering & Warehouse Management
- Implement Bronze, Silver, and Gold layer architectures to optimize data workflows.
- Design and manage data pipelines for structured and unstructured data.
- Ensure data integrity and quality within Redshift and other analytical data stores.
3. Database Management & SQL Development
- Write, optimize, and troubleshoot complex SQL queries for data warehouses like Redshift.
- Design and implement data models that support business intelligence and analytics use cases.
4. Data Lakes & Storage Optimization
- Work with AWS S3-based Data Lakes to store and manage large-scale datasets.
- Optimize data ingestion and retrieval using Apache Parquet.
5. Data Integration & Automation
- Integrate diverse data sources into a centralized analytics platform.
- Automate workflows to improve efficiency and reduce manual effort.
- Leverage Python for scripting, automation, and data manipulation where necessary.
6. Performance Optimization & Monitoring
- Monitor data pipelines for failures and implement recovery strategies.
- Optimize data flows for better performance, scalability, and cost-effectiveness.
- Troubleshoot and resolve ETL and data replication issues proactively.
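The storage-optimization work described in section 4 above can be sketched roughly as follows: write a DataFrame to an S3 data lake as partitioned Parquet so that downstream Redshift Spectrum or Athena-style queries scan less data. The bucket, prefix, and column names are assumptions, and pandas with pyarrow and s3fs is assumed to be available.

```python
# Minimal sketch: land a DataFrame in an S3 data lake as partitioned Parquet.
# Bucket, prefix, and column names are hypothetical placeholders; requires pandas + pyarrow + s3fs.
import pandas as pd


def write_partitioned(df: pd.DataFrame) -> None:
    df["event_date"] = pd.to_datetime(df["event_ts"]).dt.date.astype(str)
    df.to_parquet(
        "s3://analytics-lake/silver/events/",  # hypothetical Silver-layer location
        partition_cols=["event_date"],         # one folder per day keeps scans narrow
        index=False,
    )


if __name__ == "__main__":
    sample = pd.DataFrame(
        {"event_id": [1, 2], "event_ts": ["2024-05-01T10:00:00", "2024-05-02T11:30:00"]}
    )
    write_partitioned(sample)
```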
Technical Expertise Required:
- 3 to 6 years of experience in Data Engineering, ETL Development, or related roles.
- Hands-on experience with Qlik Replicate & Qlik Compose for data integration.
- Strong SQL expertise, with experience in writing and optimizing queries for Redshift.
- Experience working with Bronze, Silver, and Gold layer architectures.
- Knowledge of Change Data Capture (CDC) pipelines from multiple sources.
- Experience working with AWS S3 Data Lakes.
- Experience working with Apache Parquet for data storage optimization.
- Basic understanding of Python for automation and data processing.
- Experience in cloud-based data architectures (AWS, Azure, GCP) is a plus.
- Strong analytical and problem-solving skills.
- Ability to work in a fast-paced, agile environment.
Preferred Qualifications:
- Experience in performance tuning and cost optimization in Redshift.
- Familiarity with big data technologies such as Spark or Hadoop.
- Understanding of data governance and security best practices.
- Exposure to data visualization tools such as Qlik Sense, Tableau, or Power BI.
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Position Overview
We are seeking an experienced SAP Cutover/Data Migration Consultant with over 5 years of expertise in managing end-to-end cutover activities and data migration processes in SAP implementation or upgrade projects. The ideal candidate will have a deep understanding of SAP data structures, migration tools, and methodologies, along with exceptional project management and collaboration skills.
Key Responsibilities :
Cutover Planning and Execution
Develop detailed cutover plans, including timelines, dependencies, roles, and responsibilities.
Coordinate with business, technical, and functional teams to ensure seamless execution of cutover activities.
Identify risks and develop mitigation strategies for a smooth transition to the production environment.
Execute and monitor the cutover plan during go-live, ensuring minimal business disruption.
Data Migration Management
Lead the end-to-end SAP data migration process, including extraction, transformation, cleansing, validation, and loading.
Work closely with business stakeholders to define data migration scope, strategies, and rules.
Develop and execute data mapping and transformation scripts using tools like LSMW, BODS, or SAP Migration Cockpit.
Ensure data quality and integrity by performing rigorous testing and validation activities.
Testing and Validation
Collaborate with functional and technical teams to define and execute data migration test plans.
Perform mock migrations, reconciliation, and post-load validation to ensure successful data loads.
Resolve data discrepancies and provide root cause analysis during testing phases.
Documentation and Reporting
Prepare and maintain detailed documentation, including cutover plans, data migration scripts, and issue logs.
Provide regular status updates to project stakeholders on cutover and migration progress.
Stakeholder Collaboration
Act as a liaison between business users, technical teams, and project managers to ensure alignment of migration activities.
Conduct workshops and training sessions for business users on data readiness and cutover processes.
Required Skills & Qualifications
Education: Bachelor’s degree in Computer Science, Information Technology, or related field.
Experience: Minimum 5+ years of experience in SAP cutover planning and data migration.
Technical Skills:
Expertise in SAP data migration tools such as LSMW, SAP BODS, SAP Data Migration Cockpit, or custom ETL tools.
Strong knowledge of SAP modules (e.g., SD, MM, FICO) and their data structures.
Proficiency in data extraction, transformation, and loading techniques.
Experience with S/4HANA migrations and understanding of HANA-specific data structures (preferred).
Hands-on experience in managing legacy system data extraction and reconciliation processes.
Soft Skills:
Strong analytical, organizational, and problem-solving skills.
Excellent communication and interpersonal skills to collaborate with cross-functional teams.
Ability to manage multiple priorities and deliver within tight timelines.
Preferred Certifications:
SAP Certified Application Associate – Data Migration.
SAP Activate Project Manager Certification (preferred).
Skills & Requirements
Cutover planning, execution, SAP data migration, LSMW, SAP BODS, SAP Migration Cockpit, ETL tools, SAP SD, SAP MM, SAP FICO, data extraction, data transformation, data loading, S/4HANA migration, HANA data structures, Legacy system data reconciliation, Analytical skills, Communication skills.
5 years of experience with Power BI as a developer.
Design, develop, and maintain interactive and visually appealing Power BI dashboards and reports.
Translate business requirements into technical specifications for BI solutions.
Implement advanced Power BI features such as calculated measures, KPIs, and custom visuals.
Strong proficiency in Power BI, including Power Query, DAX, and custom visuals.
Experience with data modeling, ETL processes, and creating relationships in datasets.
Knowledge of SQL for querying and manipulating data.
Familiarity with connecting Power BI to cloud services such as Azure.
Understanding of data warehousing concepts and BI architecture.

Job Summary:
We are seeking a skilled Senior Tableau Developer to join our data team. In this role, you will design and build interactive dashboards, collaborate with data teams to deliver impactful insights, and optimize data pipelines using Airflow. If you are passionate about data visualization, process automation, and driving business decisions through analytics, we want to hear from you.
Key Responsibilities:
- Develop and maintain dynamic Tableau dashboards and visualizations to provide actionable business insights.
- Partner with data teams to gather reporting requirements and translate them into effective data solutions.
- Ensure data accuracy by integrating various data sources and optimizing data pipelines.
- Utilize Airflow for task orchestration, workflow scheduling, and monitoring.
- Enhance dashboard performance by streamlining data processing and improving query efficiency.
Requirements:
- 5+ years of hands-on experience in Tableau development.
- Proficiency in Airflow for building and automating data pipelines.
- Strong skills in data transformation, ETL processes, and data modeling.
- Solid understanding of SQL and database management.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
Nice to Have:
- Experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with programming languages such as Python or R.
Why Join Us?
- Work on impactful data projects with a talented and collaborative team.
- Opportunity to innovate and shape data visualization strategies.
- Competitive compensation and professional growth opportunities

Job Title : Sr. Data Engineer
Experience : 5+ Years
Location : Noida (Hybrid – 3 Days in Office)
Shift Timing : 2-11 PM
Availability : Immediate
Job Description :
- We are seeking a Senior Data Engineer to design, develop, and optimize data solutions.
- The role involves building ETL pipelines, integrating data into BI tools, and ensuring data quality while working with SQL, Python (Pandas, NumPy), and cloud platforms (AWS/GCP).
- You will also develop dashboards using Looker Studio and work with AWS services like S3, Lambda, Glue ETL, Athena, RDS, and Redshift.
- Strong debugging, collaboration, and communication skills are essential.
Job Title: Data Engineer (Python, AWS, ETL)
Experience: 6+ years
Location: PAN India (Remote / Work From Home)
Employment Type: Full-time
Preferred domain: Real Estate
Key Responsibilities:
Develop and optimize ETL workflows using Python, Pandas, and PySpark.
Design and implement SQL queries for data extraction, transformation, and optimization.
Work with JSON and REST APIs for data integration and automation.
Manage and optimize Amazon S3 storage, including partitioning and lifecycle policies.
Utilize AWS Athena for SQL-based querying, performance tuning, and cost optimization.
Develop and maintain AWS Lambda functions for serverless processing.
Manage databases using Amazon RDS and Amazon DynamoDB, ensuring performance and scalability.
Orchestrate workflows with AWS Step Functions for efficient automation.
Implement Infrastructure as Code (IaC) using AWS CloudFormation for resource provisioning.
Set up AWS Data Pipelines for CI/CD deployment of data workflows.
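A minimal sketch of the Athena querying mentioned above, using boto3 to submit a query and poll until it finishes; the database, table, and S3 output location are hypothetical placeholders rather than real project resources.

```python
# Minimal sketch: submit an Athena query with boto3 and wait for it to finish.
# Database, table, and output-location names are hypothetical placeholders.
import time

import boto3


def run_athena_query(sql: str) -> str:
    athena = boto3.client("athena")
    start = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/queries/"},
    )
    query_id = start["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)  # simple polling; Step Functions could orchestrate this instead


if __name__ == "__main__":
    print(run_athena_query("SELECT COUNT(*) FROM listings WHERE city = 'Pune'"))
```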
Required Skills:
Programming & Scripting: Python (ETL, Automation, REST API Integration).
Databases: SQL (Athena / RDS), Query Optimization, Schema Design.
Big Data & Processing: Pandas, PySpark (Data Transformation, Aggregation).
Cloud & Storage: AWS (S3, Athena, RDS, DynamoDB, Step Functions, CloudFormation, Lambda, Data Pipelines).
Good to Have Skills:
Experience with Azure services such as Table Storage, AI Search, Cognitive Services, Functions, Service Bus, and Storage.
Qualifications:
Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field.
6+ years of experience in data engineering, ETL, and cloud-based data processing.

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
☁️ Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, dbt, and GCP services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!

We are looking for a skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. You will play a pivotal role in optimizing data flow, ensuring scalability, and enabling seamless access to structured and unstructured data across the organization. This role requires technical expertise in Python, SQL, ETL/ELT frameworks, and cloud data warehouses, along with strong collaboration skills to partner with cross-functional teams.
Company: BigThinkCode Technologies
URL:
Location: Chennai (Work from office / Hybrid)
Experience: 4 - 6 years
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to process structured and unstructured data.
- Optimize and manage SQL queries for performance and efficiency in large-scale datasets.
- Experience working with data warehouse solutions (e.g., Redshift, BigQuery, Snowflake) for analytics and reporting.
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
- Experience implementing solutions for streaming data (e.g., Apache Kafka, AWS Kinesis) is preferred but not mandatory.
- Ensure data quality, governance, and security across pipelines and storage systems.
- Document architectures, processes, and workflows for clarity and reproducibility.
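To illustrate the large-dataset SQL handling described above in a hedged way, the sketch below streams rows out of a relational source in chunks with SQLAlchemy and pandas instead of loading an entire table into memory; the connection string, table, and output paths are placeholders.

```python
# Minimal sketch: chunked extraction from an RDBMS so large tables never sit fully in memory.
# The connection string, table, and output paths are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://etl_user:secret@db-host:5432/appdb")

chunks = pd.read_sql_query(
    "SELECT id, customer_id, amount, created_at FROM orders",
    con=engine,
    chunksize=50_000,  # yields DataFrames of 50k rows at a time
)

for i, chunk in enumerate(chunks):
    chunk["amount"] = chunk["amount"].fillna(0)          # light cleanup per chunk
    chunk.to_parquet(f"orders_part_{i:04d}.parquet", index=False)
```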
Required Technical Skills:
- Proficiency in Python for scripting, automation, and pipeline development.
- Expertise in SQL (complex queries, optimization, and database design).
- Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, dbt, AWS Glue).
- Experience working with structured data (RDBMS) and unstructured data (JSON, Parquet, Avro).
- Familiarity with cloud-based data warehouses (Redshift, BigQuery, Snowflake).
- Knowledge of version control systems (e.g., Git) and CI/CD practices.
Preferred Qualifications:
- Experience with streaming data technologies (e.g., Kafka, Kinesis, Spark Streaming).
- Exposure to cloud platforms (AWS, GCP, Azure) and their data services.
- Understanding of data modelling (dimensional, star schema) and optimization techniques.
Soft Skills:
- Team player with a collaborative mindset and ability to mentor junior engineers.
- Strong stakeholder management skills to align technical solutions with business goals.
- Excellent communication skills to explain technical concepts to non-technical audiences.
- Proactive problem-solving and adaptability in fast-paced environments.
If interested, apply or reply with your updated profile so we can connect and discuss.
Regards
We are seeking a highly skilled and experienced Power BI Lead / Architect to join our growing team. The ideal candidate will have a strong understanding of data warehousing, data modeling, and business intelligence best practices. This role will be responsible for leading the design, development, and implementation of complex Power BI solutions that provide actionable insights to key stakeholders across the organization.
Location - Pune (Hybrid 3 days)
Responsibilities:
Lead the design, development, and implementation of complex Power BI dashboards, reports, and visualizations.
Develop and maintain data models (star schema, snowflake schema) for optimal data analysis and reporting.
Perform data analysis, data cleansing, and data transformation using SQL and other ETL tools.
Collaborate with business stakeholders to understand their data needs and translate them into effective and insightful reports.
Develop and maintain data pipelines and ETL processes to ensure data accuracy and consistency.
Troubleshoot and resolve technical issues related to Power BI dashboards and reports.
Provide technical guidance and mentorship to junior team members.
Stay abreast of the latest trends and technologies in the Power BI ecosystem.
Ensure data security, governance, and compliance with industry best practices.
Contribute to the development and improvement of the organization's data and analytics strategy.
May lead and mentor a team of junior Power BI developers.
Qualifications:
8-12 years of experience in Business Intelligence and Data Analytics.
Proven expertise in Power BI development, including DAX and advanced data modeling techniques.
Strong SQL skills, including writing complex queries, stored procedures, and views.
Experience with ETL/ELT processes and tools.
Experience with data warehousing concepts and methodologies.
Excellent analytical, problem-solving, and communication skills.
Strong teamwork and collaboration skills.
Ability to work independently and proactively.
Bachelor's degree in Computer Science, Information Systems, or a related field preferred.
Experience: 4+ years.
Location: Vadodara & Pune
Skill Set: Snowflake, Power BI, ETL, SQL, Data Pipelines
What you'll be doing:
- Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
- Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion.
- Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
- Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
- Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
- Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
- Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
- Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
- Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.
What you need:
Basic Skills:
- 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
- Strong experience with Apache Kafka for stream processing and real-time data integration.
- Proficiency in SQL and ETL/ELT processes.
- Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
- Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
- Knowledge of data governance, security, and compliance best practices.
- Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
- Ability to work in a collaborative team environment and communicate effectively with cross-functional teams.
Responsibilities:
- Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
- Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
- Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
- Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion (a simplified Python sketch of this flow follows this list).
- Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
- Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
- Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
- Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
- Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
- Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform.
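For illustration only, the sketch below outlines a simplified Kafka-to-Snowflake micro-batch loader in Python. The topic, table, and connection values are hypothetical, and a production design would more commonly rely on Kafka Connect with the Snowflake sink connector or Snowpipe, as described above.

```python
# Hypothetical topic, table, and credentials; shown as a micro-batch pattern.
import json

import snowflake.connector
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                  # hypothetical topic
    bootstrap_servers="broker:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=False,
)

conn = snowflake.connector.connect(
    account="my_account",                      # hypothetical connection details
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

INSERT_SQL = "INSERT INTO RAW_ORDERS (ORDER_ID, PAYLOAD) VALUES (%s, %s)"

batch = []
for message in consumer:
    batch.append((message.value.get("order_id"), json.dumps(message.value)))
    if len(batch) >= 500:                      # small micro-batches keep latency low
        with conn.cursor() as cur:
            cur.executemany(INSERT_SQL, batch)
        consumer.commit()                      # commit offsets only after a successful load
        batch.clear()
```

Committing Kafka offsets only after the Snowflake write succeeds gives at-least-once delivery; downstream deduplication (for example, a MERGE keyed on ORDER_ID) would be needed for effectively exactly-once behaviour.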

Clients are located in Bangalore, Chennai & Pune.

Role: Ab Initio Developer
Experience: 2.5 years (mandatory minimum) to 8 years
Skills: Ab Initio Development
Location: Chennai/Bangalore/Pune
Only candidates who can join immediately or within 15 days will be considered.
Candidates must be available for an in-person interview.
This is a long-term contract role with IBM; Arnold is the payrolling company.
JOB DESCRIPTION:
We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.
Key Responsibilities:
· Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.
· Analyze complex data requirements and translate them into effective Ab Initio designs.
· Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.
· Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.
· Optimize performance and scalability of Ab Initio jobs.
· Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.
· Stay up-to-date with the latest Ab Initio technologies and industry best practices.
Required Skills and Experience:
· 2.5 to 8 years of hands-on experience in Ab Initio development.
· Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.
· Proficiency in Ab Initio's graphical interface and command-line tools.
· Experience in data modeling, data warehousing, and ETL concepts.
· Strong SQL skills and experience with relational databases.
· Excellent problem-solving and analytical skills.
· Ability to work independently and as part of a team.
· Strong communication and documentation skills.
Preferred Skills:
· Experience with cloud-based data integration platforms.
· Knowledge of data quality and data governance concepts.
· Experience with scripting languages (e.g., Python, Shell scripting).
· Certification in Ab Initio or related technologies.
Job Title : Senior AWS Data Engineer
Experience : 5+ Years
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Senior AWS Data Engineer with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Key Responsibilities:
- Lead Data Engineering Team: Provide leadership and mentorship to junior data engineers and ensure best practices in data architecture and pipeline design.
- Data Pipeline Development: Design, implement, and maintain end-to-end ETL (Extract, Transform, Load) processes to support analytics, reporting, and data science activities.
- Cloud Architecture (GCP): Architect and optimize data infrastructure on Google Cloud Platform (GCP), ensuring scalability, reliability, and performance of data systems.
- CI/CD Pipelines: Implement and maintain CI/CD pipelines using Jenkins and other tools to ensure the seamless deployment and automation of data workflows.
- Data Warehousing: Design and implement data warehousing solutions, ensuring optimal performance and efficient data storage using technologies like Teradata, Oracle, and SQL Server.
- Workflow Orchestration: Use Apache Airflow to orchestrate complex data workflows and scheduling of data pipeline jobs.
- Automation with Terraform: Implement Infrastructure as Code (IaC) using Terraform to provision and manage cloud resources.
Share CV to: Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three
Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Dear Candidate,
We are urgently hiring QA Automation Engineers and Test Leads in Hyderabad and Bangalore.
Exp: 6-10 yrs
Locations: Hyderabad, Bangalore
JD:
We are hiring Automation Testers with 6-10 years of automation testing experience using QA automation tools and technologies such as Java, UFT, Selenium, API testing, ETL, and others.
Must Haves:
· Experience in Financial Domain is a must
· Extensive hands-on experience designing, implementing, and maintaining automation frameworks using Java, UFT, ETL, and Selenium tools and automation concepts.
· Experience with AWS concepts and framework design/testing.
· Experience in Data Analysis, Data Validation, Data Cleansing, Data Verification and identifying data mismatch.
· Experience with Databricks, Python, Spark, Hive, Airflow, etc.
· Experience in validating and analyzing Kubernetes log files.
· API testing experience
· Backend testing skills with ability to write SQL queries in Databricks and in Oracle databases
· Experience in working with globally distributed Agile project teams
· Ability to work in a fast-paced, globally structured and team-based environment, as well as independently
· Experience in test management tools like Jira
· Good written and verbal communication skills
Good To have:
- Business and finance knowledge desirable
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Worldwide Locations: USA | HK | IN
Job Description :
Job Title : Data Engineer
Location : Pune (Hybrid Work Model)
Experience Required : 4 to 8 Years
Role Overview :
We are seeking talented and driven Data Engineers to join our team in Pune. The ideal candidate will have a strong background in data engineering with expertise in Python, PySpark, and SQL. You will be responsible for designing, building, and maintaining scalable data pipelines and systems that empower our business intelligence and analytics initiatives.
Key Responsibilities:
- Develop, optimize, and maintain ETL pipelines and data workflows.
- Design and implement scalable data solutions using Python, PySpark, and SQL.
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.
- Work on hybrid data environments involving on-premise and cloud-based systems.
- Assist in the deployment and maintenance of big data solutions.
Required Skills and Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- 4 to 8 Years of experience in Data Engineering or related roles.
- Proficiency in Python and PySpark for data processing and analysis.
- Strong SQL skills with experience in writing complex queries and optimizing performance.
- Familiarity with data pipeline tools and frameworks.
- Knowledge of cloud platforms such as AWS, Azure, or GCP is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
Preferred Qualifications:
- Experience with big data technologies like Hadoop, Hive, or Spark.
- Familiarity with data visualization tools and techniques.
- Knowledge of CI/CD pipelines and DevOps practices in a data engineering context.
Work Model:
- This position follows a hybrid work model, with candidates expected to work from the Pune office as per business needs.
Why Join Us?
- Opportunity to work with cutting-edge technologies.
- Collaborative and innovative work environment.
- Competitive compensation and benefits.
- Clear career progression and growth opportunities.
Job Description: Data Engineer
Location: Remote
Experience Required: 6 to 12 years in Data Engineering
Employment Type: [Full-time]
Notice: Looking for candidates who can join immediately or within 15 days at most.
About the Role:
We are looking for a highly skilled Data Engineer with extensive experience in Python, Databricks, and Azure services. The ideal candidate will have a strong background in building and optimizing ETL processes, managing large-scale data infrastructures, and implementing data transformation and modeling tasks.
Key Responsibilities:
ETL Development:
Use Python as an ETL tool to read data from various sources, perform data type transformations, handle errors, implement logging mechanisms, and load data into Databricks-managed Delta tables (a simplified sketch of this pattern follows this Key Responsibilities section).
Develop robust data pipelines to support analytics and reporting needs.
Data Transformation & Optimization:
Perform data transformations and evaluations within Databricks.
Work on optimizing data workflows for performance and scalability.
Azure Expertise:
Implement and manage Azure services, including Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.
Coding & Development:
Utilize Python for complex tasks involving classes, objects, methods, dictionaries, loops, packages, wheel files, and database connectivity.
Write scalable and maintainable code to manage streaming and batch data processing.
Cloud & Infrastructure Management:
Leverage Spark, Scala, and cloud-based solutions to design and maintain large-scale data infrastructures.
Work with cloud data warehouses, data lakes, and storage formats.
Project Leadership:
Lead data engineering projects and collaborate with cross-functional teams to deliver solutions on time.
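For illustration only, the sketch below shows one way the Python/Databricks ETL pattern described above might look: read from a source, cast types, log and quarantine bad records, and append to a Delta table. The source path, table, and column names are hypothetical and the logic is deliberately minimal.

```python
# Hypothetical source path, table, and column names; logging and error handling
# are intentionally minimal.
import logging

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("payments_etl")

spark = SparkSession.builder.appName("payments_etl").getOrCreate()

try:
    raw = spark.read.json("/mnt/raw/payments/")

    typed = (
        raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("event_ts", F.to_timestamp("event_ts"))
    )

    # Quarantine rows that failed type conversion instead of failing the whole job.
    bad_rows = typed.filter(F.col("amount").isNull() | F.col("event_ts").isNull())
    good_rows = typed.exceptAll(bad_rows)
    log.info("Rejected %d malformed rows", bad_rows.count())

    good_rows.write.format("delta").mode("append").saveAsTable("bronze.payments")
    log.info("Append to bronze.payments completed")
except Exception:
    log.exception("payments_etl failed")
    raise
```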
Required Skills & Qualifications:
Technical Proficiency:
- Expertise in Python for ETL and data pipeline development.
- Strong experience with Databricks and Apache Spark.
- Proven skills in handling Azure services, including Azure SQL Database, Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake.
Experience & Knowledge:
- Minimum 6+ years of experience in data engineering.
- Solid understanding of data modeling, ETL processes, and optimizing data pipelines.
- Familiarity with Unix shell scripting and scheduling tools.
Other Skills:
- Knowledge of cloud warehouses and storage formats.
- Experience in handling large-scale data infrastructures and streaming data.
Preferred Qualifications:
- Proven experience with Spark and Scala for big data processing.
- Prior experience in leading or mentoring data engineering teams.
- Hands-on experience with end-to-end project lifecycle in data engineering.
What We Offer:
- Opportunity to work on challenging and impactful data projects.
- A collaborative and innovative work environment.
- Competitive compensation and benefits.
How to Apply:

Description
Come Join Us
Experience.com - We make every experience matter more
Position: Senior GCP Data Engineer
Job Location: Chennai (Base Location) / Remote
Employment Type: Full Time
Summary of Position
A Senior Data Engineer is a professional who specializes in preparing big data infrastructure for analytical or operational uses. They are responsible for developing and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They collaborate with data scientists and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision-making across the organisation.
Responsibilities:
- Collaborate with cross-functional teams to define, prioritize, and execute data engineering initiatives aligned with business objectives.
- Design and implement scalable, reliable, and secure data solutions by industry best practices and compliance requirements.
- Drive the adoption of cloud-native technologies and architectural patterns to optimize the performance, cost, and reliability of data pipelines and analytics solutions.
- Mentor and lead a team of Data Engineers.
- Demonstrate a drive to learn and master new technologies and techniques.
- Apply strong problem-solving skills with an emphasis on building data-driven or AI-enhanced products.
- Coordinate with ML/AI and engineering teams to understand data requirements.
Experience & Skills:
- 8+ years of strong experience in ETL and ELT of data from various sources into data warehouses.
- 8+ years of experience in Python, Pandas, NumPy, and SciPy.
- 5+ years of experience in GCP.
- 5+ years of experience in BigQuery, PySpark, and Pub/Sub.
- 5+ years of experience working with and creating data architectures.
- Certified in Google Cloud Professional Data Engineer.
- Advanced proficiency in Google Cloud services such as Dataflow, Dataproc, Dataprep, Data Studio, and Cloud Composer.
- Proficient in writing complex Spark (PySpark) User Defined Functions (UDFs), Spark SQL, and HiveQL (a short illustrative UDF sketch follows this list).
- Good understanding of Elasticsearch.
- Experience in assessing and ensuring data quality, data testing, and addressing data quality issues.
- Excellent understanding of Spark architecture and underlying frameworks including storage management.
- Solid background in database design and development, database administration, and software engineering across full life cycles.
- Experience with NoSQL data stores like MongoDB, DocumentDB, and DynamoDB.
- Knowledge of data governance principles and practices, including data lineage, metadata management, and access control mechanisms.
- Experience in implementing and optimizing data security controls, encryption, and compliance measures in GCP environments.
- Ability to troubleshoot complex issues, perform root cause analysis, and implement effective solutions in a timely manner.
- Proficiency in data visualization tools such as Tableau, Looker, or Data Studio to create insightful dashboards and reports for business users.
- Strong communication and interpersonal skills to effectively collaborate with technical and non-technical stakeholders, articulate complex concepts, and drive consensus.
- Experience with agile methodologies and project management tools like Jira or Asana for sprint planning, backlog grooming, and task tracking.
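For illustration only, the sketch below registers a simple PySpark UDF and calls it both from the DataFrame API and from Spark SQL. The function, view, and column names are hypothetical.

```python
# Hypothetical function, view, and column names.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf_demo").getOrCreate()


def mask_email(email):
    # Keep the domain, mask the local part: "jane@acme.com" -> "***@acme.com"
    if email is None or "@" not in email:
        return None
    return "***@" + email.split("@", 1)[1]


mask_email_udf = udf(mask_email, StringType())               # for the DataFrame API
spark.udf.register("mask_email", mask_email, StringType())   # for Spark SQL

df = spark.createDataFrame([("jane@acme.com",), ("not-an-email",)], ["email"])
df.createOrReplaceTempView("users")

df.select("email", mask_email_udf("email").alias("masked")).show()
spark.sql("SELECT email, mask_email(email) AS masked FROM users").show()
```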
Skills: ETL + SQL
· Experience with SQL and data querying languages.
· Knowledge of data governance frameworks and best practices.
· Familiarity with programming/scripting languages (e.g., SparkSQL)
· Strong understanding of data integration techniques and ETL processes.
· Experience with data quality tools and methodologies.
· Strong communication and problem-solving skills
Detailed JD: Data Integration: Manage the seamless integration of various data lakes, ensuring that jobs run as expected; validate the data ingested, track DQ checks, and rerun/reprocess jobs in case of failures after identifying the root causes (RCAs).
Data Quality Assurance: Monitor and validate data quality during and after the migration process, implementing checks and corrective actions as needed.
Documentation: Maintain comprehensive documentation related to data issues encountered during the weekly/monthly processing and operational procedures.
Continuous Improvement: Recommend and implement improvements to data processing, tools, and technologies to enhance efficiency and effectiveness.
Job Title: Data Analyst-Fintech
Job Description:
We are seeking a highly motivated and detail-oriented Data Analyst with 2 to 4 years of work experience to join our team. The ideal candidate will have a strong analytical mindset, excellent problem-solving skills, and a passion for transforming data into actionable insights. In this role, you will play a pivotal role in gathering, analyzing, and interpreting data to support informed decision-making and drive business growth.
Key Responsibilities:
1. Data Collection and Extraction:
§ Gather data from various sources, including databases, spreadsheets, and APIs.
§ Perform data cleansing and validation to ensure data accuracy and integrity.
2. Data Analysis:
§ Analyze large datasets to identify trends, patterns, and anomalies.
§ Conduct analysis and data modeling to generate insights and forecasts (a brief pandas sketch follows this list).
§ Create data visualizations and reports to present findings to stakeholders.
3. Data Interpretation and Insight Generation:
§ Translate data insights into actionable recommendations for business improvements.
§ Collaborate with cross-functional teams to understand data requirements and provide data-driven solutions.
4. Data Quality Assurance:
§ Implement data quality checks and validation processes to ensure data accuracy and consistency.
§ Identify and address data quality issues promptly.
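For illustration only, the sketch below walks through a minimal collect-cleanse-analyze pass in pandas, roughly matching the steps above. The file and column names are hypothetical.

```python
# Hypothetical file and column names.
import pandas as pd

# 1. Collect: read an export from a source system.
txns = pd.read_csv("transactions.csv", parse_dates=["txn_date"])

# 2. Cleanse: drop exact duplicates and rows missing key fields.
txns = txns.drop_duplicates().dropna(subset=["txn_id", "amount"])

# 3. Analyze: monthly transaction volume and value as a simple trend view.
monthly = (
    txns.set_index("txn_date")
        .resample("MS")
        .agg({"txn_id": "count", "amount": "sum"})
        .rename(columns={"txn_id": "txn_count", "amount": "total_amount"})
)

print(monthly.tail())
```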
Qualifications:
1. Bachelor's degree in a relevant field such as Computer Science, Statistics, Mathematics, or a related discipline.
2. Proven work experience as a Data Analyst, with 2 to 4 years of relevant experience.
3. Knowledge of data warehousing concepts and ETL processes is advantageous.
4. Proficiency in data analysis tools and languages (e.g., SQL, Python, R).
5. Experience with data visualization tools (e.g., Tableau, Power BI) is a plus.
6. Strong analytical and problem-solving skills.
7. Excellent communication and presentation skills.
8. Attention to detail and a commitment to data accuracy.
9. Familiarity with machine learning and predictive modeling is a bonus.
If you are a data-driven professional with a passion for uncovering insights from complex datasets and have the qualifications and skills mentioned above, we encourage you to apply for this Data Analyst position. Join our dynamic team and contribute to making data-driven decisions that will shape our company's future.
Fatakpay is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Job Description:
Position: Senior Manager- Data Analytics (Fintech Firm)
Experience: 5-8 Years
Location: Mumbai-Andheri
Employment Type: Full-Time
About Us:
We are a dynamic fintech firm dedicated to revolutionizing the financial services industry through innovative data solutions. We believe in leveraging cutting-edge technology to provide superior financial products and services to our clients. Join our team and be a part of this exciting journey.
Job Overview:
We are looking for a skilled Data Engineer with 3-5 years of experience to join our data team. The ideal candidate will have a strong background in ETL processes, data pipeline creation, and database management. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data systems and pipelines.
Key Responsibilities:
- Design and develop robust and scalable ETL processes to ingest and process large datasets from various sources.
- Build and maintain efficient data pipelines to support real-time and batch data processing.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Optimize database performance and ensure data integrity and security.
- Troubleshoot and resolve data-related issues and provide support for data operations.
- Implement data quality checks and monitor data pipeline performance.
- Document technical solutions and processes for future reference.
Required Skills and Qualifications:
- Bachelor's degree in Engineering, or a related field.
- 3-5 years of experience in data engineering or a related role.
- Strong proficiency in ETL tools and techniques.
- Experience with SQL and relational databases (e.g., MySQL, PostgreSQL).
- Familiarity with big data technologies
- Proficiency in programming languages such as Python, Java, or Scala.
- Knowledge of data warehousing concepts and tools
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of machine learning and data science principles.
- Experience with real-time data processing and streaming platforms (e.g., Kafka).
What We Offer:
- Competitive compensation package (10-15 LPA) based on experience and qualifications.
- Opportunity to work with a talented and innovative team in the fintech industry.
- Professional development and growth opportunities.
How to Apply:
If you are passionate about data engineering and eager to contribute to a forward-thinking fintech firm, we would love to hear from you.
- Bachelor's degree required, or higher education level, or foreign equivalent, preferably in a related area of study.
- At least 5 years of experience in Duck Creek Data Insights as a Technical Architect/Senior Developer.
- Strong technical knowledge of SQL databases and MSBI.
- Should have strong hands-on knowledge of the Duck Creek Insights product, SQL Server/DB-level configuration, T-SQL, XSL/XSLT, MSBI, etc.
- Well versed with Duck Creek Extract Mapper Architecture
- Strong understanding of Data Modelling, Data Warehousing, Data Marts, Business Intelligence with ability to solve business problems
- Strong understanding of ETL and EDW toolsets on the Duck Creek Data Insights
- Strong knowledge on Duck Creek Insight product overall architecture flow, Data hub, Extract mapper etc
- Understanding of data related to business application areas such as policy, billing, and claims business solutions
- Minimum 4 to 7 years of working experience with the Duck Creek Insights product
- Strong technical knowledge of SQL databases and MSBI
- Preferably with experience in the insurance domain
- Preferably with experience in Duck Creek Data Insights
- Experience specific to Duck Creek would be an added advantage
- Strong knowledge of database structure systems and data mining
- Excellent organisational and analytical abilities
- Outstanding problem solver
Job Description: Data Engineer (Fintech Firm)
Position: Data Engineer
Experience: 2-4 Years
Location: Mumbai-Andheri
Employment Type: Full-Time
About Us:
We are a dynamic fintech firm dedicated to revolutionizing the financial services industry through innovative data solutions. We believe in leveraging cutting-edge technology to provide superior financial products and services to our clients. Join our team and be a part of this exciting journey.
Job Overview:
We are looking for a skilled Data Engineer with 3-5 years of experience to join our data team. The ideal candidate will have a strong background in ETL processes, data pipeline creation, and database management. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data systems and pipelines.
Key Responsibilities:
- Design and develop robust and scalable ETL processes to ingest and process large datasets from various sources.
- Build and maintain efficient data pipelines to support real-time and batch data processing.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Optimize database performance and ensure data integrity and security.
- Troubleshoot and resolve data-related issues and provide support for data operations.
- Implement data quality checks and monitor data pipeline performance.
- Document technical solutions and processes for future reference.
Required Skills and Qualifications:
- Bachelor's degree in Engineering, or a related field.
- 3-5 years of experience in data engineering or a related role.
- Strong proficiency in ETL tools and techniques.
- Experience with SQL and relational databases (e.g., MySQL, PostgreSQL).
- Familiarity with big data technologies
- Proficiency in programming languages such as Python, Java, or Scala.
- Knowledge of data warehousing concepts and tools
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Preferred Qualifications:
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Knowledge of machine learning and data science principles.
- Experience with real-time data processing and streaming platforms (e.g., Kafka).
What We Offer:
- Competitive compensation package (12-20 LPA) based on experience and qualifications.
- Opportunity to work with a talented and innovative team in the fintech industry.
- Professional development and growth opportunities.
We are seeking a skilled Qlik Developer with 4-5 years of experience in Qlik development to join our team. The ideal candidate will have expertise in QlikView and Qlik Sense, along with strong communication skills for interacting with business stakeholders. Knowledge of other BI tools such as Power BI and Tableau is a plus.
Must-Have Skills:
QlikView and Qlik Sense Development: 4-5 years of hands-on experience in developing and maintaining QlikView/Qlik Sense applications and dashboards.
Data Visualization: Proficiency in creating interactive reports and dashboards, with a deep understanding of data storytelling.
ETL (Extract, Transform, Load): Experience in data extraction from multiple data sources (databases, flat files, APIs) and transforming it into actionable insights.
Qlik Scripting: Knowledge of Qlik scripting, set analysis, and expressions to create efficient solutions.
Data Modeling: Expertise in designing and implementing data models for reporting and analytics.
Stakeholder Communication: Strong communication skills to collaborate with non-technical business users and translate their requirements into effective BI solutions.
Troubleshooting and Support: Ability to identify, troubleshoot, and resolve issues related to Qlik applications.
Nice-to-Have Skills:
Other BI Tools: Experience in using other business intelligence tools such as Power BI and Tableau.
SQL & Data Querying: Familiarity with SQL for data querying and database management.
Cloud Platforms: Experience with cloud services like Azure, AWS, or Google Cloud in relation to BI and data solutions.
Programming Knowledge: Exposure to programming languages like Python or R.
Agile Methodologies: Understanding of Agile frameworks for project delivery.

Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure the optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
- SFDC (data modelling) experience would be given preference.
- Good understanding of object-oriented concepts and hands-on experience with Scala, with excellent programming logic and technique.
- Good grasp of functional programming and OOP concepts in Scala.
- Good experience in SQL – should be able to write complex queries.
- Managing the team of Associates and Senior Associates and ensuring the utilization is maintained across the project.
- Able to mentor new members for onboarding to the project.
- Understand the client requirement and able to design, develop from scratch and deliver.
- AWS cloud experience would be preferable.
- Design, build and operationalize large scale enterprise data solutions and applications using one or more of AWS data and analytics services - DynamoDB, RedShift, Kinesis, Lambda, S3, etc. (preferred)
- Hands on experience utilizing AWS Management Tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments (preferred)
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS (preferred)
- Leading the client calls to flag off any delays, blockers, escalations and collate all the requirements.
- Managing project timing, client expectations and meeting deadlines.
- Should have played project and team management roles.
- Facilitate meetings within the team on regular basis.
- Understand business requirement and analyze different approaches and plan deliverables and milestones for the project.
- Optimization, maintenance, and support of pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn.
Responsibilities include:
- Develop and maintain data validation logic in our proprietary Control Framework tool
- Actively participate in business requirement elaboration and functional design sessions to develop an understanding of our Operational teams’ analytical needs, key data flows and sources
- Assist Operational teams in the buildout of Checklists and event monitoring workflows within our Enterprise Control Framework platform
- Build effective working relationships with Operational users, Reporting and IT development teams and business partners across the organization
- Conduct interviews, generate user stories, develop scenarios and workflow analyses
- Contribute to the definition of reporting solutions that empower Operational teams to make immediate decisions as to the best course of action
- Perform some business user acceptance testing
- Provide production support and troubleshooting for existing operational dashboards
- Conduct regular demos and training of new features for the stakeholder community
Qualifications
- Bachelor’s degree or equivalent in Business, Accounting, Finance, MIS, Information Technology or related field of study
- Minimum 5 years’ of SQL required
- Experience querying data on cloud platforms (AWS/ Azure/ Snowflake) required
- Exceptional problem solving and analytical skills, attention to detail and organization
- Able to independently troubleshoot and gather supporting evidence
- Prior experience developing within a BI reporting tool (e.g. Spotfire, Tableau, Looker, Information Builders) a plus
- Database Management and ETL development experience a plus
- Self-motivated, self-assured, and self-managed
- Able to multi-task to meet time-driven goals
- Asset management experience, including investment operations, a plus
Job Description:
We are currently seeking a talented and experienced SAP SF Data Migration Specialist to join our team and drive the successful migration from SAP ECC to SAP S/4.
As the SAP SF Data Migration Specialist, you will play a crucial role in overseeing the design, development, and implementation of data solutions within our SAP SF environment. You will collaborate closely with cross-functional teams to ensure data integrity, accuracy, and usability to support business processes and decision-making.
About the Company:
We are a dynamic and innovative company committed to delivering exceptional solutions that empower our clients to succeed. With our headquarters in the UK and a global footprint across the US, Noida, and Pune in India, we bring a decade of expertise to every endeavour, driving real results. We take a holistic approach to project delivery, providing end-to-end services that encompass everything from initial discovery and design to implementation, change management, and ongoing support. Our goal is to help clients leverage the full potential of the Salesforce platform to achieve their business objectives.
What Makes VE3 The Best For You? We think of your family as our family, no matter the shape or size. We offer maternity leave, PF fund contributions, and a 5-day working week, along with a generous paid time off program that helps you balance your work and personal life.
Requirements
Responsibilities:
- Lead the design and implementation of data migration strategies and solutions within SAP SF environments.
- Develop and maintain data migration plans, ensuring alignment with project timelines and objectives.
- Collaborate with business stakeholders to gather and analyse data requirements, ensuring alignment with business needs and objectives.
- Design and implement data models, schemas, and architectures to support SAP data structures and functionalities.
- Lead data profiling and analysis activities to identify data quality issues, gaps, and opportunities for improvement.
- Define data transformation rules and processes to ensure data consistency, integrity, and compliance with business rules and regulations.
- Manage data cleansing, enrichment, and standardization efforts to improve data quality and usability.
- Coordinate with technical teams to implement data migration scripts, ETL processes, and data loading mechanisms.
- Develop and maintain data governance policies, standards, and procedures to ensure data integrity, security, and privacy.
- Lead data testing and validation activities to ensure accuracy and completeness of migrated data.
- Provide guidance and support to project teams, including training, mentoring, and knowledge sharing on SAP data best practices and methodologies.
- Stay current with SAP data management trends, technologies, and best practices, and recommend innovative solutions to enhance data capabilities and performance.
Requirements:
- Bachelor’s degree in computer science, Information Systems, or related field; master’s degree preferred.
- 10+ years of experience in SAP and Non-SAP data management, with a focus on data migration, data modelling, and data governance.
- Have demonstrable experience as an SAP Data Consultant, ideally working across SAP SuccessFactors and non-SAP systems
- Highly knowledgeable and experienced in managing HR data migration projects in SAP SuccessFactors environments
- Demonstrate knowledge of how data aspects need to be considered within overall SAP solution design
- Manage the workstream activities and plan, including stakeholder management, engagement with the business and the production of governance documentation.
- Proven track record of leading successful SAP data migration projects from conception to completion.
- Excellent analytical, problem-solving, and communication skills, with the ability to collaborate effectively with cross-functional teams.
- Experience with SAP Activate methodologies preferred.
- SAP certifications in data management or related areas are a plus.
- Ability to work independently and thrive in a fast-paced, dynamic environment.
- Lead the data migration workstream, with a direct team of circa 5 resources in addition to other third-party and client resources.
- Work flexibly and remotely. Occasional UK travel will be required.
Benefits
- Competitive salary and comprehensive benefits package.
- Opportunity to work in a dynamic and challenging environment on critical migration projects.
- Professional growth opportunities in a supportive and forward-thinking organization.
- Engagement with cutting-edge SAP technologies and methodologies in data migration.
Job Summary:
We are looking for an experienced ETL Tester with 5 to 7 years of experience and expertise in the banking domain. The candidate will be responsible for testing ETL processes, ensuring data quality, and validating data flows in large-scale projects.
Key Responsibilities:
Design and execute ETL test cases, ensuring data integrity and accuracy.
Perform data validation using complex SQL queries (a small reconciliation sketch follows this list).
Collaborate with business analysts to define testing requirements.
Track defects and work with developers to resolve issues.
Conduct performance testing for ETL processes.
Banking Domain Knowledge: Strong understanding of banking processes such as payments, loans, credit, accounts, and regulatory reporting.
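For illustration only, the sketch below shows the kind of lightweight, SQL-driven reconciliation check an ETL tester might automate in Python. The connection string and table names are hypothetical, and the identifiers are trusted internal values rather than user input.

```python
# Hypothetical connection string and table names.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://qa_user:***@dwh-host:5432/edw")


def row_count(table: str) -> int:
    return int(pd.read_sql(f"SELECT COUNT(*) AS cnt FROM {table}", engine)["cnt"].iloc[0])


def null_count(table: str, column: str) -> int:
    sql = f"SELECT COUNT(*) AS cnt FROM {table} WHERE {column} IS NULL"
    return int(pd.read_sql(sql, engine)["cnt"].iloc[0])


checks = {
    "row counts match between staging and target": row_count("stg_payments") == row_count("dw_payments"),
    "no null payment ids in target": null_count("dw_payments", "payment_id") == 0,
}

for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
```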
Required Skills:
5-7 years of ETL testing experience.
Strong SQL skills and experience with ETL tools (Informatica, SSIS, etc.).
Knowledge of banking domain processes.
Experience with test management tools (JIRA, HP ALM).
Familiarity with Agile methodologies.
Location – Hyderabad


We are seeking a Data Engineer (Snowflake, BigQuery, Redshift) to join our team. In this role, you will be responsible for developing and maintaining fault-tolerant pipelines spanning multiple database systems.
Responsibilities:
- Collaborate with engineering teams to create REST API-based pipelines for large-scale MarTech systems, optimizing for performance and reliability.
- Develop comprehensive data quality testing procedures to ensure the integrity and accuracy of data across all pipelines.
- Build scalable dbt models and configuration files, leveraging best practices for efficient data transformation and analysis.
- Partner with lead data engineers in designing scalable data models.
- Conduct thorough debugging and root cause analysis for complex data pipeline issues, implementing effective solutions and optimizations.
- Follow and adhere to group's standards such as SLAs, code styles, and deployment processes.
- Anticipate breaking changes and implement backwards-compatibility strategies for API schema changes.
- Assist the team in monitoring pipeline health via observability tools and metrics.
- Participate in refactoring efforts as platform application needs evolve over time.
Requirements:
- Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field.
- 3+ years of professional experience with a cloud database such as Snowflake, BigQuery, or Redshift.
- 1+ years of professional experience with dbt (Cloud or Core).
- Exposure to various data processing technologies such as OLAP and OLTP and their applications in real-world scenarios.
- Exposure to working cross-functionally with other teams such as Product, Customer Success, and Platform Engineering.
- Familiarity with orchestration tools such as Dagster/Airflow.
- Familiarity with ETL/ELT tools such as dltHub/Meltano/Airbyte/Fivetran and DBT.
- High intermediate to advanced SQL skills (comfort with CTEs, window functions).
- Proficiency with Python and related libraries (e.g., pandas, sqlalchemy, psycopg2) for data manipulation, analysis, and automation (a small sketch follows this list).
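For illustration only, the sketch below runs a CTE/window-function query through SQLAlchemy and pandas, the kind of Python-plus-SQL tooling listed above. The connection string, table, and columns are hypothetical.

```python
# Hypothetical connection string, table, and columns.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:***@localhost:5432/analytics")

QUERY = """
WITH ranked_orders AS (
    SELECT
        customer_id,
        order_id,
        order_total,
        ROW_NUMBER() OVER (
            PARTITION BY customer_id ORDER BY created_at DESC
        ) AS rn
    FROM orders
)
SELECT customer_id, order_id, order_total
FROM ranked_orders
WHERE rn = 1  -- latest order per customer
"""

latest_orders = pd.read_sql(QUERY, engine)
print(latest_orders.head())
```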
Benefits:
- Work Location: Remote
- 5 days working
You can apply directly through the link: https://zrec.in/e9578?source=CareerSite
Explore our Career Page for more such jobs: careers.infraveo.com

- Responsible for designing, storing, processing, and maintaining large-scale data and related infrastructure.
- Can drive multiple projects from both an operational and a technical standpoint.
- Ideate and build PoVs or PoCs for new products that can help drive more business.
- Responsible for defining, designing, and implementing data engineering best practices, strategies, and solutions.
- Is an Architect who can guide the customers, team, and overall organization on tools, technologies, and best practices around data engineering.
- Lead architecture discussions, align with business needs, security, and best practices.
- Has strong conceptual understanding of Data Warehousing and ETL, Data Governance and Security, Cloud Computing, and Batch & Real Time data processing
- Has strong execution knowledge of Data Modeling, Databases in general (SQL and NoSQL), software development lifecycle and practices, unit testing, functional programming, etc.
- Understanding of Medallion architecture pattern
- Has worked on at least one cloud platform.
- Has worked as a data architect and executed multiple end-to-end data engineering projects.
- Has extensive knowledge of different data architecture designs and data modelling concepts.
- Manages conversation with the client stakeholders to understand the requirement and translate it into technical outcomes.
Required Tech Stack
- Strong proficiency in SQL
- Experience working on any of the three major cloud platforms i.e., AWS/Azure/GCP
- Working knowledge of an ETL and/or orchestration tools like IICS, Talend, Matillion, Airflow, Azure Data Factory, AWS Glue, GCP Composer, etc.
- Working knowledge of one or more OLTP databases (Postgres, MySQL, SQL Server, etc.)
- Working knowledge of one or more data warehouses like Snowflake, Redshift, Azure Synapse, Hive, BigQuery, etc.
- Proficient in at least one programming language used in data engineering, such as Python (or Scala/Rust/Java)
- Has strong execution knowledge of Data Modeling (star schema, snowflake schema, fact vs dimension tables)
- Proficient in Spark and related applications like Databricks, GCP DataProc, AWS Glue, EMR, etc.
- Has worked on Kafka and real-time streaming (a minimal Structured Streaming sketch follows this list).
- Has strong execution knowledge of data architecture design patterns (lambda vs kappa architecture, data harmonization, customer data platforms, etc.)
- Has worked on code and SQL query optimization.
- Strong knowledge of version control systems like Git to manage source code repositories and designing CI/CD pipelines for continuous delivery.
- Has worked on data and networking security (RBAC, secret management, key vaults, vnets, subnets, certificates)
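For illustration only, the sketch below shows a minimal Spark Structured Streaming job reading from Kafka and landing data as Parquet. The broker, topic, and path names are hypothetical, and it assumes the Spark-Kafka connector package is available on the cluster.

```python
# Hypothetical broker, topic, and paths; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the value to a string for downstream parsing.
parsed = events.select(
    F.col("timestamp"),
    F.col("value").cast("string").alias("payload"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/landing/clickstream/")
    .option("checkpointLocation", "/data/checkpoints/clickstream/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```

The checkpoint location is what lets the stream recover offsets after a restart, which is the usual starting point for the fault-tolerance and lambda/kappa discussions mentioned above.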

TVARIT GmbH develops and delivers artificial intelligence (AI) solutions for the manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in predictive maintenance, OEE improvement, and predictive quality. We have renowned reference customers, competent technology, a strong research team from renowned universities, and the award of a renowned AI prize (e.g., EU Horizon 2020), which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, as well as NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.