Datawarehousing Jobs in Chennai


Apply to 6+ Datawarehousing Jobs in Chennai on CutShort.io. Explore the latest Datawarehousing Job opportunities across top companies like Google, Amazon & Adobe.

Adesso

Agency job
via HashRoot by Maheswari M
Kochi (Cochin), Chennai, Pune
3 - 6 yrs
₹4L - ₹24L / yr
Data engineering
Amazon Web Services (AWS)
Windows Azure
Snowflake
Data Transformation Tool (DBT)
+3 more

We are seeking a skilled Cloud Data Engineer with experience on cloud data platforms such as AWS or Azure, and especially with Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.

Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool), as sketched below.

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data pipelines: You design scalable, high-performance data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.
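
Illustrative only: a minimal Python sketch of kicking off the dbt-based ELT step described above. It assumes the dbt CLI is installed and a Snowflake profile is already configured in profiles.yml; the model selector and target name are hypothetical.

```python
# Minimal sketch: trigger a dbt transformation run from Python.
# Assumes the dbt CLI is on PATH and a Snowflake profile is configured
# in profiles.yml; the selector and target below are placeholders.
import subprocess

def run_dbt_models(select: str = "staging+", target: str = "prod") -> None:
    """Build the selected dbt models and everything downstream of them."""
    result = subprocess.run(
        ["dbt", "run", "--select", select, "--target", target],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    result.check_returncode()  # raise if any model failed to build

if __name__ == "__main__":
    run_dbt_models()
```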

Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have solid, well-founded knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.

VyTCDC
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad
5 - 10 yrs
₹5L - ₹10L / yr
Google Cloud Platform (GCP)
Teradata
ETL
Datawarehousing

Overview:

We are seeking a talented and experienced GCP Data Engineer with strong expertise in Teradata, ETL, and Data Warehousing to join our team. As a key member of our Data Engineering team, you will play a critical role in developing and maintaining data pipelines, optimizing ETL processes, and managing large-scale data warehouses on the Google Cloud Platform (GCP).

Responsibilities:

  • Design, implement, and maintain scalable ETL pipelines on GCP (Google Cloud Platform).
  • Develop and manage data warehouse solutions using Teradata and cloud-based technologies (BigQuery, Cloud Storage, etc.).
  • Build and optimize high-performance data pipelines for real-time and batch data processing.
  • Integrate, transform, and load large datasets into GCP-based data lakes and data warehouses (see the load sketch after this list).
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
  • Write efficient, clean, and reusable code for ETL processes and data workflows.
  • Ensure data quality, consistency, and integrity across all pipelines and storage solutions.
  • Implement data governance practices and ensure security and compliance of data processes.
  • Monitor and troubleshoot data pipeline performance and resolve issues proactively.
  • Participate in the design and implementation of scalable data architectures using GCP services like BigQuery, Cloud Dataflow, and Cloud Pub/Sub.
  • Optimize and automate data workflows for continuous improvement.
  • Maintain up-to-date documentation of data pipeline architectures and processes.
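
Illustrative only: a minimal sketch of the batch "load" step of such a pipeline, using the google-cloud-bigquery client to load CSV exports from Cloud Storage into a BigQuery table. The project, dataset, table, and bucket names are placeholders.

```python
# Minimal sketch: batch-load CSV files from Cloud Storage into BigQuery.
# Assumes google-cloud-bigquery is installed and credentials are configured;
# project/dataset/table and bucket names are placeholders.
from google.cloud import bigquery

def load_csv_to_bigquery() -> None:
    client = bigquery.Client()
    table_id = "my-project.analytics_dw.orders"          # placeholder table
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,                              # skip the header row
        autodetect=True,                                  # infer the schema
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    uri = "gs://my-bucket/exports/orders_*.csv"           # placeholder path
    load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
    load_job.result()                                     # wait for completion
    table = client.get_table(table_id)
    print(f"Loaded {table.num_rows} rows into {table_id}")

if __name__ == "__main__":
    load_csv_to_bigquery()
```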

Requirements:

Technical Skills:

  • Google Cloud Platform (GCP): Extensive experience with BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Composer.
  • ETL Tools: Expertise in building ETL pipelines using tools such as Apache NiFi, Apache Beam, or custom Python-based scripts (a minimal Beam sketch follows this list).
  • Data Warehousing: Strong experience working with Teradata for data warehousing, including data modeling, schema design, and performance tuning.
  • SQL: Advanced proficiency in SQL and relational databases, particularly in the context of Teradata and GCP environments.
  • Programming: Proficient in Python, Java, or Scala for building and automating data processes.
  • Data Architecture: Knowledge of best practices in designing scalable data architectures for both structured and unstructured data.
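
Illustrative only: a minimal Apache Beam (Python SDK) sketch of the kind of ETL pipeline referenced above, shown with the default local runner; on GCP the same pipeline would typically be submitted to Cloud Dataflow. The paths and parsing logic are placeholders.

```python
# Minimal sketch of a batch ETL step with the Apache Beam Python SDK.
# Input/output paths and the row-validation rule are placeholders.
import apache_beam as beam

def run() -> None:
    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Read raw lines" >> beam.io.ReadFromText("gs://my-bucket/raw/orders.csv")
            | "Parse CSV" >> beam.Map(lambda line: line.split(","))
            | "Keep valid rows" >> beam.Filter(lambda fields: len(fields) == 5)
            | "Format output" >> beam.Map(lambda fields: ",".join(fields))
            | "Write cleaned" >> beam.io.WriteToText("gs://my-bucket/clean/orders")
        )

if __name__ == "__main__":
    run()
```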

Experience:

  • Proven experience as a Data Engineer, with a focus on building and managing ETL pipelines and data warehouse solutions.
  • Hands-on experience in data modeling and working with complex, high-volume data in a cloud-based environment.
  • Experience with data migration from on-premises to cloud environments (Teradata to GCP).
  • Familiarity with Data Lake concepts and technologies.
  • Experience with version control systems like Git and working in Agile environments.
  • Knowledge of CI/CD and automation processes in data engineering.

Soft Skills:

  • Strong problem-solving and troubleshooting skills.
  • Excellent communication skills, both verbal and written, for interacting with technical and non-technical teams.
  • Ability to work collaboratively in a fast-paced, cross-functional team environment.
  • Strong attention to detail and ability to prioritize tasks.

Preferred Qualifications:

  • Experience with other GCP tools such as Dataproc, Bigtable, Cloud Functions.
  • Knowledge of Terraform or similar infrastructure-as-code tools for managing cloud resources.
  • Familiarity with data governance frameworks and data privacy regulations.
  • Certifications in Google Cloud or Teradata are a plus.

Benefits:

  • Competitive salary and performance-based bonuses.
  • Health, dental, and vision insurance.
  • 401(k) with company matching.
  • Paid time off and flexible work schedules.
  • Opportunities for professional growth and development.


Clients located in Bangalore, Chennai & Pune

Agency job
Bengaluru (Bangalore), Pune, Chennai
3 - 8 yrs
₹8L - ₹16L / yr
ETL
Python
Shell Scripting
Data modeling
Datawarehousing

Role: Ab Initio Developer

Experience: 2.5 to 8 years (2.5 years is the mandatory minimum)

Skills: Ab Initio Development

Location: Chennai/Bangalore/Pune

Only immediate joiners or candidates with a notice period of up to 15 days.

Candidates must be available for an in-person interview.

This is a long-term contract role with IBM; Arnold is the payrolling company.

JOB DESCRIPTION:

We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.


Key Responsibilities:

·      Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.

·      Analyze complex data requirements and translate them into effective Ab Initio designs.

·      Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes (a tool-agnostic sketch of this pattern follows this list).

·      Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.

·      Optimize performance and scalability of Ab Initio jobs.

·      Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.

·      Stay up-to-date with the latest Ab Initio technologies and industry best practices.
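
Since Ab Initio is a proprietary graphical ETL tool, the sketch below is a tool-agnostic Python illustration of the extract-transform-load pattern the role describes; the file paths, table names, and cleansing rules are placeholders.

```python
# Minimal, tool-agnostic sketch of the extract-transform-load pattern.
# File paths, column names, and cleansing rules are placeholders.
import csv
import sqlite3

def extract(path: str) -> list:
    """Extract raw records from a source CSV file."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list) -> list:
    """Apply simple cleansing rules before loading."""
    cleaned = []
    for row in rows:
        if not row.get("customer_id"):           # drop records missing a key
            continue
        cleaned.append((row["customer_id"].strip(), row["country"].upper()))
    return cleaned

def load(rows: list, db_path: str = "warehouse.db") -> None:
    """Load cleansed records into a target table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS dim_customer (customer_id TEXT, country TEXT)"
        )
        conn.executemany("INSERT INTO dim_customer VALUES (?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract("customers.csv")))
```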

Required Skills and Experience:

·      2.5 to 8 years of hands-on experience in Ab Initio development.

·      Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.

·      Proficiency in Ab Initio's graphical interface and command-line tools.

·      Experience in data modeling, data warehousing, and ETL concepts.

·      Strong SQL skills and experience with relational databases.

·      Excellent problem-solving and analytical skills.

·      Ability to work independently and as part of a team.

·      Strong communication and documentation skills.

Preferred Skills:

·      Experience with cloud-based data integration platforms.

·      Knowledge of data quality and data governance concepts.

·      Experience with scripting languages (e.g., Python, Shell scripting).

·      Certification in Ab Initio or related technologies.

top MNC

Agency job
via Vy Systems by Thirega Thanasekaran
Hyderabad, Chennai
10 - 15 yrs
₹8L - ₹20L / yr
Data engineering
ETL
Datawarehousing
CI/CD
Jenkins
+3 more

Key Responsibilities:

  • Lead Data Engineering Team: Provide leadership and mentorship to junior data engineers and ensure best practices in data architecture and pipeline design.
  • Data Pipeline Development: Design, implement, and maintain end-to-end ETL (Extract, Transform, Load) processes to support analytics, reporting, and data science activities.
  • Cloud Architecture (GCP): Architect and optimize data infrastructure on Google Cloud Platform (GCP), ensuring scalability, reliability, and performance of data systems.
  • CI/CD Pipelines: Implement and maintain CI/CD pipelines using Jenkins and other tools to ensure the seamless deployment and automation of data workflows.
  • Data Warehousing: Design and implement data warehousing solutions, ensuring optimal performance and efficient data storage using technologies like Teradata, Oracle, and SQL Server.
  • Workflow Orchestration: Use Apache Airflow to orchestrate complex data workflows and to schedule data pipeline jobs (see the DAG sketch after this list).
  • Automation with Terraform: Implement Infrastructure as Code (IaC) using Terraform to provision and manage cloud resources.
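
Illustrative only: a minimal Apache Airflow DAG sketch for the orchestration responsibility above; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal sketch of an Airflow DAG that chains two pipeline steps.
# DAG id, schedule, and task bodies are placeholders (Airflow 2.4+ "schedule").
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    print("Extracting orders from the source system...")

def load_to_warehouse():
    print("Loading transformed orders into the warehouse...")

with DAG(
    dag_id="orders_daily_pipeline",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load  # run the load only after extraction succeeds
```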

Share CV to: Thirega@ vysystems dot com (WhatsApp: 91Five0033Five2Three)

DFCS Technologies

Agency job
via DFCS Technologies by SheikDawood Ali
Remote, Chennai, Anywhere India
1 - 5 yrs
₹8L - ₹12L / yr
Software Testing (QA)
ETL QA
Test Automation (QA)
Datawarehousing
SQL
1+ year of experience in Database Testing, ETL Testing, BI Testing, and SQL
• Key Skillset:
• Advanced SQL skills and good communication skills are mandatory.
• Develop and execute detailed data warehouse functional, integration, and regression test cases, along with documentation.
• Prioritize testing tasks based on project goals and risks, and ensure testing milestones, activities, and tasks are completed as scheduled.
• Develop and design data warehouse test cases, scenarios, and scripts to ensure quality data warehouse/BI applications (a minimal reconciliation-test sketch follows this list).
• Report the status of test planning, defects, and execution activities, including regular status updates to the project team.
• Hands-on experience with any SQL tool.
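
Illustrative only: a minimal sketch of a reconciliation-style data warehouse test case, comparing source and target row counts through Python's DB-API; the database files and table names are placeholders, and sqlite3 is used only to keep the sketch self-contained.

```python
# Minimal sketch of a data warehouse reconciliation test: compare the row
# count of a source table against the loaded target table.
# Database paths and table names are placeholders.
import sqlite3

def row_count(conn: sqlite3.Connection, table: str) -> int:
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def test_orders_row_count_matches() -> None:
    source = sqlite3.connect("source.db")       # placeholder source database
    target = sqlite3.connect("warehouse.db")    # placeholder warehouse database
    assert row_count(source, "orders") == row_count(target, "fact_orders"), \
        "Row count mismatch between source and warehouse"

if __name__ == "__main__":
    test_orders_row_count_matches()
    print("Row count reconciliation passed.")
```
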
netmedscom
Posted by Vijay Hemnath
Chennai
2 - 5 yrs
₹6L - ₹25L / yr
Big Data
Hadoop
Apache Hive
Scala
Spark
+12 more

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role will closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Roles and Responsibilities:

  • Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
  • Develop programs in Scala and Python as part of data cleaning and processing.
  • Assemble large, complex data sets that meet functional and non-functional business requirements, and foster data-driven decision making across the organization.
  • Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
  • Implement processes and systems to validate data, monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
  • Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Provide high operational excellence guaranteeing high availability and platform stability.
  • Closely collaborate with the Data Science team and assist them in building and deploying machine learning and deep learning models on big data analytics platforms.

Skills:

  • Experience with Big Data pipeline, Big Data analytics, Data warehousing.
  • Experience with SQL/No-SQL, schema design and dimensional data modeling.
  • Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack (HBase, Hadoop, Hive, MapReduce).
  • Experience in designing systems that process structured as well as unstructured data at large scale.
  • Experience in AWS/Spark/Java/Scala/Python development.
  • Strong skills in PySpark (Python and Spark): the ability to create, manage, and manipulate Spark DataFrames (see the sketch after this list), and expertise in Spark query tuning and performance optimization.
  • Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
  • Prior exposure to streaming data sources such as Kafka.
  • Knowledge of shell scripting and Python scripting.
  • High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
  • Experience with NoSQL databases such as Cassandra / MongoDB.
  • Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
  • Experience building and deploying applications on on-premise and cloud-based infrastructure.
  • A good understanding of the machine learning landscape and concepts.
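
Illustrative only: a minimal PySpark sketch of the DataFrame creation and manipulation called out above; the input path, column names, and aggregation are placeholders.

```python
# Minimal PySpark sketch: read a dataset, clean it, and aggregate it.
# Input/output paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_aggregation").getOrCreate()

# Read raw events landed in a data lake as Parquet (placeholder path).
events = spark.read.parquet("s3a://data-lake/raw/events/")

daily_counts = (
    events
    .filter(F.col("event_type").isNotNull())        # basic cleaning
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
)

daily_counts.write.mode("overwrite").parquet("s3a://data-lake/curated/daily_event_counts/")
spark.stop()
```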

 

Qualifications and Experience:

Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.

Certifications:

Good to have at least one of the Certifications listed here:

    AZ 900 - Azure Fundamentals

    DP 200, DP 201, DP 203, AZ 204 - Data Engineering

    AZ 400 - DevOps Certification
