3+ Teradata Jobs in Chennai | Teradata Job openings in Chennai

Overview:
We are seeking a talented and experienced GCP Data Engineer with strong expertise in Teradata, ETL, and Data Warehousing to join our team. As a key member of our Data Engineering team, you will play a critical role in developing and maintaining data pipelines, optimizing ETL processes, and managing large-scale data warehouses on the Google Cloud Platform (GCP).
Responsibilities:
- Design, implement, and maintain scalable ETL pipelines on GCP.
- Develop and manage data warehouse solutions using Teradata and cloud-based technologies (BigQuery, Cloud Storage, etc.).
- Build and optimize high-performance data pipelines for real-time and batch data processing.
- Integrate, transform, and load large datasets into GCP-based data lakes and data warehouses.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Write efficient, clean, and reusable code for ETL processes and data workflows.
- Ensure data quality, consistency, and integrity across all pipelines and storage solutions.
- Implement data governance practices and ensure security and compliance of data processes.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Participate in the design and implementation of scalable data architectures using GCP services like BigQuery, Cloud Dataflow, and Cloud Pub/Sub (a minimal Dataflow/Apache Beam pipeline sketch follows this list).
- Optimize and automate data workflows for continuous improvement.
- Maintain up-to-date documentation of data pipeline architectures and processes.
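As a rough illustration of the pipeline work described above (not part of the original posting), here is a minimal Apache Beam (Python) sketch that reads CSV files from Cloud Storage and appends rows to a BigQuery table via Dataflow. The project, bucket, table, and schema names are placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_order(line):
    # Placeholder CSV layout: order_id,amount
    order_id, amount = line.split(",")
    return {"order_id": order_id, "amount": float(amount)}

options = PipelineOptions(
    runner="DataflowRunner",        # use "DirectRunner" for local testing
    project="my-gcp-project",       # placeholder project ID
    region="asia-south1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromGCS" >> beam.io.ReadFromText("gs://my-bucket/orders/*.csv")
        | "ParseCSV" >> beam.Map(parse_order)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-gcp-project:analytics.orders",
            schema="order_id:STRING,amount:FLOAT",
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```

The same pipeline runs in batch on Dataflow or locally with the DirectRunner, which is one reason Beam is a common choice for GCP ETL work.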
Requirements:
Technical Skills:
- Google Cloud Platform (GCP): Extensive experience with BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Composer.
- ETL Tools: Expertise in building ETL pipelines using tools such as Apache NiFi, Apache Beam, or custom Python-based scripts.
- Data Warehousing: Strong experience working with Teradata for data warehousing, including data modeling, schema design, and performance tuning.
- SQL: Advanced proficiency in SQL and relational databases, particularly in the context of Teradata and GCP environments (a small BigQuery query example follows this list).
- Programming: Proficient in Python, Java, or Scala for building and automating data processes.
- Data Architecture: Knowledge of best practices in designing scalable data architectures for both structured and unstructured data.
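To make the SQL expectation concrete, the sketch below runs an aggregation against BigQuery using the google-cloud-bigquery Python client; the project, table, and column names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project ID

# Hypothetical aggregation over a migrated Teradata fact table.
sql = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `my-gcp-project.analytics.orders`
    WHERE order_date >= '2024-01-01'
    GROUP BY customer_id
    ORDER BY total_amount DESC
    LIMIT 10
"""

for row in client.query(sql).result():
    print(row.customer_id, row.total_amount)
```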
Experience:
- Proven experience as a Data Engineer, with a focus on building and managing ETL pipelines and data warehouse solutions.
- Hands-on experience in data modeling and working with complex, high-volume data in a cloud-based environment.
- Experience with data migration from on-premises to cloud environments (Teradata to GCP); a simplified migration sketch follows this list.
- Familiarity with Data Lake concepts and technologies.
- Experience with version control systems like Git and working in Agile environments.
- Knowledge of CI/CD and automation processes in data engineering.
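As a simplified picture of one Teradata-to-GCP migration step, the sketch below pulls a table through the teradatasql driver and loads it into BigQuery via pandas. Host, credentials, and table names are placeholders; a real migration would also cover incremental extracts, type mapping, and validation.

```python
import pandas as pd
import teradatasql
from google.cloud import bigquery

# Placeholder Teradata connection details.
with teradatasql.connect(host="td-host", user="etl_user", password="***") as con:
    df = pd.read_sql("SELECT * FROM sales_db.daily_orders", con)

client = bigquery.Client(project="my-gcp-project")  # placeholder project ID
job = client.load_table_from_dataframe(df, "analytics.daily_orders")
job.result()  # block until the load job finishes
print(f"Loaded {job.output_rows} rows into BigQuery")
```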
Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills, both verbal and written, for interacting with technical and non-technical teams.
- Ability to work collaboratively in a fast-paced, cross-functional team environment.
- Strong attention to detail and ability to prioritize tasks.
Preferred Qualifications:
- Experience with other GCP tools such as Dataproc, Bigtable, and Cloud Functions.
- Knowledge of Terraform or similar infrastructure-as-code tools for managing cloud resources.
- Familiarity with data governance frameworks and data privacy regulations.
- Certifications in Google Cloud or Teradata are a plus.
Benefits:
- Competitive salary and performance-based bonuses.
- Health, dental, and vision insurance.
- 401(k) with company matching.
- Paid time off and flexible work schedules.
- Opportunities for professional growth and development.
Responsibilities:
• Designing the Hive/HCatalog data model, including table definitions, file formats, and compression techniques for structured and semi-structured data processing
• Implementing Spark-based ETL processing frameworks
• Implementing big data pipelines for data ingestion, storage, processing, and consumption
• Modifying the Informatica/Teradata and Unix-based data pipelines
• Enhancing the Talend, Hive/Spark, and Unix-based data pipelines
• Developing and deploying Scala/Python-based Spark jobs for ETL processing (a minimal PySpark sketch follows this list)
• Applying strong SQL and data warehousing (DWH) concepts throughout
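As a minimal sketch of the Spark ETL work mentioned above, the PySpark job below reads a raw Hive table, applies basic cleansing, and writes a partitioned, Snappy-compressed ORC table. Table and column names are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_etl")
    .enableHiveSupport()   # required to read and write Hive tables
    .getOrCreate()
)

# Ingest: read a raw staging table (placeholder name).
raw = spark.table("staging.orders_raw")

# Transform: de-duplicate, filter bad rows, derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a partitioned, compressed table for downstream consumption.
(
    clean.write
         .mode("overwrite")
         .format("orc")
         .option("compression", "snappy")
         .partitionBy("order_date")
         .saveAsTable("dwh.orders_curated")
)

spark.stop()
```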
Preferred Background:
• Function as an integrator between business needs and technology, helping to create solutions that meet clients' business requirements
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understand the business's EDW (enterprise data warehouse) systems and create high-level design and low-level implementation documents
• Understand the business's big data lake systems and create high-level design and low-level implementation documents
• Design big data pipelines for data ingestion, storage, processing, and consumption
We are looking for a Teradata developer for one of our premium clients. Kindly contact me if you are interested.