Data engineering Jobs in Hyderabad


Apply to 11+ Data engineering Jobs in Hyderabad on CutShort.io. Explore the latest Data engineering Job opportunities across top companies like Google, Amazon & Adobe.

Premier global software products and services firm

Agency job
via Recruiting Bond by Pavan Kumar
Hyderabad, Ahmedabad, Indore
5 - 10 yrs
₹10L - ₹20L / yr
Data engineering
Data modeling
Database Design
Data Warehouse (DWH)
Data Warehousing

Job Summary: 

As a Data Engineering Lead, your role will involve designing, developing, and implementing interactive dashboards and reports using data engineering tools. You will work closely with stakeholders to gather requirements and translate them into effective data visualizations that provide valuable insights. Additionally, you will be responsible for extracting, transforming, and loading data from multiple sources into Power BI, ensuring its accuracy and integrity. Your expertise in Power BI and data analytics will contribute to informed decision-making and support the organization in driving data-centric strategies and initiatives.


We are looking for you!

As an ideal candidate for the Data Engineering Lead position, you embody the qualities of a team player with a relentless get-it-done attitude. Your intellectual curiosity and customer focus drive you to continuously seek new ways to add value to your job accomplishments.


You thrive under pressure, maintaining a positive attitude and understanding that your career is a journey. You are willing to make the right choices to support your growth. In addition to your excellent communication skills, both written and verbal, you have a proven ability to create visually compelling designs using tools like Power BI and Tableau that effectively communicate our core values. 


You build high-performing, scalable, enterprise-grade applications and teams. Your creativity and proactive nature enable you to think differently, find innovative solutions, deliver high-quality outputs, and ensure customers remain referenceable. With over eight years of experience in data engineering, you possess a strong sense of self-motivation and take ownership of your responsibilities. You prefer to work independently with little to no supervision. 


You are process-oriented, adopt a methodical approach, and demonstrate a quality-first mindset. You have led mid to large-size teams and accounts, consistently using constructive feedback mechanisms to improve productivity, accountability, and performance within the team. Your track record showcases your results-driven approach, as you have consistently delivered successful projects with customer case studies published on public platforms. Overall, you possess a unique combination of skills, qualities, and experiences that make you an ideal fit to lead our data engineering team(s).


You value inclusivity and want to join a culture that empowers you to show up as your authentic self. 


You know that success hinges on commitment, our differences make us stronger, and the finish line is always sweeter when the whole team crosses together. In your role, you should be driving the team using data, data, and more data. You will manage multiple teams, oversee agile stories and their statuses, handle escalations and mitigations, plan ahead, identify hiring needs, collaborate with recruitment teams for hiring, enable sales with pre-sales teams, and work closely with development managers/leads for solutioning and delivery statuses, as well as architects for technology research and solutions.


What You Will Do: 

  • Analyze business requirements.
  • Analyze the data model and perform gap analysis against business requirements and Power BI. Design and model the Power BI schema.
  • Transform data in Power BI, SQL, or an ETL tool.
  • Create DAX formulas, reports, and dashboards.
  • Write SQL queries and stored procedures.
  • Design effective Power BI solutions based on business requirements.
  • Manage a team of Power BI developers and guide their work.
  • Integrate data from various sources into Power BI for analysis.
  • Optimize performance of reports and dashboards for smooth usage.
  • Collaborate with stakeholders to align Power BI projects with goals.
  • Knowledge of data warehousing is a must; data engineering is a plus.
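A hedged sketch of the transform-and-aggregate work listed above, using an in-memory SQLite database as a stand-in for the actual warehouse (the `sales` table and its columns are invented for illustration):

```python
import sqlite3

# In-memory database standing in for the source the dashboard reads from
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("South", "2024-01", 120.0), ("South", "2024-02", 80.0),
     ("North", "2024-01", 200.0)],
)

# The kind of aggregate a report or dashboard tile would be built on
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 200.0), ('South', 200.0)]
```

In practice the same aggregate would be expressed as a DAX measure or a Power Query transformation; the SQL shape is what carries over.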


What we need

  • B.Tech in Computer Science or equivalent
  • Minimum 5 years of relevant experience


Why join us?

  • Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
  • Gain hands-on experience with exposure to real-world projects.
  • Opportunity to learn from experienced professionals and enhance your skills.
  • Contribute to exciting initiatives and make an impact from day one.
  • Competitive compensation and potential for growth within the company.
  • Recognized for excellence in data and AI solutions with industry awards and accolades.


VyTCDC
Posted by Gobinath Sundaram
Hyderabad, Noida, Gurugram
4 - 10 yrs
₹4L - ₹25L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
Data engineering

Required Qualifications

Bachelor’s degree or equivalent in Computer Science, Engineering, or related field; or equivalent work experience.

4-10 years of proven experience in Data Engineering

At least 4 years of experience on AWS Cloud

Strong understanding of data warehousing principles and data modeling

Expert in SQL, including advanced query optimization techniques; able to build queries and data visualizations to support business use cases and analytics.
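As an aside on the query-optimization point above: SQLite's `EXPLAIN QUERY PLAN` shows the planner switching from a full scan to an index search, which is the same kind of check one would run (with different syntax) on Redshift or RDS. The table here is hypothetical, and the exact plan strings vary by SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")

# Without an index the planner falls back to a full table scan
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()
print(plan[0][3])  # e.g. 'SCAN events'

# Adding an index lets the planner search instead of scanning
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()
print(plan[0][3])  # e.g. 'SEARCH events USING INDEX idx_events_user (user_id=?)'
```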

Proven experience on the AWS environment including access governance, infrastructure changes and implementation of CI/CD processes to support automated development and deployment

Proven experience with software tools including PySpark and Python, Power BI, QuickSight, and core AWS services such as Lambda, RDS, CloudWatch, CloudTrail, SNS, SQS, etc.

Experience building services/APIs on AWS Cloud environment.

Experience with data ingestion and curation, as well as implementation of data pipelines.


Preferred Qualifications

Experience in Informatica/ETL technology will be a plus.

Experience with AI/ML Ops – model build through implementation lifecycle in AWS Cloud environment.

Hands-on experience on Snowflake would be good to have.

Experience in DevOps and microservices would be preferred.

Experience in the financial industry is a plus.

top MNC

Agency job
via Vy Systems by Thirega Thanasekaran
Hyderabad, Chennai
10 - 15 yrs
₹8L - ₹20L / yr
Data engineering
ETL
Data Warehousing
CI/CD
Jenkins

Key Responsibilities:

  • Lead Data Engineering Team: Provide leadership and mentorship to junior data engineers and ensure best practices in data architecture and pipeline design.
  • Data Pipeline Development: Design, implement, and maintain end-to-end ETL (Extract, Transform, Load) processes to support analytics, reporting, and data science activities.
  • Cloud Architecture (GCP): Architect and optimize data infrastructure on Google Cloud Platform (GCP), ensuring scalability, reliability, and performance of data systems.
  • CI/CD Pipelines: Implement and maintain CI/CD pipelines using Jenkins and other tools to ensure the seamless deployment and automation of data workflows.
  • Data Warehousing: Design and implement data warehousing solutions, ensuring optimal performance and efficient data storage using technologies like Teradata, Oracle, and SQL Server.
  • Workflow Orchestration: Use Apache Airflow to orchestrate complex data workflows and schedule data pipeline jobs.
  • Automation with Terraform: Implement Infrastructure as Code (IaC) using Terraform to provision and manage cloud resources.
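The orchestration responsibilities above can be miniaturized as a dependency-ordered run of tasks. This is a plain-Python sketch of the scheduling idea using only the stdlib, not actual Airflow DAG code; the task names are invented:

```python
from graphlib import TopologicalSorter

# Task graph: each task lists the tasks it depends on (all names assumed)
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# Stand-in task bodies; in Airflow these would be operators
tasks = {name: (lambda n=name: f"{n} done") for name in dag}

# Run tasks in an order that respects the dependencies, as a scheduler would
run_order = list(TopologicalSorter(dag).static_order())
results = [tasks[name]() for name in run_order]
print(run_order)  # ['extract', 'transform', 'load', 'report']
```

Airflow adds retries, backfills, and distributed execution on top of exactly this dependency-ordering core.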

Share CV to:

Thirega@vysystems.com - WhatsApp - 9150033523

RAPTORX.AI

Posted by Pratyusha Vemuri
Hyderabad
5 - 7 yrs
₹10L - ₹25L / yr
Node.js
React.js
Data Visualization
Graph Databases
Neo4J

Role Overview

We are looking for a Tech Lead with a strong background in fintech, especially with experience or a strong interest in fraud prevention and Anti-Money Laundering (AML) technologies. 

This role is critical in leading our fintech product development, ensuring the integration of robust security measures, and guiding our team in Hyderabad towards delivering high-quality, secure, and compliant software solutions.

Responsibilities

  • Lead the development of fintech solutions, focusing on fraud prevention and AML, using TypeScript, React.js, Python, and SQL databases.
  • Architect and deploy secure, scalable applications on AWS or Azure, adhering to the best practices in financial security and data protection.
  • Design and manage databases with an emphasis on security, integrity, and performance, ensuring compliance with fintech regulatory standards.
  • Guide and mentor the development team, promoting a culture of excellence, innovation, and continuous learning in the fintech space.
  • Collaborate with stakeholders across the company, including product management, design, and QA, to ensure project alignment with business goals and regulatory requirements.
  • Keep abreast of the latest trends and technologies in fintech, fraud prevention, and AML, applying this knowledge to drive the company's objectives.
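For illustration of the fraud/AML focus described above, here is a toy rule-based screen (thresholds, field names, and the rules themselves are invented; production AML combines many such rules with ML models and case management):

```python
from collections import defaultdict

# Two classic heuristics: flag transactions over a threshold, and flag
# accounts that transact too many times in one window (velocity).
AMOUNT_LIMIT = 10_000
VELOCITY_LIMIT = 3

def screen(transactions):
    flagged = []
    counts = defaultdict(int)
    for tx in transactions:
        counts[tx["account"]] += 1
        if tx["amount"] > AMOUNT_LIMIT or counts[tx["account"]] > VELOCITY_LIMIT:
            flagged.append(tx["id"])
    return flagged

txs = [
    {"id": 1, "account": "A", "amount": 9_500},
    {"id": 2, "account": "A", "amount": 9_500},
    {"id": 3, "account": "A", "amount": 9_500},
    {"id": 4, "account": "A", "amount": 9_500},   # 4th hit on account A
    {"id": 5, "account": "B", "amount": 12_000},  # over the amount limit
]
print(screen(txs))  # [4, 5]
```

Note that repeated just-under-threshold transactions (structuring) are caught by the velocity rule even though no single amount trips the limit.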

Requirements

  • 5-7 years of experience in software development, with a focus on fintech solutions and a strong understanding of fraud prevention and AML strategies.
  • Expertise in TypeScript and React.js, and familiarity with Python.
  • Proven experience with SQL databases and cloud services (AWS or Azure), with certifications in these areas being a plus.
  • Demonstrated ability to design and implement secure, high-performance software architectures in the fintech domain.
  • Exceptional leadership and communication skills, with the ability to inspire and lead a team towards achieving excellence.
  • A bachelor's degree in Computer Science, Engineering, or a related field, with additional certifications in fintech, security, or compliance being highly regarded.

Why Join Us?

  • Opportunity to be at the cutting edge of fintech innovation, particularly in fraud prevention and AML.
  • Contribute to a company with ambitious goals to revolutionize software development and make a historical impact.
  • Be part of a visionary team dedicated to creating a lasting legacy in the tech industry.
  • Work in an environment that values innovation, leadership, and the long-term success of its employees.


Upgrad KnowledgeHut

Agency job
Hyderabad
3 - 5 yrs
₹8L - ₹15L / yr
Amazon Web Services (AWS)
Data Analytics
Data Visualization
PowerBI
Tableau

AWS Data Engineer:

 

Job Description

  • 3+ years of experience in AWS Data Engineering.

  • Design and build ETL pipelines & Data lakes to automate ingestion of structured and unstructured data

  • Experience working with AWS big data technologies (Redshift, S3, AWS Glue, Kinesis, Athena, DMS, EMR, and Lambda for serverless ETL)

  • Should have knowledge of SQL and NoSQL databases.

  • Have worked on batch and real-time pipelines.

  • Excellent programming and debugging skills in Scala or Python & Spark.

  • Good experience in data lake formation, Apache Spark, and Python; hands-on experience in deploying models.

  • Must have experience with the production migration process.

  • Nice to have: experience with Power BI visualization tools and connectivity.
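The batch-versus-real-time distinction in the list above, sketched in plain Python with no AWS services involved (Kinesis and Glue would play these roles in practice):

```python
# Batch: compute over the full dataset at once
readings = [3, 5, 7, 9]
batch_avg = sum(readings) / len(readings)

# Streaming: maintain a running aggregate as records arrive one at a time
def running_average(stream):
    total, count = 0, 0
    for value in stream:
        total += value
        count += 1
        yield total / count

stream_avgs = list(running_average(iter(readings)))
print(batch_avg)        # 6.0
print(stream_avgs[-1])  # 6.0 -- the running value converges to the batch answer
```

The engineering difference is that the streaming version never needs the whole dataset in memory, at the cost of maintaining state between records.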

 

Roles & Responsibilities:

  • Design, build and operationalize large scale enterprise data solutions and applications

  • Analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the AWS cloud.

  • Design and build production data pipelines from ingestion to consumption within the AWS big data architecture, using Python or Scala.

  • Perform detailed assessments of current-state data platforms and create an appropriate transition path to the AWS cloud.

 

DataMetica

Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive

Job description

Role: Lead Architecture (Spark, Scala, Big Data/Hadoop, Java)

Primary Location: India - Pune, Hyderabad

Experience: 7 - 12 years

Management Level: 7

Joining Time: Immediate joiners are preferred


  • Attend requirements-gathering workshops, estimation discussions, design meetings, and status review meetings
  • Experience in solution design and solution architecture for data engineering models, to build and implement Big Data projects on-premises and in the cloud
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture, and delivery planning activities for data-intensive, high-throughput platforms and applications
  • Ability to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning, and governance of implementation projects of any kind
  • Develop, construct, test, and maintain architectures, and run sprints for the development and rollout of functionality
  • Data analysis and code development experience, ideally in Big Data technologies: Spark, Hive, Hadoop, Java, Python, PySpark
  • Execute projects of various types, i.e. design, development, implementation, and migration of functional analytics models/business logic across architecture approaches
  • Work closely with business analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics code on any cloud platform
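On the benchmarking bullet above: the stdlib `timeit` module is a minimal way to compare candidate implementations before proposing a fix for a bottleneck. The two functions here are illustrative stand-ins, and no claim is made about which wins on a given interpreter:

```python
import timeit

# Two ways to build the same list; benchmark to find the cheaper one
def with_loop(n):
    out = []
    for i in range(n):
        out.append(i * i)
    return out

def with_comprehension(n):
    return [i * i for i in range(n)]

t_loop = timeit.timeit(lambda: with_loop(10_000), number=200)
t_comp = timeit.timeit(lambda: with_comprehension(10_000), number=200)
print(f"loop: {t_loop:.3f}s  comprehension: {t_comp:.3f}s")

# Both must produce identical results before speed matters at all
assert with_loop(100) == with_comprehension(100)
```

The same discipline scales up: establish correctness first, measure under a realistic workload, then change one thing at a time.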


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

1CH

Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
  • Expertise in designing and implementing enterprise-scale database (OLTP) and data warehouse solutions.
  • Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in T-SQL programming for complex stored procedures, functions, views, and query optimization.
  • Should be aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models, both tabular and multidimensional, using SQL Server Data Tools.
  • Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
  • Programming experience using the Python libraries NumPy, Pandas, and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end-user visualizations using Power BI, QlikView, and Tableau.
  • Experience working with all versions of SQL Server: 2005/2008/2008R2/2012/2014/2016/2017/2019.
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premises SQL Server databases to Microsoft Azure.
  • Hands-on experience using Azure Blob Storage, Azure Data Lake Storage Gen1, and Azure Data Lake Storage Gen2.
  • Performance tuning of complex SQL queries; hands-on experience using SQL Extended Events.
  • Data modeling using Power BI for ad hoc reporting.
  • Raw data load automation using T-SQL and SSIS.
  • Expert in migrating existing on-premises databases to SQL Azure.
  • Experience using U-SQL for Azure Data Lake Analytics.
  • Hands-on experience generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server.
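A plain-Python sketch of the data-preparation and cleaning step mentioned above (the record shape and the default for missing values are assumptions for illustration):

```python
def clean_record(raw, default_age=0):
    """Trim strings, normalize casing, and fill missing values."""
    return {
        "name": (raw.get("name") or "").strip().title(),
        "age": int(raw["age"]) if raw.get("age") not in (None, "") else default_age,
    }

raw_rows = [
    {"name": "  alice smith ", "age": "34"},
    {"name": "BOB", "age": ""},    # missing age -> default
    {"name": None, "age": "41"},   # missing name -> empty string
]
cleaned = [clean_record(r) for r in raw_rows]
print(cleaned)
# [{'name': 'Alice Smith', 'age': 34}, {'name': 'Bob', 'age': 0}, {'name': '', 'age': 41}]
```

In the stacks listed above the same logic would live in a Pandas pipeline, an SSIS data flow, or a Databricks notebook; the point is that cleaning rules are explicit, testable functions.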
SpringML

Posted by Kayal Vizhi
Hyderabad
4 - 11 yrs
₹8L - ₹20L / yr
Big Data
Hadoop
Apache Spark
Spark
Data Structures

SpringML is looking to hire a top-notch Senior Data Engineer who is passionate about working with data and using the latest distributed frameworks to process large datasets. As a Senior Data Engineer, your primary role will be to design and build data pipelines. You will focus on helping client projects with data integration, data prep, and implementing machine learning on datasets. In this role, you will work with some of the latest technologies, collaborate with partners on early wins, take a consultative approach with clients, interact daily with executive leadership, and help build a great company. Chosen team members will be part of the core team and play a critical role in scaling up our emerging practice.

RESPONSIBILITIES:

 

  • Ability to work as a member of a team assigned to design and implement data integration solutions.
  • Build data pipelines using standard frameworks such as Hadoop, Apache Beam, and other open-source solutions.
  • Learn quickly: the ability to understand and rapidly comprehend new areas, functional and technical, and apply detailed and critical thinking to customer solutions.
  • Propose design solutions and recommend best practices for large-scale data analysis.

 

SKILLS:

 

  • B.Tech degree in computer science, mathematics, or other relevant fields.
  • 4+ years of experience in ETL, data warehousing, visualization, and building data pipelines.
  • Strong programming skills: experience and expertise in one of the following: Java, Python, Scala, C.
  • Proficient in big data/distributed computing frameworks such as Apache Spark and Kafka.
  • Experience with Agile implementation methodologies.
Fragma Data Systems

Posted by Evelyn Charles
Remote, Bengaluru (Bangalore), Hyderabad
3 - 9 yrs
₹8L - ₹20L / yr
PySpark
Data engineering
Data Engineer
Windows Azure
ADF
Must-Have Skills:
  • Good experience in PySpark, including DataFrame core functions and Spark SQL
  • Good experience with SQL databases; able to write queries of fair complexity
  • Excellent experience in Big Data programming for data transformations and aggregations
  • Good at ELT architecture: business rules processing and data extraction from a data lake into data streams for business consumption
  • Good customer communication
  • Good analytical skills
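The DataFrame-style aggregation named in the first bullet, sketched with the stdlib rather than PySpark itself (in Spark this would be roughly `df.groupBy("dept").sum("salary")` or the equivalent Spark SQL; the rows are invented):

```python
from itertools import groupby
from operator import itemgetter

rows = [
    {"dept": "eng", "salary": 100},
    {"dept": "sales", "salary": 70},
    {"dept": "eng", "salary": 90},
]

# groupby needs sorted input -- the stdlib analogue of a shuffle before reduce
rows.sort(key=itemgetter("dept"))
totals = {
    dept: sum(r["salary"] for r in group)
    for dept, group in groupby(rows, key=itemgetter("dept"))
}
print(totals)  # {'eng': 190, 'sales': 70}
```

Spark distributes exactly this sort-then-reduce pattern across executors, which is why shuffle cost dominates wide aggregations.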
 
 
Technology Skills (Good to Have):
  • Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions: Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
  • Experience in migrating on-premises data warehouses to data platforms on the Azure cloud.
  • Designing and implementing data engineering, ingestion, and transformation functions
  • Azure Synapse or Azure SQL Data Warehouse
  • Spark on Azure, as available in HDInsight and Databricks
 
Good to Have: 
  • Experience with Azure Analysis Services
  • Experience in Power BI
  • Experience with third-party solutions like Attunity/StreamSets, Informatica
  • Experience with PreSales activities (Responding to RFPs, Executing Quick POCs)
  • Capacity Planning and Performance Tuning on Azure Stack and Spark.
INSOFE

Posted by Nitika Bist
Hyderabad, Bengaluru (Bangalore)
7 - 10 yrs
₹12L - ₹18L / yr
Big Data
Data engineering
Apache Hive
Apache Spark
Hadoop
Roles & Responsibilities:
  • Total experience of 7-10 years; should be interested in teaching and research
  • 3+ years' experience in data engineering, including data ingestion, preparation, provisioning, automated testing, and quality checks
  • 3+ years of hands-on experience with Big Data cloud platforms like AWS and GCP, data lakes, and data warehouses
  • 3+ years with Big Data and analytics technologies; experience in SQL and in writing code for the Spark engine in Python, Scala, or Java; experience in Spark and Scala
  • Experience in designing, building, and maintaining ETL systems
  • Experience with data pipeline and workflow management tools like Airflow
  • Application development background, along with knowledge of analytics libraries, open-source Natural Language Processing, and statistical and big data computing libraries
  • Familiarity with visualization and reporting tools like Tableau and Kibana
  • Should be good at technical storytelling
Please note that candidates should be interested in teaching and research work.

Qualification: B.Tech / BE / M.Sc / MBA / B.Sc; certifications in Big Data technologies and cloud platforms like AWS, Azure, and GCP will be preferred
Primary Skills: Big Data + Python + Spark + Hive + Cloud Computing
Secondary Skills: NoSQL + SQL + ETL + Scala + Tableau
Selection Process: 1 hackathon, 1 technical round, and 1 HR round
Benefit: Free-of-cost training on Data Science from top-notch professors
Milestone Hr Consultancy

Posted by Jyoti Sharma
Remote, Hyderabad
3 - 8 yrs
₹6L - ₹16L / yr
Python
Django
Data engineering
Apache Hive
Apache Spark
We are currently looking for passionate Data Engineers to join our team and mission. In this role, you will help doctors from across the world improve care and save lives, by helping extract insights and predict risk. Our Data Engineers ensure that data are ingested and prepared, ready for insights and intelligence to be derived from them. We're looking for smart individuals to join our incredibly talented team, which is on a mission to transform healthcare.

As a Data Engineer you will be engaged in some or all of the following activities:
  • Implement, test, and deploy distributed data ingestion, data processing, and feature engineering systems computing on large volumes of healthcare data, using a variety of open-source and proprietary technologies.
  • Design data architectures and schemas optimized for analytics and machine learning.
  • Implement telemetry to monitor the performance and operations of data pipelines.
  • Develop tools and libraries to implement and manage data processing pipelines, including ingestion, cleaning, transformation, and feature computation.
  • Work with large data sets, and integrate diverse data sources, data types, and data structures.
  • Work with Data Scientists, Machine Learning Engineers, and Visualization Engineers to understand data requirements and translate them into production-ready data pipelines.
  • Write and automate unit, functional, integration, and performance tests in a Continuous Integration environment.
  • Take initiative to find solutions to technical challenges for healthcare data.

You are a great match if you have some or all of the following skills and qualifications:
  • Strong understanding of database design and feature engineering to support machine learning and analytics.
  • At least 3 years of industry experience building, testing, and deploying large-scale, distributed data processing systems.
  • Proficiency in working with multiple data processing tools and query languages (Python, Spark, SQL, etc.).
  • Excellent understanding of distributed computing concepts and Big Data technologies (Spark, Hive, etc.).
  • Proficiency in performance tuning and optimization of data processing pipelines.
  • Attention to detail and focus on software quality, with experience in software testing.
  • Strong cross-discipline communication skills and teamwork.
  • Demonstrated clear and thorough logical and analytical thinking, as well as problem-solving skills.
  • Bachelor's or Master's in Computer Science or a related field.

Skills: Apache Spark, Python, Hive, SQL
Responsibility: Sr. Data Engineer