Data engineering Jobs in Pune


Apply to 13+ data engineering jobs in Pune on CutShort.io. Explore the latest data engineering job opportunities across top companies like Google, Amazon & Adobe.

ZeMoSo Technologies

Agency job via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+3 more

Work Mode: Hybrid


B.Tech, BE, M.Tech, or ME candidates only (mandatory)



Must-Have Skills:

● Educational qualification: B.Tech, BE, M.Tech, or ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python and SQL.

● Experience with Databricks and with setting up and managing data pipelines and data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note that the salary bracket varies with the candidate's experience:

- 4 to 6 yrs of experience: up to ₹22 LPA

- 5 to 8 yrs of experience: up to ₹30 LPA

- More than 8 yrs of experience: up to ₹40 LPA

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
2 - 4 yrs
Best in industry
AWS Lambda
Databricks
Database migration
Apache Kafka
Apache Spark
+3 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.

Brief Description:

We are looking for a talented Data Engineer to join our team. In this role, you will design, implement, and manage data pipelines, ensuring the accessibility and reliability of data for critical business processes. This is an exciting opportunity to work on scalable solutions that power data-driven decisions.

Skillset:

Here is a list of some of the technologies you will work with (the list below is not set in stone):

Data Pipeline Orchestration and Execution:

● AWS Glue

● AWS Step Functions

● Databricks

Change Data Capture:

● Amazon Database Migration Service

● Amazon Managed Streaming for Apache Kafka with Debezium Plugin

Batch:

● AWS Step Functions (and Glue Jobs)

● Asynchronous queueing of batch job commands with RabbitMQ to various “ETL Jobs”

● Cron and supervisord processing on dedicated job server(s): Python & PHP

Streaming:

● Real-time processing via AWS MSK (Kafka), Apache Hudi, & Apache Flink

● Near real-time processing via worker (listeners) spread over AWS Lambda, custom server (daemons) written in Python and PHP Symfony

● Languages: Python & PySpark, Unix Shell, PHP Symfony (with Doctrine ORM)

● Monitoring & Reliability: Datadog & CloudWatch
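
For flavour, here is a minimal sketch of the near-real-time listener pattern described above, using the kafka-python client. This is not NonStop io's actual code; the broker address and topic name are illustrative assumptions.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders.cdc",                            # hypothetical CDC topic
    bootstrap_servers=["localhost:9092"],    # MSK broker list in production
    group_id="etl-worker",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real worker would upsert into the warehouse/lake, emit Datadog
    # metrics, and route malformed records to a dead-letter queue.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```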

Things you will do:

● Build dashboards using Datadog and CloudWatch to ensure system health and user support

● Build schema registries that enable data governance

● Partner with end-users to resolve service disruptions and evangelize our data product offerings

● Vigilantly oversee data quality and alert upstream data producers of issues

● Support and contribute to the data platform architecture strategy, roadmap, and implementation plans to support the company’s data-driven initiatives and business objectives

● Work with Business Intelligence (BI) consumers to deliver enterprise-wide fact and dimension data product tables to enable data-driven decision-making across the organization.

● Other duties as assigned

Deqode

Posted by Shraddha Katare
Pune
2 - 5 yrs
₹3L - ₹10L / yr
PySpark
Amazon Web Services (AWS)
AWS Lambda
SQL
Data engineering
+2 more


Here is the job description:


Location: Viman Nagar, Pune

Mode: 5 days working


Required Tech Skills:


● Strong in PySpark and Python

● Good understanding of data structures

● Good at SQL queries and query optimization

● Strong fundamentals of OOP

● Good understanding of AWS Cloud and Big Data

● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DBs
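
As an illustration of the PySpark/SQL/S3 skills listed above, here is a small hedged sketch; the bucket paths and schema are invented for the example.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-metrics").getOrCreate()

# Read raw JSON events from S3 (path is a placeholder).
events = spark.read.json("s3://example-bucket/raw/events/")
events.createOrReplaceTempView("events")

# A SQL aggregation of the kind this role is expected to optimize.
daily = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM events
    GROUP BY event_date, event_type
""")

# Write curated, partitioned Parquet back to the data lake.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/daily_metrics/"))
```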


Adesso


Agency job
via HashRoot by Maheswari M
Kochi (Cochin), Chennai, Pune
3 - 6 yrs
₹4L - ₹24L / yr
Data engineering
Amazon Web Services (AWS)
Windows Azure
Snowflake
Data Transformation Tool (DBT)
+3 more

We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms such as AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality solutions.

Responsibilities:

Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool).

Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.

Develop data routes: You design scalable and powerful data management processes.

Analyze data: You derive sound findings from data sets and present them in an understandable way.

Requirements:

Requirements management and project experience: You successfully implement cloud-based data & analytics projects.

Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.

Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).

SQL know-how: You have a sound and solid knowledge of SQL.

Data management: You are familiar with topics such as master data management and data quality.

Bachelor's degree in computer science or a related field.

Strong communication and collaboration abilities to work effectively in a team environment.

 

Skills & Requirements

Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
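
To make the Snowflake/ELT stack above concrete, here is a hedged sketch of a transform step run from Python with the Snowflake connector. The account, credentials, and object names are placeholders, and dbt would generate and run similar SQL in a real project.

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="xy12345",        # placeholder account locator
    user="ELT_USER",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    # Typical ELT pattern: land raw data first, then transform it
    # inside Snowflake itself with set-based SQL.
    cur.execute("""
        CREATE OR REPLACE TABLE STAGING.ORDERS_CLEAN AS
        SELECT order_id, customer_id, TRY_TO_DATE(order_date) AS order_date
        FROM RAW.ORDERS
        WHERE order_id IS NOT NULL
    """)
finally:
    cur.close()
    conn.close()
```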

Intellikart Ventures LLP
Posted by Prajwal Shinde
Pune
2 - 5 yrs
₹9L - ₹15L / yr
Power BI
SQL
ETL
Snowflake
Apache Kafka
+1 more

Experience: 4+ years.

Location: Vadodara & Pune

Skill set: Snowflake, Power BI, ETL, SQL, data pipelines

What you'll be doing:

  • Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
  • Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion.
  • Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
  • Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
  • Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
  • Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
  • Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
  • Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
  • Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.
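
By way of illustration, here is a minimal Airflow DAG for the kind of ETL/ELT orchestration described above (Airflow 2.4+ assumed; the DAG id, schedule, and load step are invented for the example).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_to_snowflake(**context):
    # Placeholder: a real task would trigger Snowpipe or a COPY INTO job.
    print("loading batch for", context["ds"])

with DAG(
    dag_id="kafka_to_snowflake_batch",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",   # `schedule` replaces `schedule_interval` in 2.4+
    catchup=False,
) as dag:
    PythonOperator(task_id="load_to_snowflake",
                   python_callable=load_to_snowflake)
```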


What you need:

Basic Skills:


  • 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
  • Strong experience with Apache Kafka for stream processing and real-time data integration.
  • Proficiency in SQL and ETL/ELT processes.
  • Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
  • Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
  • Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
  • Knowledge of data governance, security, and compliance best practices.
  • Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
  • Ability to work in a collaborative team environment and communicate effectively with cross-functional teams


Responsibilities:

  • Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
  • Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
  • Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
  • Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion.
  • Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
  • Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
  • Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
  • Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
  • Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
  • Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform. 
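
As a toy example of the automated data-validation responsibility above, here are a couple of post-load checks that work with any DB-API cursor; the table and column names are hypothetical.

```python
def validate_load(cursor, table: str, key_column: str) -> list:
    """Return a list of human-readable data-quality problems (empty = OK)."""
    problems = []

    # Check 1: the load actually produced rows.
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    if cursor.fetchone()[0] == 0:
        problems.append(f"{table}: empty after load")

    # Check 2: no NULLs in the business key.
    cursor.execute(f"SELECT COUNT(*) FROM {table} WHERE {key_column} IS NULL")
    null_keys = cursor.fetchone()[0]
    if null_keys:
        problems.append(f"{table}: {null_keys} rows with NULL {key_column}")

    return problems

# Usage sketch: problems = validate_load(cur, "ORDERS_CLEAN", "order_id")
```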


Persistent


Agency job
via Bohiyaanam Talent Solutions LLP by TrishaDutt Tekgminus
Pune, Mumbai, Bengaluru (Bangalore), Indore, Kolkata
6 - 7 yrs
₹12L - ₹18L / yr
MuleSoft
ETL QA
Automation
Data engineering

I am looking for a MuleSoft Developer for a reputed MNC.

 

Experience: 6+ Years

Relevant experience: 4 Years

Location: Pune, Mumbai, Bangalore, Indore, Kolkata

 

Skills:

MuleSoft


Tredence
Posted by Rohit S
Chennai, Pune, Bengaluru (Bangalore), Gurugram
11 - 16 yrs
₹20L - ₹32L / yr
Data Warehouse (DWH)
Google Cloud Platform (GCP)
Amazon Web Services (AWS)
Data engineering
Data migration
+1 more
• Engages with leadership of Tredence’s clients to identify critical business problems, define the need for data engineering solutions, and build strategy and roadmap
• S/he possesses wide exposure to the complete lifecycle of data, from creation to consumption
• S/he has in the past built repeatable tools/data models to solve specific business problems
• S/he should have hands-on experience working on projects (either as a consultant or within a company) that required them to:
o Provide consultation to senior client personnel
o Implement and enhance data warehouses or data lakes
o Work with business teams, or be part of the team that implemented process re-engineering driven by data analytics/insights
• Should have a deep appreciation of how data can be used in decision-making
• Should have perspective on newer ways of solving business problems, e.g. external data, innovative techniques, newer technology
• S/he must have a solution-creation mindset and the ability to design and enhance scalable data platforms to address the business need
• Working experience with data engineering tools for one or more cloud platforms - Snowflake, AWS/Azure/GCP
• Engage with technology teams from Tredence and clients to create last-mile connectivity of the solutions
o Should have experience working with technology teams
• Demonstrated thought leadership - articles/white papers/interviews
Mandatory skills: Program Management, Data Warehouse, Data Lake, Analytics, Cloud Platform
Tredence
Bengaluru (Bangalore), Pune, Gurugram, Chennai
8 - 12 yrs
₹12L - ₹30L / yr
Snowflake schema
Snowflake
SQL
Data modeling
Data engineering
+1 more

JOB DESCRIPTION: THE IDEAL CANDIDATE WILL:

• Ensure new features and subject areas are modelled to integrate with existing structures and provide a consistent view. Develop and maintain documentation of the data architecture, data flow and data models of the data warehouse appropriate for various audiences. Provide direction on adoption of Cloud technologies (Snowflake) and industry best practices in the field of data warehouse architecture and modelling.

• Providing technical leadership to large enterprise scale projects. You will also be responsible for preparing estimates and defining technical solutions to proposals (RFPs). This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project Roles & Responsibilities.

ELIGIBILITY CRITERIA: Desired Experience/Skills:
• Must have 5+ yrs total in IT, with 2+ years' experience working as a Snowflake Data Architect and 4+ years in data warehouse, ETL and BI projects.
• Must have experience with at least two end-to-end implementations of a Snowflake cloud data warehouse and three end-to-end on-premises data warehouse implementations, preferably on Oracle.

• Expertise in Snowflake - data modelling, ELT using Snowflake SQL, implementing complex stored procedures, and standard DWH and ETL concepts
• Expertise in Snowflake advanced concepts such as setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy clone and time travel, and understanding of how to use these features
• Expertise in deploying Snowflake features such as data sharing, events and lake-house patterns
• Hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, Big Data model techniques using Python
• Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
• Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
• Experience with data security and data access controls and design
• Experience with AWS or Azure data storage and management technologies such as S3 and ADLS
• Build processes supporting data transformation, data structures, metadata, dependency and workload management
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
• Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface
• Must have expertise in AWS or Azure Platform as a Service (PaaS)
• Certified Snowflake cloud data warehouse Architect (Desirable)
• Should be able to troubleshoot problems across infrastructure, platform and application domains.
• Must have experience of Agile development methodologies
• Strong written communication skills. Is effective and persuasive in both written and oral communication
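
The Snowflake features named above (zero-copy clone, time travel, resource monitors) are plain SQL; here is a hedged illustration of issuing them from Python. All object names are invented for the example.

```python
def demo_snowflake_features(cur):
    """`cur` is an open snowflake.connector cursor; names are placeholders."""
    statements = [
        # Zero-copy clone: instant copy sharing micro-partitions with source.
        "CREATE TABLE ANALYTICS.ORDERS_DEV CLONE ANALYTICS.ORDERS",
        # Time travel: query the table as it existed one hour ago.
        "SELECT COUNT(*) FROM ANALYTICS.ORDERS AT(OFFSET => -3600)",
        # Resource monitor: cap credit consumption on a warehouse.
        "CREATE RESOURCE MONITOR RM_ETL WITH CREDIT_QUOTA = 100",
    ]
    for stmt in statements:
        cur.execute(stmt)
```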

Nice-to-have skills/qualifications:
• Bachelor's and/or master's degree in computer science or equivalent experience
• Strong communication, analytical and problem-solving skills with high attention to detail

 

About you:
• You are self-motivated, collaborative, eager to learn, and hands on
• You love trying out new apps, and find yourself coming up with ideas to improve them
• You stay ahead with all the latest trends and technologies
• You are particular about following industry best practices and have high standards regarding quality

Celebal Technologies

Posted by Payal Hasnani
Jaipur, Noida, Gurugram, Delhi, Ghaziabad, Faridabad, Pune, Mumbai
5 - 15 yrs
₹7L - ₹25L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more
Job Responsibilities:

• Project Planning and Management
o Take end-to-end ownership of multiple projects / project tracks
o Create and maintain project plans and other related documentation for project objectives, scope, schedule and delivery milestones
o Lead and participate across all the phases of software engineering, right from requirements gathering to go-live
o Lead internal team meetings on solution architecture, effort estimation, manpower planning and resource (software/hardware/licensing) planning
o Manage RIDA (Risks, Impediments, Dependencies, Assumptions) for projects by developing effective mitigation plans
• Team Management
o Act as the Scrum Master
o Conduct Scrum ceremonies like Sprint Planning, Daily Standup and Sprint Retrospective
o Set clear objectives for the project and roles/responsibilities for each team member
o Train and mentor the team on their job responsibilities and Scrum principles
o Make the team accountable for their tasks and help the team in achieving them
o Identify the requirements and come up with a plan for skill development for all team members
• Communication
o Be the single point of contact for the client in terms of day-to-day communication
o Periodically communicate project status to all the stakeholders (internal/external)
• Process Management and Improvement
o Create and document processes across all disciplines of software engineering
o Identify gaps and continuously improve processes within the team
o Encourage team members to contribute towards process improvement
o Develop a culture of quality and efficiency within the team

Must have:
• Minimum 08 years of experience (hands-on as well as leadership) in software / data engineering across multiple job functions like Business Analysis, Development, Solutioning, QA, DevOps and Project Management
• Hands-on as well as leadership experience in Big Data engineering projects
• Experience developing or managing cloud solutions using Azure or another cloud provider
• Demonstrable knowledge of Hadoop, Hive, Spark, NoSQL DBs, SQL, Data Warehousing, ETL/ELT and DevOps tools
• Strong project management and communication skills
• Strong analytical and problem-solving skills
• Strong systems-level critical thinking skills
• Strong collaboration and influencing skills

Good to have:
• Knowledge of PySpark, Azure Data Factory, Azure Data Lake Storage, Synapse Dedicated SQL Pool, Databricks, Power BI, Machine Learning and cloud infrastructure
• Background in BFSI with a focus on core banking
• Willingness to travel

Work Environment
• Customer Office (Mumbai) / Remote Work

Education
• UG: B.Tech - Computers / B.E. - Computers / BCA / B.Sc. Computer Science
Mobile Programming India Pvt Ltd

Posted by Pawan Tiwari
Remote, Bengaluru (Bangalore), Chennai, Pune, Gurugram, Mohali, Dehradun
4 - 7 yrs
₹10L - ₹15L / yr
Data engineering
Data Engineer
Django
Python

Looking for a Data Engineer for our own organization.

Notice period: 15-30 days
CTC: up to ₹15 LPA

 

Preferred Technical Expertise 

  1. Expertise in Python programming.
  2. Proficiency with the Pandas/NumPy libraries.
  3. Experience with the Django framework and API development.
  4. Proficiency in writing complex SQL queries.
  5. Hands-on experience with Apache Airflow.
  6. Experience with source code versioning tools such as Git, Bitbucket, etc.
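
A short hedged sketch combining several of the skills above (Pandas cleaning plus SQL loading); the file path, connection string, and column names are invented for illustration.

```python
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:***@localhost/analytics")  # placeholder

# Load, clean, and de-duplicate a raw export with pandas.
df = pd.read_csv("exports/users.csv", parse_dates=["signup_date"])
df = df.dropna(subset=["user_id"]).drop_duplicates("user_id")
df["email"] = df["email"].str.lower()

# Push the cleaned frame to a warehouse table.
df.to_sql("users_clean", engine, if_exists="replace", index=False)
```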

 Good to have Skills:

  1. Ability to create and maintain an optimal data pipeline architecture.
  2. Experience handling large structured data.
  3. Demonstrated ability in solutions covering data ingestion, data cleansing, ETL, data mart creation and exposing data to consumers.
  4. Experience with any cloud platform (GCP is a plus).
  5. Experience with jQuery, HTML, JavaScript and CSS is a plus.
If interested, kindly share your CV.
DataMetica

Posted by Nikita Aher
Pune, Hyderabad
7 - 12 yrs
₹12L - ₹33L / yr
Big Data
Hadoop
Spark
Apache Spark
Apache Hive
+3 more

Job description

Role: Lead Architect (Spark, Scala, Big Data/Hadoop, Java)

Primary Location: India - Pune, Hyderabad

Experience : 7 - 12 Years

Management Level: 7

Joining Time: Immediate Joiners are preferred


  • Attend requirements-gathering workshops, estimation discussions, design meetings and status review meetings
  • Experience in solution design and solution architecture for the data engineering model, to build and implement Big Data projects on-premises and in the cloud
  • Align architecture with business requirements and stabilize the developed solution
  • Ability to build prototypes to demonstrate the technical feasibility of your vision
  • Professional experience facilitating and leading solution design, architecture and delivery planning activities for data-intensive and high-throughput platforms and applications
  • Ability to benchmark systems, analyse system bottlenecks and propose solutions to eliminate them
  • Able to help programmers and project managers in the design, planning and governance of implementing projects of any kind
  • Develop, construct, test and maintain architectures, and run sprints for development and rollout of functionalities
  • Data analysis and code development experience, ideally in Big Data technologies such as Spark, Hive, Hadoop, Java, Python and PySpark
  • Execute projects of various types, i.e. design, development, implementation and migration of functional analytics models/business logic across architecture approaches
  • Work closely with Business Analysts to understand the core business problems and deliver efficient IT solutions for the product
  • Deploy sophisticated analytics programs using cloud applications
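
A small hedged sketch of the Spark-on-Hive work this role involves; the database and table names are made up for the example.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("warehouse-benchmark")
         .enableHiveSupport()   # read tables registered in the Hive metastore
         .getOrCreate())

# Aggregate a Hive table, then inspect the physical plan - a typical step
# when benchmarking systems and hunting bottlenecks.
df = spark.sql(
    "SELECT region, SUM(amount) AS total FROM sales.orders GROUP BY region"
)
df.explain()
df.show()
```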


Perks and Benefits we Provide!


  • Working with Highly Technical and Passionate, mission-driven people
  • Subsidized Meals & Snacks
  • Flexible Schedule
  • Approachable leadership
  • Access to various learning tools and programs
  • Pet Friendly
  • Certification Reimbursement Policy
  • Check out more about us on our website below!

www.datametica.com

1CH

Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Data engineering
Data engineer
ETL
SSIS
ADF
+3 more
  • Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
  • Hands-on experience in implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics) and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
  • Should be aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using Logistic and Linear Regression, Decision Tree and Random Forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models both tabular and multidimensional using SQL Server data tools.
  • Writing data preparation, cleaning and processing steps using Python, SCALA, and R.
  • Programming experience using python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premise SQL server database to Microsoft Azure.
  • Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries, hands on experience using SQL Extended events.
  • Data modeling using Power BI for Adhoc reporting.
  • Raw data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premise database to SQL Azure.
  • Experience in using U-SQL for Azure Data Lake Analytics.
  • Hands on experience in generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server
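
To ground the T-SQL items above, here is a hedged example of invoking a stored procedure on Azure SQL from Python via pyodbc; the server, database, and procedure names are placeholders.

```python
import pyodbc  # pip install pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example.database.windows.net;"
    "DATABASE=analytics;UID=etl_user;PWD=***"
)
cursor = conn.cursor()
# Hypothetical proc that rebuilds a daily fact table for the given date.
cursor.execute("EXEC dbo.usp_refresh_daily_sales @run_date = ?", "2024-01-01")
conn.commit()
cursor.close()
conn.close()
```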
MedCords

Posted by Monika Goel
Pune, Kota
3 - 10 yrs
₹10L - ₹35L / yr
Python
Go Programming (Golang)
Java
AWS CloudFormation
Object Oriented Programming (OOPs)
+8 more

Job Description-

Backend Developer- Senior

 

Experience - 3-6 years

Location: Pune/Kota

 

Minimum Qualifications:

 

- BE/B.Tech or ME/M.Tech in Computer Science.

- Must have a "can do" attitude towards work

- Must have work experience of 3-6 years

- Must have 1-2 years of programming experience in any of Python, Golang or Java

- Must have worked in a product-based company

- Ready to work in a startup and adaptable to a dynamic environment

- Ready to accept ad-hoc requirements and track them till they get implemented

- Ready to learn new technologies like Android, Angular, etc.

- Good at HTTP basics, OOP concepts, data structures, algorithms, networking and security aspects

- Ability to write clean code and maintain it

- Good at SQL/NoSQL databases

 

Preferred Qualifications:

- Experience in any good product-based startup

- Experience in working with the team and managing a small team of 2-5 associates

- Experience in being a mentor for co-developers

- Experience in designing/developing scalable systems.

- Experience in public cloud platforms services/APIs of AWS, Google Cloud, etc.

- Experience in data engineering

- Experience in SOA/Microservice architecture development

 

Responsibilities:

- Design and develop scalable services and APIs in Python/Golang

- Keep the services secure at all times

- Optimize APIs for mobile data and apps

- Use off-the-shelf and state-of-the-art services for faster product development

- Guide team members with designs

- Take end-to-end ownership of features and resolve customer issues on priority

- Mentor/guide/monitor junior developers

- Gain expertise in Android/Angular to the required extent and guide app developers while designing APIs
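
A minimal, purely illustrative Python service of the kind described above (Flask assumed; the resource model and endpoints are invented for the example).

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
RECORDS = {}  # stand-in for a real SQL/NoSQL store

@app.post("/records")
def create_record():
    payload = request.get_json(force=True)
    record_id = str(len(RECORDS) + 1)
    RECORDS[record_id] = payload
    return jsonify({"id": record_id}), 201

@app.get("/records/<record_id>")
def get_record(record_id):
    record = RECORDS.get(record_id)
    if record is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(record), 200

if __name__ == "__main__":
    app.run(port=8000)
```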

 

Opportunities in the role:

- Learn Angular, Python, Node.js, Golang, the ELK stack, MEAN/MERN

- Work on AWS, Azure, Google Cloud Platform

- Work on databases like RDS, MongoDB, Bigtable & DynamoDB, Redis, Aerospike

- Experience with SQL/ NoSQL Databases (RDS, DynamoDB, Google Datastore, Redis)

- Experience with ELK stack.

- Fast prototyping of proof-of-concept features/applications based on a brief

- Work on data engineering
