
50+ Python Jobs in Pune | Python Job openings in Pune

Apply to 50+ Python Jobs in Pune on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

Fractal Analytics

5 recruiters
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Hyderabad, Gurugram, Noida, Pune, Mumbai, Chennai, Coimbatore
4yrs+
Best in industry
Generative AI
Machine Learning (ML)
LLMOps
Large Language Models (LLM) tuning
Open-source LLMs
+15 more

Role description:

You will build curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires solid, hands-on development and engineering skills across the GenAI application lifecycle, including data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. A strong ML background combined with engineering skills is highly preferred for this LLMOps role.


Required skills:

  • 4-8 years of experience working on ML projects, including business requirement gathering, model development, training, deployment at scale, and monitoring model performance for production use cases
  • Strong knowledge of Python, NLP, data engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
  • Should have worked on proprietary and open-source large language models
  • Experience with LLM fine-tuning and creating distilled models from hosted LLMs
  • Building data pipelines for model training
  • Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
  • Experience deploying GenAI applications on cloud and on-premise at production scale
  • Experience in creating CI/CD pipelines
  • Working knowledge of Kubernetes
  • Experience in at least one cloud (AWS / GCP / Azure) to deploy AI services
  • Experience creating workable prototypes using agentic AI frameworks like CrewAI, TaskWeaver, and AutoGen
  • Experience in lightweight UI development using Streamlit or Chainlit (optional)
  • Desired: experience with open-source tools for ML development, deployment, observability, and integration
  • Background in DevOps and MLOps is a plus
  • Experience working with collaborative code versioning tools like GitHub/GitLab
  • Team player with good communication and presentation skills
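For candidates unfamiliar with the "simple RAG" pattern this role mentions, here is a minimal sketch in plain Python. The toy bag-of-words embedding, the corpus, and all function names are invented for illustration; a production system would use a real embedding model, a vector store, and an actual LLM call where the prompt is returned below.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top-k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Ground the (omitted) LLM call in the retrieved context.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The invoice API returns totals in cents.",
    "Deployments run on Kubernetes behind a gateway.",
    "Refunds are processed within five business days.",
]
print(build_prompt("how are refunds processed", docs))
```

The "advanced RAG" variants the posting refers to layer reranking, query rewriting, and guardrails on top of this same retrieve-then-generate shape.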
ZeMoSo Technologies

11 recruiters
Agency job
via TIGI HR Solution Pvt. Ltd. by Vaidehi Sarkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Chennai, Pune
4 - 8 yrs
₹10L - ₹15L / yr
Data engineering
Python
SQL
Data Warehouse (DWH)
Amazon Web Services (AWS)
+3 more

Work Mode: Hybrid


Need B.Tech, BE, M.Tech, or ME candidates (mandatory)



Must-Have Skills:

● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.

● Minimum of 3 years of proven experience as a Data Engineer.

● Strong proficiency in Python programming language and SQL.

● Experience with Databricks and with setting up and managing data pipelines and data warehouses/lakes.

● Good comprehension and critical thinking skills.


● Kindly note, the salary bracket varies according to the candidate's experience:

- 4 to 6 yrs of experience: salary up to ₹22 LPA

- 5 to 8 yrs of experience: salary up to ₹30 LPA

- More than 8 yrs of experience: salary up to ₹40 LPA

Data Axle

2 candid answers
Posted by Eman Khan
Pune
7 - 10 yrs
Best in industry
Google Cloud Platform (GCP)
ETL
Python
Java
Scala
+4 more

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 45 years in the USA. Data Axle has set up a strategic global center of excellence in Pune. This center delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases. Data Axle is headquartered in Dallas, TX, USA.


Roles and Responsibilities:

  • Design, implement, and manage scalable analytical data infrastructure, enabling efficient access to large datasets and high-performance computing on Google Cloud Platform (GCP).
  • Develop and optimize data pipelines using GCP-native services like BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Data Fusion, and Cloud Storage.
  • Work with diverse data sources to extract, transform, and load data into enterprise-grade data lakes and warehouses, ensuring high availability and reliability.
  • Implement and maintain real-time data streaming solutions using Pub/Sub, Dataflow, and Kafka.
  • Research and integrate the latest big data and visualization technologies to enhance analytics capabilities and improve efficiency.
  • Collaborate with cross-functional teams to implement machine learning models and AI-driven analytics solutions using Vertex AI and BigQuery ML.
  • Continuously improve existing data architectures to support scalability, performance optimization, and cost efficiency.
  • Enhance data security and governance by implementing industry best practices for access control, encryption, and compliance.
  • Automate and optimize data workflows to simplify reporting, dashboarding, and self-service analytics using Looker and Data Studio.
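The extract-transform-load work described above can be sketched in miniature. GCP services are not assumed available here: an inline CSV stands in for a file landing in Cloud Storage, and an in-memory SQLite table stands in for the warehouse; all data and names are invented.

```python
import csv
import io
import sqlite3

# "Extract": raw CSV as it might land in a storage bucket.
raw = """user_id,amount,currency
1,1050,USD
2,,USD
3,2000,EUR
"""

def transform(rows):
    # Drop malformed records and normalize amounts from cents to units.
    for row in rows:
        if not row["amount"]:
            continue  # would go to a dead-letter sink in a real pipeline
        yield (int(row["user_id"]), int(row["amount"]) / 100, row["currency"])

# "Load": an in-memory SQLite table stands in for the warehouse.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (user_id INT, amount REAL, currency TEXT)")
db.executemany("INSERT INTO payments VALUES (?, ?, ?)",
               transform(csv.DictReader(io.StringIO(raw))))
print(db.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone())
```

In Dataflow or BigQuery the same three stages appear as a source, a ParDo/SQL transform, and a sink, but the shape of the work is identical.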


Basic Qualifications

  • 7+ years of experience in data engineering, software development, business intelligence, or data science, with expertise in large-scale data processing and analytics.
  • Strong proficiency in SQL and experience with BigQuery for data warehousing.
  • Hands-on experience in designing and developing ETL/ELT pipelines using GCP services (Cloud Composer, Dataflow, Dataproc, Data Fusion, or Apache Airflow).
  • Expertise in distributed computing and big data processing frameworks, such as Apache Spark, Hadoop, or Flink, particularly within Dataproc and Dataflow environments.
  • Experience with business intelligence and data visualization tools, such as Looker, Tableau, or Power BI.
  • Knowledge of data governance, security best practices, and compliance requirements in cloud environments.


Preferred Qualifications:

  • Degree/Diploma in Computer Science, Engineering, Mathematics, or a related technical field.
  • Experience working with GCP big data technologies, including BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud SQL.
  • Hands-on experience with real-time data processing frameworks, including Kafka and Apache Beam.
  • Proficiency in Python, Java, or Scala for data engineering and pipeline development.
  • Familiarity with DevOps best practices, CI/CD pipelines, Terraform, and infrastructure-as-code for managing GCP resources.
  • Experience integrating AI/ML models into data workflows, leveraging BigQuery ML, Vertex AI, or TensorFlow.
  • Understanding of Agile methodologies, software development life cycle (SDLC), and cloud cost optimization strategies.
Data Axle

2 candid answers
Posted by Eman Khan
Pune
12 - 17 yrs
Best in industry
Databricks
Python
PySpark
Machine Learning (ML)
SQL
+1 more

Roles & Responsibilities:  

We are looking for a Data Scientist to join the Data Science Client Services team to continue our success in identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.


We are looking for a Manager Data Scientist who will be responsible for  

  • Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
  • Design or enhance ML workflows for data ingestion, model design, model inference, and scoring
  • Oversight of team project execution and delivery
  • Establish peer review guidelines for high-quality coding to help develop junior team members' skill growth, cross-training, and team efficiencies
  • Visualize and publish model performance results and insights to internal and external audiences


Qualifications:  

  • Master's degree in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
  • Minimum of 12 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  • Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
  • Deep knowledge of core mathematical principles relating to data science and machine learning (ML theory and best practices, feature engineering and selection, supervised and unsupervised ML, A/B testing, etc.)
  • Proficiency in Python and SQL required; PySpark/Spark experience a plus
  • Ability to conduct a productive peer review and maintain proper code structure in GitHub
  • Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
  • Working knowledge of modern CI/CD methods
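As a small worked example of the A/B testing knowledge listed above, the standard two-proportion z-test for a conversion-rate experiment fits in a few lines of plain Python (the counts below are invented):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    # Pooled two-proportion z-statistic for a conversion-rate A/B test.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: 200/10,000 conversions; variant B: 260/10,000.
z = two_proportion_z(200, 10_000, 260, 10_000)
print(round(z, 2))  # |z| > 1.96 implies significance at the 5% level
```

The same statistic is what most experimentation platforms report before any multiple-testing or sequential corrections are applied.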


This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Deqode

1 recruiter
Posted by Alisha Das
Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Pune, Hyderabad, Indore, Jaipur, Kolkata
4 - 5 yrs
₹2L - ₹18L / yr
Python
PySpark

We are looking for skilled and passionate Data Engineers with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.

Key Responsibilities:

  • Write clean, scalable, and efficient Python code.
  • Work with Python frameworks such as PySpark for data processing.
  • Design, develop, update, and maintain APIs (RESTful).
  • Deploy and manage code using GitHub CI/CD pipelines.
  • Collaborate with cross-functional teams to define, design, and ship new features.
  • Work on AWS cloud services for application deployment and infrastructure.
  • Basic database design and interaction with MySQL or DynamoDB.
  • Debugging and troubleshooting application issues and performance bottlenecks.

Required Skills & Qualifications:

  • 4+ years of hands-on experience with Python development.
  • Proficient in Python basics with a strong problem-solving approach.
  • Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
  • Good understanding of API development and integration.
  • Knowledge of GitHub and CI/CD workflows.
  • Experience in working with PySpark or similar big data frameworks.
  • Basic knowledge of MySQL or DynamoDB.
  • Excellent communication skills and a team-oriented mindset.
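To illustrate the PySpark-style processing this role calls for without assuming a Spark cluster, the same filter → map → reduceByKey shape can be written with plain Python builtins (the event data is invented):

```python
from functools import reduce

# In PySpark this would be:
#   rdd.filter(...).map(...).reduceByKey(lambda a, b: a + b)
# Plain Python stands in here, since no Spark cluster is assumed.
events = [
    {"user": "a", "ms": 120}, {"user": "b", "ms": 340},
    {"user": "a", "ms": 80},  {"user": "b", "ms": None},
]

valid = filter(lambda e: e["ms"] is not None, events)   # drop bad records
pairs = map(lambda e: (e["user"], e["ms"]), valid)      # key-value pairs

def reduce_by_key(acc: dict, pair: tuple) -> dict:
    key, value = pair
    acc[key] = acc.get(key, 0) + value
    return acc

totals = reduce(reduce_by_key, pairs, {})
print(totals)  # total latency per user
```

Spark distributes exactly these three stages across partitions; the mental model transfers directly.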

Nice to Have:

  • Experience in containerization (Docker/Kubernetes).
  • Familiarity with Agile/Scrum methodologies.


Tech Prescient

2 candid answers
3 recruiters
Posted by Ashwini Damle
Remote, Pune
7 - 9 yrs
₹15L - ₹25L / yr
Python
Django
Flask
FastAPI
Amazon Web Services (AWS)

Job Description:

We are looking for a Python Lead who has the following experience and expertise -

  • Proficiency in developing RESTful APIs using the Flask, Django, or FastAPI framework
  • Hands-on experience using ORMs for database query mapping
  • Writing unit test cases for code coverage and API testing
  • Using Postman for validating APIs
  • Experience with the Git workflow and the rest of code management, including knowledge of ticket management systems like JIRA
  • At least 2 years of experience on any cloud platform
  • Hands-on leadership experience
  • Experience communicating directly with stakeholders
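The unit-testing expectation above can be sketched with the standard library alone. The `create_user` handler and its (status, body) contract are invented for illustration; in a Flask or FastAPI service the same tests would exercise a route via the framework's test client.

```python
import unittest

def create_user(payload: dict) -> tuple[int, dict]:
    # Hypothetical request handler: validates input, returns HTTP-style result.
    if not payload.get("email") or "@" not in payload["email"]:
        return 400, {"error": "invalid email"}
    return 201, {"email": payload["email"]}

class CreateUserTests(unittest.TestCase):
    def test_valid_payload_returns_201(self):
        status, body = create_user({"email": "dev@example.com"})
        self.assertEqual(status, 201)
        self.assertEqual(body["email"], "dev@example.com")

    def test_missing_email_returns_400(self):
        status, body = create_user({})
        self.assertEqual(status, 400)
        self.assertIn("error", body)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(CreateUserTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Coverage tooling (e.g. `coverage run -m pytest`) then measures how much of the handler these cases exercise.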

Skills and Experience:

  • Good academics
  • Strong teamwork and communications
  • Advanced troubleshooting skills
  • Ready and immediately available candidates will be preferred.


Nirmitee.io

4 recruiters
Posted by Gitashri K
Pune, Mumbai
5 - 11 yrs
₹5L - ₹20L / yr
Java
Spring Boot
Microservices
Python
Angular (2+)

Should have strong hands-on experience of 8-10 yrs in Java development.

Should have strong knowledge of Java 11+, Spring, Spring Boot, Hibernate, and REST web services.

Strong knowledge of J2EE design patterns and microservices design patterns.

Should have strong hands-on knowledge of SQL / Postgres DB. Good to have exposure to NoSQL DBs.

Should have strong knowledge of AWS services (Lambda, EC2, RDS, API Gateway, S3, CloudFront, Airflow).

Good to have Python and PySpark as secondary skills.

Should have good knowledge of CI/CD pipelines.

Should be strong in writing unit test cases and debugging Sonar issues.

Should be able to lead/guide a team of junior developers.

Should be able to collaborate with BAs and solution architects to create HLD and LLD documents.

Wissen Technology

4 recruiters
Posted by Vijayalakshmi Selvaraj
Pune, Ahmedabad
4 - 9 yrs
₹10L - ₹35L / yr
Python
pytest
Amazon Web Services (AWS)
Test Automation (QA)
SQL

At least 5 years of experience in testing and developing automation tests.

A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.

Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.

Familiarity with Playwright or other browser application testing frameworks is a significant advantage.

Proficiency in object-oriented programming and principles is required.

Extensive knowledge of AWS services is essential.

Strong expertise in REST API testing and SQL is required.

A solid understanding of testing and development life cycle methodologies is necessary.

Knowledge of the financial industry and trading systems is a plus
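The data-migration testing mentioned above often reduces to proving that source and target tables hold the same rows regardless of order. A minimal, standard-library sketch of such a check (the fingerprint scheme and sample rows are invented; real suites would also compare schemas and null rates):

```python
import hashlib
import json

def table_fingerprint(rows: list[dict]) -> tuple[int, int]:
    # Order-independent fingerprint: hash each row, XOR the digests together.
    # Equal row multisets yield equal (count, xor) pairs.
    acc = 0
    for row in rows:
        digest = hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()
        acc ^= int(digest, 16)
    return len(rows), acc

source   = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
migrated = [{"id": 2, "v": "b"}, {"id": 1, "v": "a"}]  # same rows, new order

assert table_fingerprint(source) == table_fingerprint(migrated)
print("row counts and content match")
```

The same idea scales to big-data checks by computing the fingerprint per partition and combining the results.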

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
8 - 10 yrs
Best in industry
Engineering Management
JavaScript
TypeScript
AngularJS (1.x)
React.js
+7 more

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning


Roles and Responsibilities:

● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers

● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies

● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals

● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration

● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans

● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement

● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members


Requirements:

● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role

● Proven experience in architecting and building web and mobile applications at scale

● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks

● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices

● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams

● Excellent problem-solving, communication, and organizational skills

● Nice to haves:

○ Prior experience in working with startups or product-based companies

○ Experience mentoring tech leads and helping shape engineering culture

○ Exposure to AI/ML, data engineering, or platform thinking


Why Join Us?:

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethics and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

NeoGenCode Technologies Pvt Ltd
Bengaluru (Bangalore), Pune, Chennai
3 - 6 yrs
₹2L - ₹12L / yr
Test Automation (QA)
Automation
Software Testing (QA)
Generative AI
Selenium
+7 more

Job Title : Automation Quality Engineer (Gen AI)

Experience : 3 to 5+ Years

Location : Bangalore / Chennai / Pune


Role Overview :

We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.

You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.


Key Responsibilities :

  • Develop and maintain test strategies for AI models, APIs, and user interfaces.
  • Build automation frameworks and integrate into CI/CD pipelines.
  • Validate model accuracy, robustness, and monitor model drift.
  • Perform regression, performance, load, and security testing.
  • Log and track issues; collaborate with developers to resolve them.
  • Ensure compliance with data privacy and ethical AI standards.
  • Document QA processes and testing outcomes.
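The model-drift monitoring responsibility above can be illustrated with a deliberately crude check: flag drift when the live metric's mean moves too many baseline standard deviations. The scores and threshold are invented, and real suites would use proper divergence measures (PSI, KL) over feature distributions rather than a mean shift.

```python
import statistics

def drift_alert(baseline: list[float], live: list[float],
                threshold: float = 2.0) -> bool:
    # Alert when the live mean shifts more than `threshold` baseline
    # standard deviations away from the baseline mean.
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) > threshold * sigma

baseline_scores = [0.70, 0.72, 0.71, 0.69, 0.73]
print(drift_alert(baseline_scores, [0.70, 0.71, 0.72]))  # stable window
print(drift_alert(baseline_scores, [0.40, 0.45, 0.42]))  # drifted window
```

Wired into a CI/CD pipeline, a check like this gates promotion of a retrained model or pages the team when production inputs shift.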

Mandatory Skills :

  • Test Automation : Selenium, Playwright, or DeepEval
  • Programming/Scripting : Python, JavaScript
  • API Testing : Postman, REST Assured
  • Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
  • Performance Testing : JMeter
  • Bug Tracking : Azure DevOps
  • Methodologies : Agile delivery experience
  • Soft Skills : Strong communication and problem-solving abilities
NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
2 - 4 yrs
Best in industry
AWS Lambda
Databricks
Database migration
Apache Kafka
Apache Spark
+3 more


Brief Description:

We are looking for a talented Data Engineer to join our team. In this role, you will design, implement, and manage data pipelines, ensuring the accessibility and reliability of data for critical business processes. This is an exciting opportunity to work on scalable solutions that power data-driven decisions

Skillset:

Here is a list of some of the technologies you will work with (the list below is not set in stone)

Data Pipeline Orchestration and Execution:

● AWS Glue

● AWS Step Functions

● Databricks

Change Data Capture:

● Amazon Database Migration Service

● Amazon Managed Streaming for Apache Kafka with Debezium Plugin

Batch:

● AWS step functions (and Glue Jobs)

● Asynchronous queueing of batch job commands with RabbitMQ to various “ETL Jobs”

● Cron and supervisord processing on dedicated job server(s): Python & PHP

Streaming:

● Real-time processing via AWS MSK (Kafka), Apache Hudi, & Apache Flink

● Near real-time processing via worker (listeners) spread over AWS Lambda, custom server (daemons) written in Python and PHP Symfony

● Languages: Python & PySpark, Unix Shell, PHP Symfony (with Doctrine ORM)

● Monitoring & Reliability: Datadog & CloudWatch

Things you will do:

● Build dashboards using Datadog and CloudWatch to ensure system health and user support

● Build schema registries that enable data governance

● Partner with end-users to resolve service disruptions and evangelize our data product offerings

● Vigilantly oversee data quality and alert upstream data producers of issues

● Support and contribute to the data platform architecture strategy, roadmap, and implementation plans to support the company’s data-driven initiatives and business objectives

● Work with Business Intelligence (BI) consumers to deliver enterprise-wide fact and dimension data product tables to enable data-driven decision-making across the organization.

● Other duties as assigned

Xebia IT Architects

2 recruiters
Posted by Vijay S
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Chennai, Bhopal, Jaipur
10 - 15 yrs
₹30L - ₹40L / yr
Spark
Google Cloud Platform (GCP)
Python
Apache Airflow
PySpark
+1 more

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.


  • Shift: 2 PM to 11 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or those with a notice period of up to 30 days


Key Responsibilities:

  • Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
  • Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
  • Ensure data integrity, consistency, and availability across all systems.
  • Collaborate with data engineers, analysts, and stakeholders to optimize performance.
  • Document standards and best practices for data engineering workflows.
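The Raw → Silver → Gold layering named above can be sketched in plain Python; Databricks or Spark would apply the same steps to DataFrames across a cluster. All records and field names are invented.

```python
# Raw layer: data exactly as ingested, including a bad record.
raw = [
    {"ts": "2024-01-01", "city": " pune ", "temp_c": "21"},
    {"ts": "2024-01-01", "city": "PUNE",   "temp_c": "23"},
    {"ts": "2024-01-02", "city": "Pune",   "temp_c": None},  # bad record
]

# Silver layer: cleaned, typed records with invalid rows dropped.
silver = [
    {"ts": r["ts"], "city": r["city"].strip().title(), "temp_c": int(r["temp_c"])}
    for r in raw if r["temp_c"] is not None
]

# Gold layer: business-level aggregate ready for reporting.
gold: dict[str, list[int]] = {}
for r in silver:
    gold.setdefault(r["city"], []).append(r["temp_c"])
gold = {city: sum(v) / len(v) for city, v in gold.items()}
print(gold)  # average temperature per city
```

Keeping each layer materialized separately, as the framework above does, lets bad Silver logic be replayed from Raw without re-ingesting source data.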

Required Experience:


  • 7-8 years of experience in data engineering, architecture, and pipeline development.
  • Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
  • Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
  • Understanding of Data Lake table formats (Delta, Iceberg, etc.).
  • Proficiency in Python for scripting and automation.
  • Strong problem-solving skills and collaborative mindset.


⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Tech Prescient

2 candid answers
3 recruiters
Posted by Ashwini Damle
Pune
8 - 10 yrs
₹15L - ₹30L / yr
Python
Flask
FastAPI
Amazon Web Services (AWS)

Job Title- Technical Lead

Job location- Pune/Hybrid

Availability- Immediate Joiners

Experience Range- 8-10 yrs

Desired skills - Python, Flask/FastAPI/Django, SQL/NoSQL, AWS/Azure


We are looking for a Technical Lead (Python/Flask/FastAPI/Django/AWS/Azure Cloud) who has worked on the modern full stack to deliver software products and solutions. He/She should have experience in leading from the front, handling customer situations and internal teams, anchoring project communications, and delivering an outstanding work experience to our customers.


  • 8+ years of relevant software design and development experience building cloud-native applications using Python and JavaScript stack.


  • A thorough understanding of deploying to at least one of the Cloud platforms (AWS or Azure) is required. Knowledge of Kubernetes is an added advantage.


  • Experience with Microservices architecture and serverless deployments.


  • Well-versed with RESTful services and building scalable API architectures using any Python framework.


  • Hands-on with Frontend technologies using either Angular or React.


  • Experience managing distributed delivery teams, tech leadership, ideating with the customer leadership, design discussions and code reviews to deliver quality software products.


  • Good attitude and passion for learning new technologies on the job.


  • Good communication and leadership skills. Ability to lead the internal team as well as customer communication (email/calls).
Deqode

1 recruiter
Posted by Shraddha Katare
Pune
2 - 5 yrs
₹3L - ₹10L / yr
PySpark
Amazon Web Services (AWS)
AWS Lambda
SQL
Data engineering
+2 more


Here is the Job Description - 


Location -- Viman Nagar, Pune

Mode - 5 Days Working


Required Tech Skills:


 ● Strong at PySpark, Python

 ● Good understanding of Data Structure 

 ● Good at SQL query/optimization 

 ● Strong fundamentals of OOPs programming 

 ● Good understanding of AWS Cloud, Big Data. 

 ● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB  


Gruve
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8yrs+
Up to ₹50L / yr (varies)
DevOps
CI/CD
Git
Kubernetes
Ansible
+7 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded, early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. 

Key Roles & Responsibilities:

  • Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
  • Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
  • Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
  • Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
  • Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
  • Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
  • Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability. 


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
  • Strong expertise in CI/CD pipelines, version control (Git), and release automation.
  •  Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
  • Proficiency in Terraform and Ansible for infrastructure automation.
  • Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
  • Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Strong scripting and automation skills in Python, Bash, or Go.


Preferred Qualifications  

  • Experience in FinOps (cloud cost optimization) and Kubernetes cluster scaling.
  • Exposure to serverless architectures and event-driven workflows.
  • Contributions to open-source DevOps projects. 
Tekdi Technologies Pvt. Ltd.
Posted by Anuja Gangurde
Pune
6 - 8 yrs
Best in industry
Python
Java
PostgreSQL

Responsibilities:

  • Plan and design Python-based microservices as per given requirements.
  • Write scalable and efficient code while managing co-workers, juniors, and cross-team communication for timely delivery.
  • Conduct code reviews and manage development deployments.
  • Provide support for UAT and production deployments.
  • Participate in client calls and handle grievances related to built services.
  • Work in an Agile development environment, following best practices.
  • Stay updated with the latest technology trends in software and hardware and be capable of working across multiple technologies.

Requirements:

Must-Have:

  • Languages & Frameworks: Python, FastAPI, Django, familiarity with web frameworks for development.
  • Databases: PostgreSQL (schema designing, CRUD operations, ORM), Elasticsearch (loading/unloading data, query building), Redis.
  • Workflow Automation & Broker: Working knowledge of Apache Airflow and Kafka.
  • Cloud & DevOps: AWS S3, EC2, Docker, Docker-Compose.
  • Soft Skills: Strong communication skills (English, Hindi), ability to create flow diagrams, UML diagrams, and deliver technical presentations.
  • Development Methodologies: Experience working in an Agile environment.
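Workflow tools like Apache Airflow schedule tasks as a DAG; the dependency-ordering idea behind that can be sketched with Python's standard library (the task names here are hypothetical, not Airflow API calls):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: extract feeds transform, which feeds validate and load.
deps = {
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"transform", "validate"},
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # e.g. ['extract', 'transform', 'validate', 'load']
```

Airflow layers scheduling, retries, and operators on top of exactly this ordering guarantee.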

Good to Have:

  • Languages & Frameworks: Java, Spring Boot.
  • Databases: MongoDB.
  • Cloud Services: AWS Lambda.
  • AI/ML & Big Data:
  • Experience with Machine Learning libraries and frameworks (TensorFlow, PyTorch, Keras).
  • Strong understanding of Data Structures, Algorithms, and Software Engineering principles.
  • Familiarity with Natural Language Processing (NLP) tools (SpaCy, NLTK, Hugging Face).
  • Working knowledge of Spark with PySpark for big data processing.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Pune, Bengaluru (Bangalore), Mumbai
4 - 10 yrs
Best in industry
skill iconPython
skill iconReact.js
skill iconRedux/Flux
skill iconDjango
skill iconFlask

About the Role:

We are looking for a skilled Full Stack Developer (Python & React) to join our Data & Analytics team. You will design, develop, and maintain scalable web applications while collaborating with cross-functional teams to enhance our data products.


Responsibilities:

  • Develop and maintain web applications (front-end & back-end).
  • Write clean, efficient code in Python and TypeScript (React).
  • Design and implement RESTful APIs.
  • Work with Snowflake, NoSQL, and streaming data platforms.
  • Build reusable components and collaborate with designers & developers.
  • Participate in code reviews and improve development processes.
  • Debug and resolve software defects while staying updated with industry trends.

Qualifications:

  • Passion for immersive user experiences and data visualization tools (e.g., Apache Superset).
  • Proven experience as a Full Stack Developer.
  • Proficiency in Python (Django, Flask) and JavaScript/TypeScript (React).
  • Strong understanding of HTML, CSS, SQL/NoSQL, and Git.
  • Knowledge of software development best practices and problem-solving skills.
  • Experience with AWS, Docker, Kubernetes, and FaaS.
  • Knowledge of Terraform and testing frameworks (Playwright, Jest, pytest).
  • Familiarity with Agile methodologies and open-source contributions.
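As a small illustration of the testing expectation above (pytest), here is a hedged sketch of a JSON-serialization helper of the kind a Python/React API boundary might use, with a pytest-style unit test; the payload shape is invented for the example.

```python
from dataclasses import dataclass, asdict

@dataclass
class Metric:
    name: str
    value: float

def to_api_payload(metrics):
    """Serialize metrics into the JSON-ready shape a REST endpoint might return."""
    return {"count": len(metrics), "items": [asdict(m) for m in metrics]}

def test_to_api_payload():
    # pytest-style test: discoverable by pytest, but also runnable directly.
    payload = to_api_payload([Metric("latency_ms", 12.5)])
    assert payload["count"] == 1
    assert payload["items"][0] == {"name": "latency_ms", "value": 12.5}

test_to_api_payload()
print("ok")
```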


Read more
Jio Tesseract
TARUN MISHRA
Posted by TARUN MISHRA
Bengaluru (Bangalore), Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Navi Mumbai, Kolkata, Rajasthan
5 - 24 yrs
₹9L - ₹70L / yr
skill iconC
skill iconC++
Visual C++
Embedded C++
Artificial Intelligence (AI)
+32 more

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.


Mon-Fri, in-office role with excellent perks and benefits!


Position Overview

We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.


Key Responsibilities:

1. System Architecture & Design

● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.

● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.

● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.


2. Perception & AI Integration

● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.

● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.

● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.


3. Embedded & Real-Time Systems

● Design high-performance embedded software stacks for real-time robotic control and autonomy.

● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.

● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.


4. Robotics Simulation & Digital Twins

● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.

● Leverage synthetic data generation (Omniverse Replicator) for training AI models.

● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.


5. Navigation & Motion Planning

● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.

● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.

● Implement reinforcement learning-based policies using Isaac Gym.


6. Performance Optimization & Scalability

● Ensure low-latency AI inference and real-time execution of robotics applications.

● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.

● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
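The benchmarking point above can be illustrated with a minimal Python timing decorator; on Jetson-class devices, production profiling would rely on tools like NVIDIA Nsight Systems rather than this sketch, and the workload here is a stand-in.

```python
import time
from functools import wraps

def timed(fn):
    """Report wall-clock time per call; a micro-benchmarking aid, not a profiler."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{fn.__name__}: {elapsed_ms:.3f} ms")
        return result
    return wrapper

@timed
def dot(a, b):
    # Stand-in workload; on real hardware this would be a GPU kernel launch.
    return sum(x * y for x, y in zip(a, b))

print(dot([1, 2, 3], [4, 5, 6]))  # 32
```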


Required Qualifications:

● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.

● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.

● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.

● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.

● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.

● Strong background in robotic perception, planning, and real-time control.

● Experience with cloud-edge AI deployment and scalable architectures.


Preferred Qualifications

● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym

● Knowledge of robot kinematics, control systems, and reinforcement learning

● Expertise in distributed computing, containerization (Docker), and cloud robotics

● Familiarity with automotive, industrial automation, or warehouse robotics

● Experience designing architectures for autonomous systems or multi-robot systems.

● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics

● Experience with microservices or service-oriented architecture (SOA)

● Knowledge of machine learning and AI integration within robotic systems

● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)

Read more
Jio Tesseract
TARUN MISHRA
Posted by TARUN MISHRA
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Bengaluru (Bangalore), Pune, Hyderabad, Mumbai, Navi Mumbai
5 - 40 yrs
₹8.5L - ₹75L / yr
Microservices
Architecture
API
NOSQL Databases
skill iconMongoDB
+33 more

JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.


Mon-Fri, In office role with excellent perks and benefits!


Key Responsibilities:

1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.

2. Build and implement scalable and robust microservices and integrate API gateways.

3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).

4. Implement real-time data pipelines using Kafka.

5. Collaborate with front-end developers to ensure seamless integration of backend services.

6. Write clean, reusable, and efficient code following best practices, including design patterns.

7. Troubleshoot, debug, and enhance existing systems for improved performance.


Mandatory Skills:

1. Proficiency in at least one backend technology: Node.js, Python, or Java.


2. Strong experience in:

i. Microservices architecture,

ii. API gateways,

iii. NoSQL databases (e.g., MongoDB, DynamoDB),

iv. Kafka

v. Data structures (e.g., arrays, linked lists, trees).


3. Frameworks:

i. If Java : Spring framework for backend development.

ii. If Python: FastAPI/Django frameworks for AI applications.

iii. If Node: Express.js for Node.js development.


Good to Have Skills:

1. Experience with Kubernetes for container orchestration.

2. Familiarity with in-memory databases like Redis or Memcached.

3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
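To illustrate the publish/subscribe idea behind brokers like Kafka mentioned above, here is a toy in-memory sketch in Python; it omits everything that makes Kafka valuable at scale (persistence, partitions, consumer groups), and the topic name and messages are invented.

```python
from collections import defaultdict, deque

class MiniBroker:
    """Toy in-memory publish/subscribe broker. Kafka adds durability,
    partitioning, and consumer groups on top of this basic idea."""

    def __init__(self):
        self._topics = defaultdict(deque)

    def publish(self, topic: str, message: dict) -> None:
        self._topics[topic].append(message)

    def poll(self, topic: str) -> list:
        """Drain and return all pending messages for a topic."""
        queue = self._topics[topic]
        drained = list(queue)
        queue.clear()
        return drained

broker = MiniBroker()
broker.publish("orders", {"id": 1, "status": "created"})
broker.publish("orders", {"id": 2, "status": "created"})
print(broker.poll("orders"))  # two messages, in publish order
```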

Read more
Nirmitee.io

at Nirmitee.io

4 recruiters
Gitashri K
Posted by Gitashri K
Pune
5 - 10 yrs
₹8L - ₹15L / yr
skill iconPython
PySpark
skill iconAmazon Web Services (AWS)
CI/CD
skill iconGitHub

About the Role:

We are seeking a skilled Python Backend Developer to join our dynamic team. This role focuses on designing, building, and maintaining efficient, reusable, and reliable code that supports both monolithic and microservices architectures. The ideal candidate will have a strong understanding of backend frameworks and architectures, proficiency in asynchronous programming, and familiarity with deployment processes. Experience with AI model deployment is a plus.

Overall 5+ years of IT experience, with a minimum of 5 years of experience in Python and an open-source web framework (Django), along with AWS experience.


Key Responsibilities:

- Develop, optimize, and maintain backend systems using Python, PySpark, and FastAPI.

- Design and implement scalable architectures, including both monolithic and microservices.

- 3+ years of working experience in AWS (Lambda, Serverless, Step Functions, and EC2).

- Deep knowledge of the Python Flask/Django frameworks.

- Good understanding of REST APIs.

- Sound knowledge of databases.

- Excellent problem-solving and analytical skills.

- Leadership skills, good communication skills, and an interest in learning modern technologies.

- Apply design patterns (MVC, Singleton, Observer, Factory) to solve complex problems effectively.

- Work with web servers (Nginx, Apache) and deploy web applications and services.

- Create and manage RESTful APIs; familiarity with GraphQL is a plus.

- Use asynchronous programming techniques (ASGI, WSGI, async/await) to enhance performance.

- Integrate background job processing with Celery and RabbitMQ, and manage caching mechanisms using Redis and Memcached.

- (Optional) Develop containerized applications using Docker and orchestrate deployments with Kubernetes.
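The asynchronous-programming responsibility above can be sketched with Python's asyncio; the task names and delays here are placeholders for real I/O-bound calls.

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    """Stand-in for an I/O-bound call such as a DB query or HTTP request."""
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main():
    # gather() runs the awaitables concurrently, so total wall time is
    # roughly the slowest task rather than the sum of all delays.
    return await asyncio.gather(fetch("db", 0.05), fetch("cache", 0.01))

print(asyncio.run(main()))  # ['db:done', 'cache:done']
```

This is the same concurrency model ASGI servers expose to FastAPI and async Django views.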


Required Skills:

- Languages & Frameworks: Python, Django, AWS

- Backend Architecture & Design: Strong knowledge of monolithic and microservices architectures, design patterns, and asynchronous programming.

- Web Servers & Deployment: Proficient in Nginx and Apache, with experience in RESTful API design and development. GraphQL experience is a plus.

- Background Jobs & Task Queues: Proficiency in Celery and RabbitMQ, with experience in caching (Redis, Memcached).

- Additional Qualifications: Knowledge of Docker and Kubernetes (optional), with any exposure to AI model deployment considered a bonus.


Qualifications:

- Bachelor’s degree in Computer Science, Engineering, or a related field.

- 5+ years of experience in backend development using Python and Django and AWS.

- Demonstrated ability to design and implement scalable and robust architectures.

- Strong problem-solving skills, attention to detail, and a collaborative mindset.


Preferred:

- Experience with Docker/Kubernetes for containerization and orchestration.

- Exposure to AI model deployment processes.

Read more
Data Axle

at Data Axle

2 candid answers
Eman Khan
Posted by Eman Khan
Pune
6 - 9 yrs
Best in industry
Azure
skill iconMachine Learning (ML)
databricks
skill iconPython
SQL
+2 more

About Data Axle:

Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business and consumer databases.


Data Axle Pune is pleased to have achieved certification as a Great Place to Work!


Roles & Responsibilities:

We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.


We are looking for a Senior Data Scientist who will be responsible for:

  1. Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
  2. Design or enhance ML workflows for data ingestion, model design, model inference and scoring
  3. Oversight on team project execution and delivery
  4. Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
  5. Visualize and publish model performance results and insights to internal and external audiences


Qualifications:

  1. Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
  2. Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
  3. Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
  4. Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
  5. Proficiency in Python and SQL required; PySpark/Spark experience a plus
  6. Ability to conduct a productive peer review and proper code structure in Github
  7. Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
  8. Working knowledge of modern CI/CD methods

This position description is intended to describe the duties most frequently performed by an individual in this position.


It is not intended to be a complete list of assigned duties but to describe a position level.
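As a small, hedged example of the model-evaluation skills listed in the qualifications above, here is a pure-Python ROC AUC computation using the rank-sum formulation; in practice one would use a library such as scikit-learn, and this version assumes untied scores for brevity.

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.
    Assumes no tied scores; libraries handle ties with average ranks."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # 1-based ranks of the positive examples when sorted by score.
    rank_sum = sum(
        rank for rank, (_, label) in enumerate(pairs, start=1) if label == 1
    )
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```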

Read more
Data Axle

at Data Axle

2 candid answers
Eman Khan
Posted by Eman Khan
Pune
6 - 9 yrs
Best in industry
skill iconPython
skill iconDjango
skill iconFlask
skill iconReact.js
GraphQL
+2 more

General Summary:

The Senior Software Engineer will be responsible for designing, developing, testing, and maintaining full-stack solutions. This role involves hands-on coding (80% of the time), performing peer code reviews, handling pull requests, and engaging in architectural discussions with stakeholders. You'll contribute to the development of large-scale, data-driven SaaS solutions using best practices like TDD, DRY, KISS, YAGNI, and SOLID principles. The ideal candidate is an experienced full-stack developer who thrives in a fast-paced, Agile environment.


Essential Job Functions:

  • Design, develop, and maintain scalable applications using Python and Django.
  • Build responsive and dynamic user interfaces using React and TypeScript.
  • Implement and integrate GraphQL APIs for efficient data querying and real-time updates.
  • Apply design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure maintainable and scalable code.
  • Develop and manage RESTful APIs for seamless integration with third-party services.
  • Design, optimize, and maintain SQL databases like PostgreSQL, MySQL, and MSSQL.
  • Use version control systems (primarily Git) and follow collaborative workflows.
  • Work within Agile methodologies such as Scrum or Kanban, participating in daily stand-ups, sprint planning, and retrospectives.
  • Write and maintain unit tests, integration tests, and end-to-end tests, following Test-Driven Development (TDD).
  • Collaborate with cross-functional teams, including Product Managers, DevOps, and UI/UX Designers, to deliver high-quality products
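One of the design patterns named above (Observer) can be sketched in a few lines of Python; the event name and payload are invented for illustration.

```python
class EventBus:
    """Minimal Observer pattern: handlers subscribe per event name and are
    notified on emit. Frameworks add threading, error isolation, etc."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, event: str, handler) -> None:
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, payload) -> None:
        for handler in self._handlers.get(event, []):
            handler(payload)

received = []
bus = EventBus()
bus.subscribe("user.created", received.append)
bus.emit("user.created", {"id": 42})
print(received)  # [{'id': 42}]
```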


Essential functions are the basic job duties that an employee must be able to perform, with or without reasonable accommodation. The function is considered essential if the reason the position exists is to perform that function.


Supportive Job Functions:

  • Remain knowledgeable of new emerging technologies and their impact on internal systems.
  • Available to work on call when needed.
  • Perform other miscellaneous duties as assigned by management.


These tasks do not meet the Americans with Disabilities Act definition of essential job functions and usually equal 5% or less of time spent. However, these tasks still constitute important performance aspects of the job.


Skills

  • The ideal candidate must have strong proficiency in Python and Django, with a solid understanding of Object-Oriented Programming (OOP) principles.
  • Expertise in JavaScript, TypeScript, and React is essential, along with hands-on experience in GraphQL for efficient data querying.
  • The candidate should be well-versed in applying design patterns such as Factory, Singleton, Observer, Strategy, and Repository to ensure scalable and maintainable code architecture.
  • Proficiency in building and integrating REST APIs is required, as well as experience working with SQL databases like PostgreSQL, MySQL, and MSSQL.
  • Familiarity with version control systems (especially Git) and working within Agile methodologies like Scrum or Kanban is a must.
  • The candidate should also have a strong grasp of Test-Driven Development (TDD) principles.
  • In addition to the above, it is good to have experience with Next.js for server-side rendering and static site generation, as well as knowledge of cloud infrastructure such as AWS or GCP.
  • Familiarity with NoSQL databases, CI/CD pipelines using tools like GitHub Actions or Jenkins, and containerization technologies like Docker and Kubernetes is highly desirable.
  • Experience with microservices architecture and event-driven systems (using tools like Kafka or RabbitMQ) is a plus, along with knowledge of caching technologies such as Redis or Memcached.
  • Understanding OAuth2.0, JWT, and SSO authentication mechanisms, and adhering to API security best practices following OWASP guidelines, is beneficial.
  • Additionally, experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation, and familiarity with performance monitoring tools such as New Relic or Datadog will be considered an advantage.


Abilities:

  • Ability to organize, prioritize, and handle multiple assignments on a daily basis.
  • Strong and effective interpersonal and communication skills.
  • Ability to interact professionally with a diverse group of clients and staff.
  • Must be able to work flexible hours on-site and remote.
  • Must be able to coordinate with other staff and provide technological leadership.
  • Ability to work in a complex, dynamic team environment with minimal supervision.
  • Must possess good organizational skills.


Education, Experience, and Certification:

  • Associate or bachelor’s degree preferred (Computer Science, Engineering, etc.), but equivalent work experience in a technology-related area may substitute.
  • 2+ years of relevant experience required.
  • Experience using version control daily in a developer environment.
  • Experience with Python, JavaScript, and React is required.
  • Experience using rapid development frameworks like Django or Flask.
  • Experience using front end build tools.


Scope of Job:

  1. No direct reports.
  2. No supervisory responsibility.
  3. Consistent work week with minimal travel
  4. Errors may be serious, costly, and difficult to discover.
  5. Contact with others inside and outside the company is regular and frequent.
  6. Some access to confidential data.
Read more
Reliable Group IN
Rahul Singh
Posted by Rahul Singh
Pune
8 - 14 yrs
₹10L - ₹30L / yr
skill iconPython
skill iconDjango
skill iconPostgreSQL
MySQL
skill iconMongoDB
+1 more

Job Summary: 


We are seeking a highly skilled and experienced Backend Lead to design, develop, and maintain scalable, reliable, and high-performance backend solutions for our multi-tenant SaaS products. The ideal candidate will have a deep understanding of distributed systems, microservices, and cloud-based architectures, with a proven track record of handling production issues at scale. You will collaborate closely with the AI Lead and Frontend Lead to ensure seamless integration of backend services, AI/ML pipelines, and front-end functionalities, thereby delivering a robust, secure, and feature-rich experience to our customers.


Key Responsibilities:


1. System Architecture & Design

  • Define and implement the overall backend architecture for multi-tenant SaaS applications, ensuring scalability, high availability, security, and compliance.
  • Design microservices and RESTful/GraphQL APIs and Websockets in alignment with front-end and AI requirements.
  • Establish design patterns and best practices for code modularity, performance optimization, and scalability.

2. Scalability & Performance

  • Identify performance bottlenecks and oversee optimization strategies (caching, load balancing, message queues, etc.).
  • Implement and maintain monitoring/observability solutions (e.g., Prometheus, Grafana, Loki, ELK Stack) to ensure real-time system health insights and rapid incident response.
  • Establish performance baselines, conduct stress tests, and implement disaster recovery strategies.
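As one concrete instance of the bottleneck-mitigation strategies above (alongside caching, load balancing, and queues), here is a hedged sketch of a token-bucket rate limiter in Python; the rate and capacity values are arbitrary.

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills at `rate` tokens/second up to `capacity`;
    each allowed request spends one token."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 passes, third is throttled
```

In a multi-tenant SaaS, a per-tenant instance of this idea (usually backed by Redis rather than process memory) protects shared backends from noisy neighbours.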


3. Production Stability & Issue Resolution

  • Proactively monitor production environments, anticipating potential failures and bottlenecks.
  • Triage, diagnose, and resolve production incidents with minimal downtime, using robust logging, tracing, and on-call rotation strategies.
  • Drive root cause analysis and post-mortems for production incidents, implementing preventative measures.

4. Collaboration & Cross-Functional Coordination

  • Work closely with the AI team to integrate MLOps pipelines, ensuring smooth data flow and secure model deployment.
  • Collaborate with the Frontend team to provide well-defined API contracts, enabling efficient UI/UX development.
  • Partner with DevOps to define CI/CD pipelines, container orchestration (Docker, Kubernetes), and infrastructure-as-code (Terraform, CloudFormation) practices.


5. Team Leadership & Mentorship

  • Lead and mentor a team of backend developers, setting clear goals and providing guidance on best practices.
  • Perform code reviews to ensure high code quality, maintainability, and adherence to design standards.
  • Foster a culture of continuous learning, encouraging the adoption of new technologies, tools, and methodologies.


6. Security & Compliance

  • Implement secure coding practices and follow industry standards (e.g., OWASP Top 10).
  • Work with compliance teams to ensure data privacy and regulatory requirements (HIPAA) are met.
  • Oversee authentication/authorization frameworks (OAuth, JWT, etc.) and data encryption at rest and in transit (SSL/TLS).


7. Documentation

  • Maintain comprehensive technical documentation, including architecture diagrams, APIs, database schemas, and operational runbooks.
  • Facilitate knowledge sharing sessions and handovers to ensure smooth onboarding of new team members.


Qualifications


Education:

Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.


Experience:

  • 8+ years of experience in backend or full-stack development, with 3+ years in a technical lead or architect role.
  • Demonstrated history of leading and delivering large-scale, distributed systems in production environments.


Technical Expertise:

  • Languages & Frameworks: Proficiency in modern backend languages (e.g., Python) and associated frameworks (Django/FastAPI, etc.).
  • Database Systems: Strong knowledge of both SQL (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra) databases, including data modeling and query optimization.
  • Microservices & Architecture: Hands-on experience with microservices, containerization (Docker, Kubernetes), and service mesh architectures.
  • Cloud Platforms: Experience with cloud providers like AWS and Azure for deployment, scaling, and monitoring.
  • CI/CD & DevOps: Familiarity with CI/CD pipelines, Git workflows, infrastructure-as-code, and automated testing.
  • Monitoring & Observability: Proficiency with tools like Prometheus, Grafana, ELK Stack, Loki for real-time monitoring and log analysis.


Soft Skills:

  • Leadership: Ability to lead teams, manage conflict, and drive a vision for the backend architecture.
  • Communication: Strong written and verbal communication skills to coordinate with cross-functional teams (AI, Frontend, QA, Product).
  • Problem-Solving: Analytical mindset for troubleshooting complex issues in high- pressure environments.
  • Collaboration: Demonstrated capability to work seamlessly across multiple teams and stakeholders.


Read more
OnActive
Mansi Gupta
Posted by Mansi Gupta
Gurugram, Pune, Bengaluru (Bangalore), Chennai, Bhopal, Hyderabad, Jaipur
5 - 8 yrs
₹6L - ₹12L / yr
skill iconPython
Spark
SQL
AWS CloudFormation
skill iconMachine Learning (ML)
+3 more

Level of skills and experience:


5 years of hands-on experience using Python, Spark, and SQL.

Experienced in AWS Cloud usage and management.

Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).

Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.

Experience with orchestrators such as Airflow and Kubeflow.

Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).

Fundamental understanding of Parquet, Delta Lake and other data file formats.

Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.

Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
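To illustrate the kind of transformation the Spark experience above refers to, here is the logic of a groupBy-and-sum aggregation in plain Python; in PySpark the same intent would be expressed against a distributed DataFrame, and the rows here are invented.

```python
from collections import defaultdict

rows = [
    {"region": "west", "sales": 120.0},
    {"region": "east", "sales": 80.0},
    {"region": "west", "sales": 30.0},
]

def total_sales_by_region(rows):
    """Same intent as df.groupBy("region").sum("sales") in PySpark,
    but computed locally instead of across a cluster."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["sales"]
    return dict(totals)

print(total_sales_by_region(rows))  # {'west': 150.0, 'east': 80.0}
```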

Read more
Fractal Analytics

at Fractal Analytics

5 recruiters
Eman Khan
Posted by Eman Khan
Bengaluru (Bangalore), Hyderabad, Gurugram, Noida, Mumbai, Pune, Chennai, Coimbatore
5 - 9 yrs
₹15L - ₹35L / yr
Large Language Models (LLM) tuning
Large Language Models (LLM)
LangChain
Retrieval Augmented Generation (RAG)
Artificial Intelligence (AI)
+8 more

Responsibilities

  • Design and implement advanced solutions utilizing Large Language Models (LLMs).
  • Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
  • Conduct research and stay informed about the latest developments in generative AI and LLMs.
  • Develop and maintain code libraries, tools, and frameworks to support generative AI development.
  • Participate in code reviews and contribute to maintaining high code quality standards.
  • Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
  • Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
  • Possess strong analytical and problem-solving skills.
  • Demonstrate excellent communication skills and the ability to work effectively in a team environment.


Primary Skills

  • Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
  • Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
  • Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
  • Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
  • Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
  • Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
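The retrieval step behind the RAG and vector-database skills above can be sketched in pure Python with cosine similarity over toy vectors; a real system would use an embedding model and a vector store, and the documents and vectors here are invented.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": real embeddings would come from a model and live in a
# vector database; both documents and vectors here are invented.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
}

def retrieve(query_vec, k=1):
    """Rank documents by cosine similarity to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.0]))  # ['refund policy']
```

The retrieved text would then be injected into the LLM prompt, which is the "augmented" part of retrieval-augmented generation.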
Read more
Xebia IT Architects

at Xebia IT Architects

2 recruiters
Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Gurugram, Bhopal, Jaipur
5 - 15 yrs
₹20L - ₹35L / yr
Spark
ETL
Data Transformation Tool (DBT)
skill iconPython
Apache Airflow
+2 more

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.


Qualifications & Experience:


Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.


5+ years of experience in data engineering, with expertise in data architecture and pipeline development.


Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.


Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.


Strong proficiency in Python and data modelling.


Experience in testing and validation of data pipelines.


Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
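As a minimal, hedged illustration of the ETL transform step mentioned above, here is a pure-Python cleaning function; the field names and validation rule are invented for the example.

```python
def transform(records):
    """Toy transform step: drop rows failing a validation rule and
    normalize field names/types (fields invented for the example)."""
    cleaned = []
    for rec in records:
        if not rec.get("id"):
            continue  # basic validation: an id is required
        cleaned.append({
            "id": int(rec["id"]),
            "name": rec.get("name", "").strip().lower(),
        })
    return cleaned

raw = [{"id": "1", "name": " Alice "}, {"id": None, "name": "bad row"}]
print(transform(raw))  # [{'id': 1, 'name': 'alice'}]
```

In a production pipeline this logic would typically live inside a Spark job or a DBT model rather than a standalone function.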


If you meet the above criteria and are interested, please share your updated CV along with the following details:


Total Experience:


Current CTC:


Expected CTC:


Current Location:


Preferred Location:


Notice Period / Last Working Day (if serving notice):


⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response!

Read more
Client based at Pune location.

Client based at Pune location.

Agency job
Pune, Bengaluru (Bangalore), Hyderabad, Chennai, Noida, Ahmedabad, Indore
6 - 10 yrs
₹20L - ₹22L / yr
skill iconPython
skill iconJenkins
Groovy

DevOps Test & Build Engineer

Shift timing: General Shift

Relevant Experience required: 5+ years

Education Required: Bachelor’s / Masters / PhD: Bachelor’s

Work Mode: EIC Office (5 Days)

Must have skills: Python, Jenkins, Groovy script

Required Technologies

  • Strong analytical abilities to analyze the effectiveness of the test and build environment and make the appropriate improvements
  • Effective communication and leadership skills to collaborate with and support engineering
  • Experience with managing Windows, Linux & OS X systems
  • Strong understanding of CI/CD principles
  • Strong coding knowledge of at least one programming language (Python, Java, Perl, or Groovy)
  • Hands-on experience with Jenkins master, plugins and node management
  • Working knowledge on Docker and Kubernetes (CLI)
  • Proficiency with scripting languages: bash, PowerShell, python or groovy
  • Familiarity with build systems: Make, CMake, Conan
  • Familiar with git CLI
  • Basic understanding of embedded software, C/C++ language
  • Quickly adapt new technology and complete assign tasks in defined timeline
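As an illustration of the Python-plus-Jenkins scripting this role calls for, the sketch below builds (without sending) the POST request that triggers a job through Jenkins' remote-access API. The server URL, job name, and credentials are hypothetical placeholders.

```python
import base64
import urllib.request

# Hypothetical values for illustration; real URLs and credentials
# would come from your CI environment.
JENKINS_URL = "http://jenkins.example.com:8080"
JOB_NAME = "nightly-build"

def build_trigger_request(base_url, job, user, token):
    """Construct (but do not send) the POST request that triggers a
    Jenkins job via its remote-access API."""
    url = f"{base_url}/job/{job}/build"
    auth = base64.b64encode(f"{user}:{token}".encode()).decode()
    req = urllib.request.Request(url, method="POST")
    req.add_header("Authorization", f"Basic {auth}")
    return req

req = build_trigger_request(JENKINS_URL, JOB_NAME, "ci-bot", "api-token")
print(req.full_url)      # http://jenkins.example.com:8080/job/nightly-build/build
print(req.get_method())  # POST
```

Sending the request with `urllib.request.urlopen(req)` would queue the build; constructing it separately keeps the sketch side-effect free.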

Preferred

  • Familiarity with Artifactory (Conan or Docker registry)
  • Knowledge of ElectricFlow
  • CI/CD in GitLab or GitHub Actions
  • Hands-on experience with Nagios and Grafana
  • Exposure to Ansible or similar systems
  • Experience with Jira integration in CI/CD
  • General knowledge of AWS tools and technologies
  • Fundamental understanding of embedded devices and their integration with CI/CD
  • Exposure to Agile methodologies and CI/CD SDLC best practices


Read more
Client based at Pune location.


Agency job
Pune, Ahmedabad
5 - 10 yrs
₹21L - ₹22L / yr
skill iconPython
embedded testing
automotive

Python Automation Engineer


Shift timing: General Shift

Work Mode: 5 Days work from Office

Relevant Experience required: Python, Embedded Testing, Automotive knowledge

Education Required: Bachelor’s / Masters / PhD: Bachelor’s

Must have: Python, Embedded Testing, Automotive domain and Tools

Required Technologies

  • 5+ years of experience in test automation for embedded systems
  • Strong knowledge of Python and pytest for automation and scripting tasks
  • Experience in framework development
  • Problem-solving: demonstrated ability to effectively analyze and solve technical challenges
  • Hardware testing: good to have experience with hardware automation tools and processes
  • Must have experience with Jira and Git
  • Strong troubleshooting skills
  • Good communication skills
  • Automotive domain knowledge
  • Understanding of automotive tools – signal generator, PicoScope, CANalyzer, etc.
  • Understanding of audio functionality
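A minimal sketch of what pytest-style embedded test automation can look like, using a fake device object in place of real hardware. The class and its tiny command protocol are invented for illustration only.

```python
class FakeAmplifier:
    """Simulates a serial-controlled audio amplifier for test development."""
    def __init__(self):
        self.volume = 0

    def send(self, command):
        # Toy protocol: "VOL <n>" sets the volume, "VOL?" queries it.
        if command.startswith("VOL "):
            self.volume = int(command.split()[1])
            return "OK"
        if command == "VOL?":
            return str(self.volume)
        return "ERR"

# In a real suite these would be pytest test functions discovered by the
# runner, with the device provided by a fixture; calling them directly
# keeps the sketch self-contained.
def test_set_volume():
    amp = FakeAmplifier()
    assert amp.send("VOL 42") == "OK"
    assert amp.send("VOL?") == "42"

def test_unknown_command_rejected():
    amp = FakeAmplifier()
    assert amp.send("MUTE") == "ERR"

test_set_volume()
test_unknown_command_rejected()
print("all checks passed")
```

Swapping `FakeAmplifier` for a class that talks to a real serial port is what keeps the test logic reusable on bench hardware.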


Read more
Client based at Pune location.


Agency job
Bengaluru (Bangalore), Mumbai, Pune, Hyderabad, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
9 - 12 yrs
₹26L - ₹27L / yr
Bash
Shell Scripting
CircleCI
skill iconPython
skill iconDocker
+6 more

Senior DevOps Engineer 

Shift timing: Rotational shift (15 days in the same shift)

Relevant Experience: 7 Years relevant experience in DevOps

Education Required: Bachelor’s / Masters / PhD – Any Graduate

Must have skills:

BASH Shell-script, CircleCI pipeline, Python, Docker, Kubernetes, Terraform, Github, Postgresql Server, DataDog, Jira

Good to have skills:

AWS, serverless architecture, static-analysis and formatting tools like flake8, black, mypy, and isort, Argo CD

Candidate Roles and Responsibilities

Experience: 8+ years in DevOps, with a strong focus on automation, cloud infrastructure, and CI/CD practices.

Terraform: Advanced knowledge of Terraform, with experience in writing, testing, and deploying modules.

AWS: Extensive experience with AWS services (EC2, S3, RDS, Lambda, VPC, etc.) and best practices in cloud architecture.

Docker & Kubernetes: Proven experience in containerization with Docker and orchestration with Kubernetes in production environments.

CI/CD: Strong understanding of CI/CD processes, with hands-on experience in CircleCI or similar tools.

Scripting: Proficient in Python and Linux Shell scripting for automation and process improvement.

Monitoring & Logging: Experience with Datadog or similar tools for monitoring and alerting in large-scale environments.

Version Control: Proficient with Git, including branching, merging, and collaborative workflows.

Configuration Management: Experience with Kustomize or similar tools for managing Kubernetes configurations

Read more
Nirmitee.io

at Nirmitee.io

4 recruiters
Gitashri K
Posted by Gitashri K
Pune
5 - 10 yrs
₹4L - ₹14L / yr
MERN Stack
skill iconReact.js
skill iconNodeJS (Node.js)
skill iconPython
skill iconNextJs (Next.js)
+1 more

We’re looking for a Tech Lead with expertise in ReactJS (Next.js), backend technologies, and database management to join our dynamic team.

Key Responsibilities:

  • Lead and mentor a team of 4-6 developers.
  • Architect and deliver innovative, scalable solutions.
  • Ensure seamless performance while handling large volumes of data without system slowdowns.
  • Collaborate with cross-functional teams to meet business goals.

Required Expertise:

  • Frontend: ReactJS (Next.js is a must).
  • Backend: Experience in Node.js, Python, or Java.
  • Databases: SQL (mandatory), MongoDB (nice to have).
  • Caching & Messaging: Redis, Kafka, or Cassandra experience is a plus.
  • Proven experience in system design and architecture.
  • Cloud certification is a bonus.
Read more
DataToBiz Pvt. Ltd.

at DataToBiz Pvt. Ltd.

2 recruiters
Vibhanshi Bakliwal
Posted by Vibhanshi Bakliwal
Pune
8 - 12 yrs
₹15L - ₹18L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

We are seeking a highly skilled and experienced Power BI Lead / Architect to join our growing team. The ideal candidate will have a strong understanding of data warehousing, data modeling, and business intelligence best practices. This role will be responsible for leading the design, development, and implementation of complex Power BI solutions that provide actionable insights to key stakeholders across the organization.


Location - Pune (Hybrid 3 days)


Responsibilities:


Lead the design, development, and implementation of complex Power BI dashboards, reports, and visualizations.

Develop and maintain data models (star schema, snowflake schema) for optimal data analysis and reporting.

Perform data analysis, data cleansing, and data transformation using SQL and other ETL tools.

Collaborate with business stakeholders to understand their data needs and translate them into effective and insightful reports.

Develop and maintain data pipelines and ETL processes to ensure data accuracy and consistency.

Troubleshoot and resolve technical issues related to Power BI dashboards and reports.

Provide technical guidance and mentorship to junior team members.

Stay abreast of the latest trends and technologies in the Power BI ecosystem.

Ensure data security, governance, and compliance with industry best practices.

Contribute to the development and improvement of the organization's data and analytics strategy.

May lead and mentor a team of junior Power BI developers.


Qualifications:


8-12 years of experience in Business Intelligence and Data Analytics.

Proven expertise in Power BI development, including DAX, advanced data modeling techniques.

Strong SQL skills, including writing complex queries, stored procedures, and views.

Experience with ETL/ELT processes and tools.

Experience with data warehousing concepts and methodologies.

Excellent analytical, problem-solving, and communication skills.

Strong teamwork and collaboration skills.

Ability to work independently and proactively.

Bachelor's degree in Computer Science, Information Systems, or a related field preferred.

Read more
Client located in Pune location.


Agency job
Remote, Pune
3 - 5 yrs
₹10L - ₹15L / yr
skill iconPython
Linux/Unix
skill iconAmazon Web Services (AWS)
Windows Azure
Large Language Models (LLM) tuning
+4 more

Position: Data Scientist

Job Category: Embedded HW_SW

Job Type: Full Time

Job Location: Pune

Experience: 3 - 5 years

Notice period: 0-30 days

Must have skills: Python, Linux-Ubuntu based OS, cloud-based platforms

Education Required: Bachelor’s / Masters / PhD:

Bachelor’s or Master’s in Computer Science, Statistics, Mathematics, Data Science, or Engineering

Bachelor’s with 5 years or Master’s with 3 years

Mandatory Skills

  • Bachelor’s or Master’s in Computer Science, Statistics, Mathematics, Data Science, Engineering, or a related field
  • 3-5 years of experience as a data scientist, with a strong foundation in machine learning fundamentals (e.g., supervised and unsupervised learning, neural networks)
  • Experience with the Python programming language (including libraries such as NumPy, pandas, and scikit-learn) is essential
  • Deep hands-on experience building computer vision and anomaly detection systems, involving algorithm development in fields such as image segmentation
  • Some experience with open-source OCR models
  • Proficiency in working with large datasets and experience with feature engineering techniques is a plus

Key Responsibilities

  • Work closely with the AI team to help build complex algorithms that provide unique insights into our data using images.
  • Use agile software development processes to make iterative improvements to our back-end systems.
  • Stay up to date with the latest developments in machine learning and data science, exploring new techniques and tools to apply within Customer’s business context.

Optional Skills

  • Experience working with cloud-based platforms (e.g., Azure, AWS, GCP)
  • Knowledge of computer vision techniques and experience with libraries like OpenCV
  • Excellent Communication skills, especially for explaining technical concepts to nontechnical business leaders.
  • Ability to work on a dynamic, research-oriented team that has concurrent projects.
  • Working knowledge of Git/version control.
  • Expertise in PyTorch, TensorFlow, and Keras.
  • Excellent coding skills, especially in Python.
  • Experience with Linux-Ubuntu based OS
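As a small illustration of the anomaly-detection side of the role, the sketch below flags outliers with a plain z-score baseline using only the standard library; the sensor readings and threshold are made up, and real systems would build on learned models.

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return indices whose z-score exceeds the threshold — a classical
    baseline to establish before reaching for learned anomaly detectors."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

readings = [10.1, 9.9, 10.0, 10.2, 9.8, 55.0, 10.0, 10.1]
print(zscore_anomalies(readings, threshold=2.0))  # [5]
```

The single spike at index 5 is the only point more than two population standard deviations from the mean.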


Read more
Client located in Pune location.


Agency job
Remote, Pune
5 - 7 yrs
₹15L - ₹20L / yr
Linux/Unix
skill iconPython
Security awareness

Job Title: Embedded HW/SW Engineer (Python Expert)

Location: Pune, India (Hybrid work culture – 3-4 days from office)

Job Type: Full-Time

Job Category: Embedded HW/SW

Experience: 5-7 years

Job Overview:

We are looking for a highly skilled Embedded HW/SW Engineer with expertise in Python to join our team in Pune. The ideal candidate will have 5-7 years of experience in the automotive IVI (In-Vehicle Infotainment) or audio amplifier embedded domain, with a strong focus on Python programming. The role involves working on innovative automotive solutions in a hybrid work culture, with at least 3-4 days required in the office.

Key Responsibilities:

  • Embedded System Development: Work on the development, integration, and testing of embedded systems for automotive applications, focusing on IVI and audio amplifier domains.
  • Python Programming: Develop and maintain Python-based scripts and applications to support embedded system development, testing, and automation.
  • Collaboration: Collaborate with cross-functional teams, including hardware engineers, firmware developers, and system architects, to design and implement robust embedded solutions.
  • Debugging & Testing: Conduct debugging, unit testing, and validation of embedded systems to ensure performance, security, and reliability.
  • System Optimization: Optimize embedded software and hardware systems to improve performance and efficiency in automotive environments.
  • Documentation: Create and maintain technical documentation, including system specifications, test plans, and reports.
  • Continuous Improvement: Stay up to date with the latest trends and technologies in embedded systems, automotive, and Python programming to continuously improve system performance and capabilities.

Qualifications:

  • Experience: 5-7 years of experience in embedded hardware/software development, preferably in automotive IVI or audio amplifier embedded domain.
  • Python Expertise: Strong hands-on experience with Python, especially in embedded development and testing automation.
  • Embedded Systems: Strong understanding of embedded system design, hardware-software integration, and real-time constraints.
  • Automotive Domain Knowledge: Familiarity with automotive industry standards and practices, particularly in IVI and audio amplifier systems.
  • Debugging & Testing: Experience with debugging tools, testing frameworks, and quality assurance practices in embedded systems.
  • Teamwork: Ability to work effectively in a collaborative environment, sharing knowledge and contributing to team success.
  • Communication Skills: Strong verbal and written communication skills to clearly document work and present findings.

Preferred Skills:

  • Experience with C/C++ for embedded software development.
  • Familiarity with embedded Linux or RTOS environments.
  • Knowledge of automotive communication protocols such as CAN and Ethernet.
  • Experience in audio signal processing or related areas within the automotive industry.
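To illustrate the Python-plus-CAN side of the role, here is a sketch that packs and unpacks a SocketCAN-style frame with the `struct` module. The 16-byte layout (4-byte ID, 1-byte DLC, 3 padding bytes, 8 data bytes) follows the common Linux `can_frame` structure; verify it against your target platform before relying on it.

```python
import struct

# Assumed SocketCAN can_frame layout: little-endian 4-byte ID, 1-byte DLC,
# 3 pad bytes, 8 data bytes (16 bytes total).
CAN_FRAME_FMT = "<IB3x8s"

def pack_frame(can_id, data):
    """Pack a classic CAN frame; data is at most 8 bytes."""
    if len(data) > 8:
        raise ValueError("classic CAN carries at most 8 data bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_frame(frame):
    """Return (can_id, payload) with the padding stripped via the DLC."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

raw = pack_frame(0x123, b"\x01\x02\x03")
print(len(raw))           # 16
print(unpack_frame(raw))  # (291, b'\x01\x02\x03')
```

The same pack/unpack pair is the building block for replaying captured CANalyzer traces from test scripts.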


Read more
Clients located in Bangalore, Chennai & Pune


Agency job
Bengaluru (Bangalore), Pune, Chennai
3 - 8 yrs
₹8L - ₹16L / yr
ETL
skill iconPython
Shell Scripting
Data modeling
Datawarehousing

Role: Ab Initio Developer

Experience: 2.5 (mandatory) - 8 years

Skills: Ab Initio Development

Location: Chennai/Bangalore/Pune

Only immediate to 15-day joiners.

Candidates should be available for an in-person interview.

This is a long-term contract role with IBM; Arnold is the payrolling company.

JOB DESCRIPTION:

We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.


Key Responsibilities:

·      Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.

·      Analyze complex data requirements and translate them into effective Ab Initio designs.

·      Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.

·      Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.

·      Optimize performance and scalability of Ab Initio jobs.

·      Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.

·      Stay up-to-date with the latest Ab Initio technologies and industry best practices.

Required Skills and Experience:

·      2.5 to 8 years of hands-on experience in Ab Initio development.

·      Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.

·      Proficiency in Ab Initio's graphical interface and command-line tools.

·      Experience in data modeling, data warehousing, and ETL concepts.

·      Strong SQL skills and experience with relational databases.

·      Excellent problem-solving and analytical skills.

·      Ability to work independently and as part of a team.

·      Strong communication and documentation skills.

Preferred Skills:

·      Experience with cloud-based data integration platforms.

·      Knowledge of data quality and data governance concepts.

·      Experience with scripting languages (e.g., Python, Shell scripting).

·      Certification in Ab Initio or related technologies.

Read more
Rigel Networks Pvt Ltd
Pune
5 - 9 yrs
₹8L - ₹15L / yr
Big data Engineer
Software deployment
Release Management
Software release life cycle
Release engineering
+6 more

Dear Candidate,

We are urgently looking for a Release / Big Data Engineer for the Pune location.


Experience : 5-8 yrs

Location : Pune

Skills: Big Data Engineering, Release Engineering, DevOps, AWS/Azure/GCP cloud experience


JD:

  • Oversee the end-to-end release lifecycle, from planning to post-production monitoring, coordinating with cross-functional teams (DBA, BizOps, DevOps, DNS).
  • Partner with development teams to resolve technical challenges in deployment and automation test runs.
  • Work with shared-services DBA teams on schema-based multi-tenancy designs and smooth migrations.
  • Drive automation for batch deployments and DR exercises, including YAML-based microservice deployment using Shell/Python/Go.
  • Provide oversight for Big Data deployment toolsets (e.g., Spark, Hive, HBase) in private cloud and public cloud CDP environments.
  • Ensure high-quality releases with a focus on stability and long-term performance.
  • Run automation batch scripts, debug deployment and functional aspects, and work with dev leads to resolve release-cycle issues.
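A dry-run sketch of the batch-deployment scripting described above: given a service manifest (a plain dict standing in for parsed YAML), build the kubectl commands a deployment script would execute. The service names, manifest paths, and namespace are illustrative.

```python
def build_deploy_commands(services, namespace):
    """Return the kubectl command lines a batch deployment would run,
    without executing anything (a dry run)."""
    commands = []
    for name, manifest_path in services.items():
        # Apply the manifest, then wait for the rollout to complete.
        commands.append(["kubectl", "apply", "-n", namespace, "-f", manifest_path])
        commands.append(["kubectl", "rollout", "status", "-n", namespace,
                         f"deployment/{name}"])
    return commands

services = {"orders": "deploy/orders.yaml", "billing": "deploy/billing.yaml"}
for cmd in build_deploy_commands(services, "staging"):
    print(" ".join(cmd))
```

A real script would feed each command list to `subprocess.run(cmd, check=True)`; keeping command construction separate makes the dry run trivially testable.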



Regards,

Minakshi Soni

Executive- Talent Acquisition

Rigel Networks

Read more
KPI Partners
Hyderabad, Bengaluru (Bangalore), Pune
6 - 12 yrs
₹30L - ₹35L / yr
Generative AI
Large Language Models (LLM)
Large Language Models (LLM) tuning
Natural Language Processing (NLP)
LLMOps
+5 more

We are seeking a Senior Data Scientist with hands-on experience in Generative AI (GenAI) and Large Language Models (LLM). The ideal candidate will have expertise in building, fine-tuning, and deploying LLMs, as well as managing the lifecycle of AI models through LLMOps practices. You will play a key role in driving AI innovation, developing advanced algorithms, and optimizing model performance for various business applications.

Key Responsibilities:

  • Develop, fine-tune, and deploy Large Language Models (LLM) for various business use cases.
  • Implement and manage the operationalization of LLMs using LLMOps best practices.
  • Collaborate with cross-functional teams to integrate AI models into production environments.
  • Optimize and troubleshoot model performance to ensure high accuracy and scalability.
  • Stay updated with the latest advancements in Generative AI and LLM technologies.

Required Skills and Qualifications:

  • Strong hands-on experience with Generative AI, LLMs, and NLP techniques.
  • Proven expertise in LLMOps, including model deployment, monitoring, and maintenance.
  • Proficiency in programming languages like Python and frameworks such as TensorFlow, PyTorch, or Hugging Face.
  • Solid understanding of AI/ML algorithms and model optimization.
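As a bare-bones illustration of the retrieval step behind RAG-style LLM applications, the sketch below scores documents against a query with bag-of-words cosine similarity. Production systems would use learned embeddings and a vector store; the corpus here is invented.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

docs = [
    "invoice processing with large language models",
    "quarterly sales report for the retail team",
]
print(retrieve("language models for invoices", docs))
```

The retrieved documents would then be stuffed into the LLM prompt as context; swapping `Counter` vectors for embedding vectors changes nothing about the surrounding pipeline.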
Read more
Nirmitee.io

at Nirmitee.io

4 recruiters
Gitashri K
Posted by Gitashri K
Pune
2 - 4 yrs
₹2L - ₹5L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+4 more

Responsibilities:

  • Coordinating with development teams to determine application requirements.
  • Writing scalable code using Python programming language.
  • Testing and debugging applications.
  • Developing back-end components.
  • Integrating user-facing elements using server-side logic.
  • Assessing and prioritizing client feature requests.
  • Integrating data storage solutions.
  • Coordinating with front-end developers
  • Developing digital tools to monitor online traffic.
  • Performing all phases of software engineering including requirements analysis, application design, and code development and testing.
  • Designing and implementing product features in collaboration with business and IT stakeholders.
  • Must be able to contribute to Tally automation.


Skills required:

  • Web development using HTML, CSS, and JS; a good team player; agile delivery; application deployment to the cloud using Docker/Kubernetes containers
  • Should be able to analyze requirements and develop scripts/POCs
  • Should have knowledge of deployments and documentation of deliverables
  • Experience working in Linux environments
  • Experience working with Python libraries
  • Hands-on experience with a version control tool
  • Experience with SQL databases
  • Expert knowledge of Python and related frameworks, including Django and Flask
  • A deep understanding of multi-process architecture and the threading limitations of Python
  • Must have experience with an MVC framework
  • Ability to collaborate on projects and work independently when required
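On the "threading limitations of Python" point above: because of CPython's GIL, threads help with I/O-bound work but not CPU-bound work, which belongs in a `ProcessPoolExecutor` instead. A small sketch of fanning out simulated I/O-bound calls with a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(item):
    # Stand-in for a blocking network or database call; while a thread
    # waits on I/O, the GIL is released and other threads can run.
    return item * 2

# pool.map preserves input order, so the result is deterministic.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(5)))

print(results)  # [0, 2, 4, 6, 8]
```

Replacing `ThreadPoolExecutor` with `concurrent.futures.ProcessPoolExecutor` is the usual fix when `fetch` is CPU-bound, since separate processes each have their own GIL.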


Read more
iBizlogic

at iBizlogic

1 recruiter
swapnil Patil
Posted by swapnil Patil
Pune
5 - 15 yrs
₹5L - ₹10L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+4 more

We are looking for a full-stack developer/leader to lead, own, and deliver across the entire application development lifecycle. He/She will be responsible for creating and owning the product roadmap of an enterprise software product.


- A responsible and passionate professional who has the willpower to drive product goals and ensure the outcomes expected from the team.


- He / She should have a strong desire and eagerness to learn new and emerging technologies.


- Skills Required :


- Python/Django Rest Framework,


- Database Structure


-Cloud-Ops - AWS


Roles & responsibilities :-


- Developer responsibilities include writing and testing code, debugging programs


- Design and implementation of REST API


- Build, release, and manage the configuration of all production systems


- Manage a continuous integration and deployment methodology for server-based technologies


- Identify customer problems and create functional prototypes offering a solution


If you are willing to take up challenges and contribute in developing world class products, - this is the place for you.




About FarmMobi :




A trusted enterprise software product company in AgTech space started with a mission to revolutionize the Global agriculture sector.


We operate on a software-as-a-service (SaaS) model and cater to the needs of global customers in the field of agriculture.


The idea is to use emerging technologies like mobility, IoT, drones, satellite imagery, blockchain, etc. to digitally transform the agriculture landscape.

Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Kolkata, Pune, Mumbai, Delhi, Noida, Kochi (Cochin), Bhubaneswar
8 - 12 yrs
₹8L - ₹26L / yr
skill iconDocker
skill iconKubernetes
DevOps
cicd
skill iconJenkins
+5 more

6+ years of experience with deployment and management of Kubernetes clusters in production environments as a DevOps engineer.

• Expertise in Kubernetes fundamentals like nodes, pods, services, deployments etc., and their interactions with the underlying infrastructure.

• Hands-on experience with containerization technologies such as Docker or RKT to package applications for use in a distributed system managed by Kubernetes.

• Knowledge of the software development lifecycle, including coding best practices such as CI/CD pipelines and version control systems for managing code changes within a team environment.

• Deep understanding of cloud computing and the operations processes needed when setting up workloads on these platforms.

• Experience with Agile software development and knowledge of best practices for agile Scrum teams.

• Proficient with Git version control.

• Experience working with Linux and cloud compute platforms.

• Excellent problem-solving skills and ability to troubleshoot complex issues in distributed systems.

• Excellent communication & interpersonal skills, effective problem-solving skills and logical thinking ability and strong commitment to professional and client service excellence.

Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Hyderabad, Bengaluru (Bangalore), Pune, Mumbai, Kolkata, Delhi, Noida
12 - 14 yrs
₹11L - ₹27L / yr
skill iconVue.js
skill iconAngularJS (1.x)
skill iconAngular (2+)
skill iconReact.js
skill iconJavascript
+4 more

Responsibilities

  • Develop and maintain robust APIs to support various applications and services.
  • Design and implement scalable solutions using AWS cloud services.
  • Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
  • Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
  • Ensure the security and integrity of applications by implementing best practices and security measures.
  • Optimize application performance and troubleshoot issues to ensure smooth operation.
  • Provide technical guidance and mentorship to junior team members.
  • Conduct code reviews to ensure adherence to coding standards and best practices.
  • Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
  • Develop and maintain documentation for code processes and procedures.
  • Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
  • Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
  • Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
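For a feel of the API work described above, here is a minimal JSON endpoint using only the standard library; in the role itself this would be a Flask or Django view, and the route and payload are illustrative.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/health"
with urllib.request.urlopen(url) as resp:
    status, payload = resp.status, json.loads(resp.read())
server.shutdown()
print(status, payload)  # 200 {'status': 'ok'}
```

The equivalent Flask view is a two-line function decorated with `@app.route("/health")`; the frameworks take over routing, serialization, and error handling.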

 

Qualifications

  • Possess strong expertise in developing and maintaining APIs.
  • Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
  • Have extensive experience with Python frameworks such as Flask and Django.
  • Exhibit strong analytical and problem-solving skills to address complex technical challenges.
  • Show ability to collaborate effectively with cross-functional teams and stakeholders.
  • Display excellent communication skills to convey technical concepts clearly.
  • Have a background in the Consumer Lending domain is a plus.
  • Demonstrate commitment to continuous learning and staying updated with industry trends.
  • Possess a strong understanding of agile development methodologies.
  • Show experience in mentoring and guiding junior team members.
  • Exhibit attention to detail and a commitment to delivering high-quality software solutions.
  • Demonstrate ability to work effectively in a hybrid work model.
  • Show a proactive approach to identifying and addressing potential issues before they become problems.
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Pune
5 - 10 yrs
Best in industry
Object Oriented Programming (OOPs)
Amazon Redshift
DSA
Big Data
Hadoop
+3 more

Job Summary:

We are seeking a skilled Senior Data Engineer with expertise in application programming, big data technologies, and cloud services. This role involves solving complex problems, designing scalable systems, and working with advanced technologies to deliver innovative solutions.

Key Responsibilities:

  • Develop and maintain scalable applications using OOP principles, data structures, and problem-solving skills.
  • Build robust solutions using Java, Python, or Scala.
  • Work with big data technologies like Apache Spark for large-scale data processing.
  • Utilize AWS services, especially Amazon Redshift, for cloud-based solutions.
  • Manage databases including SQL, NoSQL (e.g., MongoDB, Cassandra), with Snowflake as a plus.

Qualifications:

  • 5+ years of experience in software development.
  • Strong skills in OOPS, data structures, and problem-solving.
  • Proficiency in Java, Python, or Scala.
  • Experience with Spark, AWS (Redshift mandatory), and databases (SQL/NoSQL).
  • Snowflake experience is good to have.
Read more
Rigel Networks Pvt Ltd
Minakshi Soni
Posted by Minakshi Soni
Bengaluru (Bangalore), Pune, Mumbai, Chennai
8 - 12 yrs
₹8L - ₹10L / yr
skill iconAmazon Web Services (AWS)
Terraform
Amazon Redshift
Redshift
Snowflake
+16 more

Dear Candidate,


We are urgently Hiring AWS Cloud Engineer for Bangalore Location.

Position: AWS Cloud Engineer

Location: Bangalore

Experience: 8-11 yrs

Skills: Aws Cloud

Salary: Best in industry (20-25% hike on current CTC)

Note:

Only immediate to 15-day joiners will be preferred.

Only candidates from Tier 1 companies will be shortlisted and selected.

Candidates with a notice period longer than 30 days will be rejected during screening.

Offer shoppers will be rejected.


Job description:

 

Description:

 

Title: AWS Cloud Engineer

Prefer BLR / HYD – else any location is fine

Work Mode: Hybrid – based on HR rule (currently 1 day per month)


Shift Timings: 24x7 (work in shifts on a rotational basis)

Total Experience: 8+ years, with at least 5 years of relevant experience.

Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting



Experience and Skills Requirements:


Experience:

8 years of experience in a technical role working with AWS


Mandatory

Technical troubleshooting and problem solving

AWS management of large-scale IaaS/PaaS solutions

Cloud networking and security fundamentals

Experience using containerization in AWS

Working data warehouse knowledge; Redshift and Snowflake preferred

Working with IaC – Terraform and CloudFormation

Working understanding of scripting languages including Python and Shell

Collaboration and communication skills

Highly adaptable to changes in a technical environment
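One concrete pattern behind "technical troubleshooting" on AWS is retrying throttled or transiently failing API calls with exponential backoff. A generic sketch, with a fake flaky call standing in for an AWS SDK request:

```python
import time

def retry(call, attempts=4, base_delay=0.01):
    """Call `call` up to `attempts` times, sleeping base_delay * 2**n
    between failures; re-raise the last error if all attempts fail."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...

# Simulated flaky call: fails twice, then succeeds — stands in for an
# AWS request hitting throttling.
failures = {"left": 2}

def flaky_describe_instances():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise RuntimeError("ThrottlingException (simulated)")
    return {"Reservations": []}

print(retry(flaky_describe_instances))  # {'Reservations': []}
```

AWS SDKs ship their own configurable retry modes; a helper like this is mainly useful around tooling that does not.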

 

Optional

Experience using monitoring and observability toolsets, including Splunk and Datadog

Experience using GitHub Actions

Experience using AWS RDS/SQL-based solutions

Experience working with streaming technologies, including Kafka and Apache Flink

Experience working with ETL environments

Experience working with the Confluent Cloud platform


Certifications:


Minimum

AWS Certified SysOps Administrator – Associate

AWS Certified DevOps Engineer - Professional



Preferred


AWS Certified Solutions Architect – Associate


Responsibilities:


Responsible for technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a shared managed service.


The following is a list of expected responsibilities:


To manage and support a customer’s AWS platform

To be technical hands on

Provide Incident and Problem management on the AWS IaaS and PaaS Platform

Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner

Actively monitor an AWS platform for technical issues

To be involved in the resolution of technical incidents tickets

Assist in the root cause analysis of incidents

Assist with improving efficiency and processes within the team

Examining traces and logs

Working with third party suppliers and AWS to jointly resolve incidents


Good to have:


Confluent Cloud

Snowflake




Best Regards,

Minakshi Soni

Executive - Talent Acquisition (L2)

Rigel Networks

Worldwide Locations: USA | HK | IN 

Read more
NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Pune
4 - 8 yrs
₹1L - ₹12L / yr
PySpark
Data engineering
Big Data
Hadoop
Spark
+4 more

Job Description :

Job Title : Data Engineer

Location : Pune (Hybrid Work Model)

Experience Required : 4 to 8 Years


Role Overview :

We are seeking talented and driven Data Engineers to join our team in Pune. The ideal candidate will have a strong background in data engineering with expertise in Python, PySpark, and SQL. You will be responsible for designing, building, and maintaining scalable data pipelines and systems that empower our business intelligence and analytics initiatives.


Key Responsibilities:

  • Develop, optimize, and maintain ETL pipelines and data workflows.
  • Design and implement scalable data solutions using Python, PySpark, and SQL.
  • Collaborate with cross-functional teams to gather and analyze data requirements.
  • Ensure data quality, integrity, and security throughout the data lifecycle.
  • Monitor and troubleshoot data pipelines to ensure reliability and performance.
  • Work on hybrid data environments involving on-premise and cloud-based systems.
  • Assist in the deployment and maintenance of big data solutions.
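The responsibilities above boil down to extract-transform-load. A toy end-to-end sketch in pure Python (CSV in, SQLite out); real pipelines would use PySpark or Airflow at scale, and the sample data here is invented.

```python
import csv
import io
import sqlite3

RAW = "order_id,amount\n1,120.50\n2,bad\n3,75.00\n"  # made-up input

def extract(text):
    """Extract: parse CSV rows into dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: type-cast and drop malformed rows (row 2 here)."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["order_id"]), float(row["amount"])))
        except ValueError:
            continue  # a real pipeline would quarantine these for review
    return clean

def load(rows, conn):
    """Load: bulk-insert into the warehouse table."""
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # (2, 195.5)
```

The same three-stage shape carries over directly to PySpark, with DataFrames replacing the lists and a cluster replacing SQLite.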

Required Skills and Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or related field.
  • 4 to 8 Years of experience in Data Engineering or related roles.
  • Proficiency in Python and PySpark for data processing and analysis.
  • Strong SQL skills with experience in writing complex queries and optimizing performance.
  • Familiarity with data pipeline tools and frameworks.
  • Knowledge of cloud platforms such as AWS, Azure, or GCP is a plus.
  • Excellent problem-solving and analytical skills.
  • Strong communication and teamwork abilities.

Preferred Qualifications:

  • Experience with big data technologies like Hadoop, Hive, or Spark.
  • Familiarity with data visualization tools and techniques.
  • Knowledge of CI/CD pipelines and DevOps practices in a data engineering context.

Work Model:

  • This position follows a hybrid work model, with candidates expected to work from the Pune office as per business needs.

Why Join Us?

  • Opportunity to work with cutting-edge technologies.
  • Collaborative and innovative work environment.
  • Competitive compensation and benefits.
  • Clear career progression and growth opportunities.


Read more
TapRootz
Pune
3.5 - 8 yrs
₹15L - ₹25L / yr
LogRhythm
Security Information and Event Management (SIEM)
Python
Powershell

Job Title: L2 SIEM Administrator - LogRhythm


Location:

Pune – Customer Site (Magarpatta)


Job Summary:


We are seeking an experienced and proactive L2 SIEM Administrator with expertise in LogRhythm to manage, maintain, and optimize our Security Information and Event Management (SIEM) infrastructure.


The ideal candidate will develop use case frameworks, implement SIEM rules, and ensure efficient log management and threat detection.


Key Responsibilities:


LogRhythm Administration:

Manage and maintain the LogRhythm SIEM platform for optimal performance.

Develop, implement, and fine-tune use case frameworks and detection rules to enhance threat detection.

Incident Analysis:

Investigate security alerts and logs to identify and respond to threats.

Escalate unresolved issues to higher-level teams or external stakeholders.

Log Management:

Onboard and configure log sources, ensuring accurate data ingestion and normalization.

Validate log integrity across network and endpoint sources.

Optimization and Troubleshooting:

Resolve technical issues and optimize system performance.

Monitor and maintain dashboards and reporting tools for actionable insights.

Qualifications:


Proven expertise with LogRhythm, including creating and managing use case frameworks and detection rules.

3+ years of experience in SIEM administration.

Strong understanding of security logs, event correlation, and incident analysis.

Familiarity with scripting (Python, PowerShell) and security frameworks (e.g., MITRE ATT&CK).

Relevant certifications (e.g., LogRhythm Certified Professional (LRCP)) are a plus.
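As a rough illustration of the detection logic such use-case frameworks encode, here is a minimal brute-force check (in the spirit of MITRE ATT&CK T1110) over hypothetical auth log lines; the log format is invented, not LogRhythm's:

```python
import re
from collections import Counter

# Hypothetical auth log lines; the format is illustrative only
LOG_LINES = [
    "2024-05-01T10:00:01 sshd FAILED_LOGIN user=root src=203.0.113.7",
    "2024-05-01T10:00:03 sshd FAILED_LOGIN user=root src=203.0.113.7",
    "2024-05-01T10:00:05 sshd FAILED_LOGIN user=admin src=203.0.113.7",
    "2024-05-01T10:00:09 sshd LOGIN_OK user=alice src=198.51.100.4",
    "2024-05-01T10:00:11 sshd FAILED_LOGIN user=bob src=198.51.100.4",
]

FAILED = re.compile(r"FAILED_LOGIN user=(?P<user>\S+) src=(?P<src>\S+)")

def brute_force_sources(lines, threshold=3):
    """Return source IPs with at least `threshold` failed logins."""
    failures = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            failures[m.group("src")] += 1
    return {src for src, n in failures.items() if n >= threshold}

alerts = brute_force_sources(LOG_LINES)
print(alerts)  # {'203.0.113.7'}
```

A production rule would correlate over a time window and feed the alert into the SIEM's case workflow rather than printing it.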

Read more
Saint Anton Hospital
Dubai, Abu Dhabi, Los Angeles California, Detroit, USA, Delhi, Qatar, RIYADH (Saudi Arabia), Berlin, London, Brisbane, Hyderabad, Bengaluru (Bangalore), Mumbai, Pune
3 - 5 yrs
₹35L - ₹70L / yr
Engineering Management
Java
NodeJS (Node.js)
Python
Android Development
+1 more

Job Summary:


Combine your passion for technology, healthcare, and communications as an HCS Web/React UI Software Engineer. You will develop and improve our software workflow control systems through the full software development cycle, from design to maintenance, while improving hospital communication and performance. This role will work on re-engineering existing applications into the SaaS model using state-of-the-art technologies.

Primarily responsible for technical leadership, support, and guidance to local engineering teams, ensuring that projects are completed safely, efficiently, and within budget, as well as overseeing the planned maintenance programs of facilities.


Key Responsibilities:

• Oversee and coordinate all facilities management and project activities, including maintenance, renovation, and remodeling projects.

• Lead and manage facility-related projects, such as renovations, expansions, or relocations. Develop project plans, allocate resources, set timelines, and monitor progress to ensure successful completion within budget and schedule.

• Oversee relationships with external vendors, contractors, and service providers. Negotiate contracts, monitor performance, and ensure compliance with agreed-upon terms and service level agreements.

• Identify and implement best practices in facilities management. Ensure facilities comply with all relevant codes, regulations, and health and safety standards. Stay updated on changes in regulations and implement necessary changes to maintain compliance.

• Oversee facilities planned maintenance programs.

• Monitor project progress, identify risks, and implement mitigation strategies.

• Lead and oversee engineering projects within the Dubai & Northern Emirates region.

• Prepare financial reports and forecasts for management review.

• Identify and mitigate project risks and issues.

• Ensure compliance with environmental, health, and safety regulations.

• Generate progress reports, status updates, and other project-related documentation as required.

• Ensure that all project deliverables are of the highest quality, and that they meet the requirements of the project scope.


Qualifications:

• Bachelor's Degree in engineering or related fields.

• Engineering qualifications in Electrical or HVAC disciplines will be an advantage.

• 5 - 10 years of engineering experience in a healthcare setup or a similar field.

• Strong analytical and problem-solving skills.

• Must have hands-on experience in hospital systems.

• Ability to think critically and maintain a high level of confidentiality.

• Must be willing to participate in crisis management at facility level.

• Strong interpersonal, verbal, and written communication skills.

• Familiarity with project management software.


Read more
NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
3 - 5 yrs
Best in industry
NodeJS (Node.js)
MongoDB
Mongoose
Express
Go Programming (Golang)
+7 more

We're seeking an experienced Backend Software Engineer to join our team.

As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop.

This includes APIs, databases, and server-side logic.


Responsibilities:

  • Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
  • Write clean, efficient, and well-documented code that adheres to industry standards and best practices
  • Participate in code reviews and contribute to the improvement of the codebase
  • Debug and resolve issues in the existing codebase
  • Develop and execute unit tests to ensure high code quality
  • Work with DevOps engineers to ensure seamless deployment of software changes
  • Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
  • Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
  • Collaborate with cross-functional teams to identify and prioritize project requirements

Requirements:

  • At least 2+ years of experience building scalable and reliable backend systems
  • Strong proficiency in at least one programming language such as Python, Node.js, Golang, or RoR
  • Experience with at least one framework such as Django, Express, or gRPC
  • Knowledge of database systems such as MySQL, PostgreSQL, MongoDB, Cassandra, or Redis
  • Familiarity with containerization technologies such as Docker and Kubernetes
  • Understanding of software development methodologies such as Agile and Scrum
  • Flexibility to pick up a new technology stack and ramp up on it quickly
  • Bachelor's/Master's degree in Computer Science or related field
  • Strong problem-solving skills and ability to collaborate effectively with cross-functional teams
  • Good written and verbal communication skills in English
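As a sketch of the API-design side of the role, the following shows the method-and-path dispatch pattern at the core of frameworks like Django or Express; the endpoints and handlers are invented for the example:

```python
# Minimal method+path router: the core pattern behind web framework routing
ROUTES = {}

def route(method, path):
    """Register a handler for an HTTP method and path."""
    def decorator(fn):
        ROUTES[(method, path)] = fn
        return fn
    return decorator

@route("GET", "/health")
def health():
    return {"status": 200, "body": {"ok": True}}

@route("GET", "/users")
def list_users():
    return {"status": 200, "body": [{"id": 1, "name": "alice"}]}

def dispatch(method, path):
    """Look up the registered handler, or return a 404-style response."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return {"status": 404, "body": {"error": "not found"}}
    return handler()

print(dispatch("GET", "/health"))
```

Real frameworks add request parsing, middleware, and path parameters on top of exactly this lookup.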
Read more
Pune
4 - 7 yrs
₹18L - ₹30L / yr
Large Language Models (LLM)
Python
Docker
Retrieval Augmented Generation (RAG)
SQL
+7 more

Job Description

Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.


Role & Responsibilities

  • Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
  • LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLFlow and Weights & Biases.
  • Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
  • Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
  • Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
  • Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
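The retrieval step of a RAG pipeline such as the one described can be sketched in a few lines; a toy keyword-overlap scorer stands in here for the embedding search a production system would use, and the documents are invented:

```python
import re

def tokenize(text):
    """Lowercase word tokens; a stand-in for real text preprocessing."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, documents, k=2):
    """Rank documents by token overlap with the query and return the top k."""
    q = tokenize(query)
    return sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed nightly by the billing batch job.",
    "The support hotline operates from 9am to 6pm on weekdays.",
    "Billing disputes must be raised within 30 days of the invoice date.",
]

prompt = build_prompt("When are invoices processed?", docs)
print(prompt)
```

In production the overlap score would be replaced by vector similarity over embeddings, with the retrieved chunks versioned and traced for evaluation.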


Preferred Candidate Profile

  • Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
  • Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
  • Model Management: Hands-on experience with MLFlow, Weights & Biases, and model lifecycle management.
  • AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
  • Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
  • MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
  • Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
  • Education: Degree in Computer Science, Data Engineering, or related field.

Perks and Benefits

  • Competitive Compensation: INR 20L to 30L per year.
  • Innovative Work Environment: Work with cutting-edge AI and data engineering tools in a collaborative setting that supports continuous learning in data engineering and AI.


Read more
Gyaan AI Private Limited

Posted by Arwa Virpurwala
Pune
2 - 8 yrs
₹10L - ₹25L / yr
Python
Databases
Django
Relational Database (RDBMS)
Redis
+1 more

About Gyaan:

Gyaan empowers Go-To-Market teams to ascend to new heights in their sales performance, unlocking boundless opportunities for growth. We're passionate about helping sales teams excel beyond expectations. Our pride lies in assembling an unparalleled team and crafting a crucial solution that becomes an indispensable tool for our users. With Gyaan, sales excellence becomes an attainable reality.


About the Job:

Gyaan is seeking an experienced backend developer with expertise in Python, Django, AWS, and Redis to join our dynamic team! As a backend developer, you will be responsible for building responsive and scalable applications using Python, Django, and associated technologies.


Required Qualifications:

  • 2+ years of hands-on experience programming in Python, Django
  • Good understanding of CI/CD tools (GitHub Actions, GitLab CI) in a SaaS environment.
  • Experience in building and running modern full-stack cloud applications using public cloud technologies such as AWS.
  • Proficiency with at least one relational database system like MySQL, Oracle, or PostgreSQL.
  • Experience with unit and integration testing.
  • Effective communication skills, both written and verbal, to convey complex problems across different levels of the organization and to customers.
  • Familiarity with Agile methodologies, software design lifecycle, and design patterns.
  • Detail-oriented mindset to identify and rectify errors in code or product development workflow.
  • Willingness to learn new technologies and concepts quickly, as the "cloud-native" field evolves rapidly.
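The Redis requirement above typically appears as the cache-aside pattern around expensive queries. A minimal sketch, with a plain dict standing in for the Redis client and an invented fetch function:

```python
import json

cache = {}     # stands in for a Redis instance
db_calls = []  # tracks how often the "database" is actually hit

def fetch_user_from_db(user_id):
    """Pretend database lookup; in a real app this would be a Django ORM query."""
    db_calls.append(user_id)
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    """Cache-aside: check the cache, fall back to the DB, then populate the cache."""
    key = f"user:{user_id}"
    if key in cache:                  # redis_client.get(key)
        return json.loads(cache[key])
    user = fetch_user_from_db(user_id)
    cache[key] = json.dumps(user)     # redis_client.set(key, value, ex=300)
    return user

first = get_user(42)   # cache miss: hits the DB and populates the cache
second = get_user(42)  # cache hit: no DB call
print(first == second, len(db_calls))  # True 1
```

In a real Django application the commented calls would go through a Redis client such as redis-py, with an expiry set on each key.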


Must Have Skills:

  • Python
  • Django Framework
  • AWS
  • Redis
  • Database Management


Qualifications:

  • Bachelor’s degree in Computer Science or equivalent experience.


If you are passionate about solving problems and have the required qualifications, we want to hear from you! You must be an excellent verbal and written communicator, enjoy collaborating with others, and welcome discussing a plan upfront. We offer a competitive salary, flexible work hours, and a dynamic work environment.


Read more
AbleCredit

Posted by Utkarsh Apoorva
Bengaluru (Bangalore), Pune
2 - 5 yrs
₹15L - ₹30L / yr
Python
PyTorch
Shell Scripting

Salary: INR 15 to INR 30 lakhs per annum

Performance Bonus: Up to 10% of the base salary can be added

Location: Bangalore or Pune

Experience: 2-5 years


About AbleCredit:

AbleCredit is on a mission to solve the Credit Gap of emerging economies. In India alone, the Credit Gap is over USD 5T (Trillion!). This is the single largest contributor to poverty, a poor Gini index, and lack of opportunity. Our vision is to deploy AI reliably and safely to solve some of the greatest problems of humanity.



Job Description:

This role is ideal for someone with a strong foundation in deep learning and hands-on experience with AI technologies.


  • You will be tasked with solving complex, real-world problems using advanced machine learning models in a privacy-sensitive domain, where your contributions will have a direct impact on business-critical processes.
  • As a Machine Learning Engineer at AbleCredit, you will collaborate closely with the founding team, who bring decades of industry expertise to the table.
  • You’ll work on deploying cutting-edge Generative AI solutions at scale, ensuring they align with strict privacy requirements and optimize business outcomes.


This is an opportunity for experienced engineers to bring creative AI solutions to one of the most challenging and evolving sectors, while making a significant difference to the company’s growth and success.



Requirements:

  • Experience: 2-4 years of hands-on experience in applying machine learning and deep learning techniques to solve complex business problems.
  • Technical Skills: Proficiency in standard ML tools and languages, including:
  • Python: Strong coding ability for building, training, and deploying machine learning models.
  • PyTorch (or MLX or Jax): Solid experience in one or more deep learning frameworks for developing and fine-tuning models.
  • Shell scripting: Familiarity with Unix/Linux shell scripting for automation and system-level tasks.
  • Mathematical Foundation: Good understanding of the mathematical principles behind machine learning and deep learning (linear algebra, calculus, probability, optimization).
  • Problem Solving: A passion for solving tough, ambiguous problems using AI, especially in data-sensitive, large-scale environments.
  • Privacy & Security: Awareness and understanding of working in privacy-sensitive domains, adhering to best practices in data security and compliance.
  • Collaboration: Ability to work closely with cross-functional teams, including engineers, product managers, and business stakeholders, and communicate technical ideas effectively.
  • Work Experience: This position is for experienced candidates only.
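The optimization loop at the heart of this work can be sketched without any framework. The toy example below fits y = 2x + 1 by gradient descent, following the same forward/loss-gradient/update structure a PyTorch training step has:

```python
# Toy data generated from y = 2x + 1; the goal is to recover w ~ 2, b ~ 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    # Forward pass: current predictions
    preds = [w * x + b for x in xs]
    # Backward pass: analytic gradients of mean-squared error wrt w and b
    n = len(xs)
    grad_w = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / n
    grad_b = sum(2 * (p - y) for p, y in zip(preds, ys)) / n
    # Update step (what optimizer.step() does in PyTorch)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # converges to ~2.0 and ~1.0
```

In PyTorch the gradients would come from autograd on a tensor graph rather than the hand-derived expressions here, but the loop shape is the same.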


Additional Information:

  • Location: Pune or Bangalore
  • Work Environment: Collaborative and entrepreneurial, with close interactions with the founders.
  • Growth Opportunities: Exposure to large-scale AI systems, GenAI, and working in a data-driven privacy-sensitive domain.
  • Compensation: Competitive salary and ESOPs, based on experience and performance
  • Industry Impact: You’ll be at the forefront of applying Generative AI to solve high-impact problems in the finance/credit space, helping shape the future of AI in the business world.
Read more
Pune, Hybrid
3 - 5 yrs
₹8L - ₹16L / yr
Python
React.js
AngularJS (1.x)
HTML/CSS
NodeJS (Node.js)

About the Company:

We are a dynamic and innovative company committed to delivering exceptional solutions that empower our clients to succeed. With our headquarters in the UK and a global footprint across the US, Noida, and Pune in India, we bring a decade of expertise to every endeavour, driving real results. We take a holistic approach to project delivery, providing end-to-end services that encompass everything from initial discovery and design to implementation, change management, and ongoing support. Our goal is to help clients leverage the full potential of the Salesforce platform to achieve their business objectives.

What Makes VE3 The Best For You: We think of your family as our family, no matter the shape or size. We offer maternity leave, PF fund contributions, and a 5-day working week, along with a generous paid time off program that helps you balance your work and personal life.


Job Overview:

We are looking for a talented and experienced Senior Full Stack Web Developer who will be responsible for designing, developing, and implementing software solutions. As a part of our innovative team in Pune, you will work closely with global teams, transforming requirements into technical solutions while maintaining a strong focus on quality and efficiency.


Requirements


Key Responsibilities:

Software Design & Development:

Design software solutions based on requirements and within the constraints of architectural and design guidelines.

Derive software requirements, validate software specifications, and conduct feasibility analysis.

Accurately translate software architecture into design and code.

Technical Leadership:

Guide Scrum team members on design topics and ensure consistency against the design and architecture.

Lead the team in test automation design and implementation.

Identify opportunities for harmonization and reuse of components/technology.

Coding & Implementation:

Actively participate in coding features and bug-fixing, ensuring adherence to coding and quality guidelines.

Lead by example in delivering solutions for self-owned components.

Collaboration & Coordination:

Collaborate with globally distributed teams to develop scalable and high-quality software products.

Ensure seamless integration and communication across multiple locations.


Required Skills and Qualifications:

Education: Bachelor's degree in Engineering or a related technical field.

Experience: 4-5 years in software design and development.

Technical Skills:


Backend Development:

Strong experience in microservices API development using Java, Python, Spring Cloud.

Proficiency in build tools like Maven.


Frontend Development:

Expertise in full stack web development using JavaScript, Angular, React JS, NodeJS, HTML5, and CSS3.


Database Knowledge:

Working knowledge of Oracle/PostgreSQL databases.

Cloud & DevOps:

Hands-on experience with AWS (Lambda, API Gateway, S3, EC2, EKS).

Exposure to CI/CD tools, code analysis, and test automation.

Operating Systems:

Proficiency in both Windows and Unix-based environments.

Nice to Have:

Experience with Terraform for infrastructure automation.
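For the AWS portion, a minimal sketch of a Python function in the shape API Gateway proxies to Lambda; the route logic is invented, and the event fields follow the proxy-integration format:

```python
import json

def lambda_handler(event, context):
    """Handler in the API Gateway proxy-integration shape."""
    method = event.get("httpMethod")
    path = event.get("path")
    if method == "GET" and path == "/health":
        status, body = 200, {"ok": True}
    else:
        status, body = 404, {"error": "not found"}
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

# Local invocation with a hand-built event; context is unused here
resp = lambda_handler({"httpMethod": "GET", "path": "/health"}, None)
print(resp["statusCode"], resp["body"])  # 200 {"ok": true}
```

Deployed behind API Gateway, the event and context arguments are supplied by the runtime, and the returned dict is translated back into an HTTP response.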

Soft Skills:

Individual Contributor: Ability to work independently while being a strong team player.

Problem-Solving: Strong analytical skills to identify issues and implement effective solutions.

Communication: Excellent verbal and written communication skills for collaboration with global teams.


Benefits

  • Competitive salary and benefits package.
  • Unlimited opportunities for professional growth and development.
  • Collaborative and supportive work environment.
  • Flexible working hours
  • Work-life balance
  • Onsite opportunities
  • Retirement Plans
  • Team Building activities
  • Visit us @ https://www.ve3.global


Read more