50+ Python Jobs in Bangalore (Bengaluru) | Python Job openings in Bangalore (Bengaluru)
Apply to 50+ Python Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.


Mandatory
Strong Senior / Lead Software Engineer profile
Mandatory (Experience 1) - Must have a minimum of 6 years of experience in software development, including 1-2 years in a Senior or Lead role.
Mandatory (Experience 2) - Must have experience with Python plus Django, Flask, or a similar framework.
Mandatory (Experience 3) - Must have experience with relational databases (e.g., MySQL, PostgreSQL, Oracle).
Mandatory (Experience 4) - Must have good experience with microservices or distributed-system frameworks (e.g., Kafka, Google Pub/Sub, AWS SNS, Azure Service Bus) or message brokers (e.g., RabbitMQ).
Mandatory (Location) - Candidate must be based in Bengaluru.
Mandatory (Company) - Product / start-up companies only.
Mandatory (Stability) - Should have worked for at least 2 years at one company in the last 3 years.

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python.
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy).
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.
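The SQL skills listed above (joins, aggregations) can be sketched with Python's built-in sqlite3 module; the customers/orders tables and their data here are invented purely for illustration:

```python
import sqlite3

# Hypothetical tables, used only to illustrate a join + aggregation.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# Total order amount per customer: a join, an aggregate, and a sort.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()
```

The same pattern scales up to the warehouse engines named later in this page (Redshift, Snowflake); only the connection layer changes.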

Role description:
You will be building curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires hands-on development and engineering skills across the GenAI application lifecycle, including data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. A strong ML background combined with engineering skills is highly preferred for this LLMOps role.
Required skills:
- 4-8 years of experience in working on ML projects that includes business requirement gathering, model development, training, deployment at scale and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, data engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked on proprietary and open source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience on model performance tuning, RAG, guardrails, prompt engineering, evaluation and observability
- Experience in GenAI application deployment on cloud and on-premise at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge on Kubernetes
- Experience with at least one cloud (AWS / GCP / Azure) to deploy AI services
- Experience in creating workable prototypes using Agentic AI frameworks like CrewAI, Taskweaver, AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Experience with open-source tools for ML development, deployment, observability, and integration is desired
- Background on DevOps and MLOps will be a plus
- Experience working on collaborative code versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills
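The RAG skills this role asks for can be illustrated with a toy retrieval-and-prompt-assembly sketch. No real LLM, vector store, or LangChain call is involved here; the documents and the naive keyword-overlap scoring are invented purely to show the shape of the flow:

```python
# Toy RAG retrieval step: rank documents against a query, then build a
# grounded prompt. A production system would use embeddings and send
# the prompt to an LLM; this sketch stops before that call.

DOCS = [
    "Invoices are generated on the first business day of each month.",
    "Refunds are processed within five business days of approval.",
    "The support team operates from 9 AM to 6 PM IST on weekdays.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by shared lowercase tokens with the query (naive)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a context-grounded prompt for a downstream LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When are refunds processed?", DOCS)
```

Guardrails, evaluation (e.g., with RAGAS), and observability then wrap around exactly this retrieve-then-generate loop.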

Role Summary:
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 2+ years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Responsibilities:
· Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift.
· Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
· Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
· Implement data governance and security best practices to ensure compliance and data integrity.
· Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
· Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
· 2+ years of prior experience in data engineering, with a focus on designing and building data pipelines.
· Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
· Strong programming skills in languages such as Python, Java, or Scala.
· Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
· Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.

Role & Responsibilities
Take end-to-end ownership of critical backend services — from architecture and development to deployment and scale.
Design systems for performance, reliability, and observability. Identify bottlenecks and eliminate them proactively.
Collaborate with product and design to deeply understand user pain points and build the right solutions.
Work independently and own complex modules with minimal oversight.
Champion clean, maintainable code and help set a high bar for engineering excellence across the team.
Stay up-to-date with new tools, technologies, and backend trends — and bring the best ideas into our stack.
Ideal Candidate
2+ years of backend development experience, ideally with Kotlin and Spring Boot (or willingness to ramp up quickly).
You’ve worked in fast-moving teams and thrive when given room to figure things out.
You take initiative and can drive complex modules to completion without needing constant guidance.
Strong with both low-level and high-level design; you know how to build scalable, reliable RESTful APIs.
Proficient with relational databases and aware of common performance pitfalls.
Confident with debugging and optimizing — memory leaks, latency issues, and other hard-to-find problems don’t scare you.
You write clean, testable code and know how to leave systems better than you found them.
You bring a product mindset — caring not just about what’s built, but why and how it delivers value to users.

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in the Python programming language and SQL.
● Experience with Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket will vary according to the candidate's experience -
- Experience from 4 yrs to 6 yrs - salary up to 22 LPA
- Experience from 5 yrs to 8 yrs - salary up to 30 LPA
- Experience of more than 8 yrs - salary up to 40 LPA

Role Overview
We are looking for a QA Engineer with 2–5 years of experience in manual testing and software quality assurance. The ideal candidate is detail-oriented, analytical, and a strong team player, capable of ensuring a high standard of product quality across our web and mobile applications. While the role is primarily focused on manual testing, a foundational understanding of automation and scripting will be helpful when collaborating with the automation team.
Responsibilities
- Design, write, and execute comprehensive manual test cases for web and mobile applications.
- Perform various types of testing including functional, regression, exploratory, UI, and API testing.
- Log and track bugs using test management and defect tracking tools.
- Collaborate with developers, product managers, and other QA members to ensure thorough test coverage.
- Support the automation team by identifying automatable scenarios and reviewing scripts when necessary.
- Participate in all phases of the QA lifecycle – including test planning, execution, and release sign-off.
- Ensure timely delivery of high-quality software releases.
Requirements
Education:
- Bachelor’s degree in Engineering (Computer Science, IT, or a related field)
Experience:
- 2–5 years of hands-on experience in manual testing for web and mobile applications
Must-Have Skills:
- Strong knowledge of test case writing, planning, and defect management
- Basic familiarity with automation tools like Selenium
- Basic knowledge of Python
- Experience with API testing using tools like Postman
- Understanding of Agile/Scrum development methodologies
- Excellent communication skills and ability to collaborate across functions
Good-to-Have:
- Exposure to version control systems like Git and CI/CD tools like Jenkins
- Basic understanding of performance or security testing
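Since the role expects strong test-case writing plus basic Python, here is a minimal sketch of boundary-value test design in Python. The discount rule and its thresholds are hypothetical, invented only to show how typical, boundary, and invalid inputs each get an explicit case:

```python
# A hypothetical discount rule used only to illustrate test-case design.

def discount(cart_total: float) -> float:
    """Return the discount fraction for a cart total (hypothetical rule)."""
    if cart_total < 0:
        raise ValueError("cart total cannot be negative")
    if cart_total >= 1000:
        return 0.10
    if cart_total >= 500:
        return 0.05
    return 0.0

def test_boundaries():
    assert discount(499.99) == 0.0   # just below the first threshold
    assert discount(500) == 0.05     # exactly on the threshold
    assert discount(1000) == 0.10    # upper-tier boundary

def test_invalid_input():
    try:
        discount(-1)
    except ValueError:
        pass                         # negative totals are rejected
    else:
        raise AssertionError("negative totals must be rejected")
```

The same case-selection discipline (boundaries, happy path, invalid input) carries over directly to manual test plans and to Selenium or Postman suites.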


About the Company – Gruve
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.
As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.
Why Gruve
At Gruve, we foster a culture of:
- Innovation, collaboration, and continuous learning
- Diversity and inclusivity, where everyone is encouraged to thrive
- Impact-focused work — your ideas will shape the products we build
We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.
Position Summary
We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.
You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.
Key Responsibilities
- Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
- Build and maintain automation to track:
  - Physical Assets: servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
  - Virtual Assets: load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
  - Cloud Assets: public cloud services, process registry, database resources.
- Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
- Optimize performance and scalability to handle large-scale asset data in real-time.
- Document system architecture, implementation, and usage.
- Generate reports for compliance and auditing.
- Ensure integration with existing systems for streamlined asset management.
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related field
- 3–6 years of experience in software development
- Strong proficiency in Golang and Python
- Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
- Deep understanding of automation solutions and parallel computing principles
Preferred Qualifications
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills

The CRM team is responsible for communications across email, mobile push and web push channels. We focus on our existing customers and manage our interactions and touchpoints to ensure that we optimise revenue generation, drive traffic to the website and app, and extend the active customer lifecycle. We also work closely with the Marketing and Product teams to ensure that any initiatives are integrated with CRM activities.
Our setup is highly data driven and requires the understanding and skill set to work with large datasets, employing data science techniques to create personalised content at a 1:1 level. The candidate for this role will have to demonstrate a strong background working in this environment, and have a proven track record of striving to find technical solutions for the many projects and situations that the business encounters.
Overview of role :
- Setting up automation pipelines in Python and SQL to flow data in and out of CRM platform for reporting, personalisation and use in data warehousing (Redshift)
- Writing, managing, and troubleshooting template logic written in Freemarker.
- Building proprietary algorithms for use in CRM campaigns, targeted at improving all areas of customer lifecycle.
- Working with big datasets to segment audiences on a large scale.
- Driving innovation by planning and implementing a range of AB tests.
- Acting as a technical touchpoint for developer and product teams to push projects over the line.
- Integrating product initiatives into CRM, and performing user acceptance testing (UAT)
- Interacting with multiple departments, and presenting to our executive team to help them understand CRM activities and plan new initiatives.
- Working with third party suppliers to optimise and improve their offering.
- Creating alert systems and troubleshooting tools to check in on health of automated jobs running in Jenkins and CRM platform.
- Setting up automated reporting in Amazon Quicksight.
- Assisting other teams with any technical advice/information they may require.
- When necessary, working in JavaScript to set up Marketing and CRM tags in Adobe Launch.
- Training team members and working with them to make processes more efficient.
- Working with REST APIs to integrate CRM System with a range of technologies from third party vendors to in-house services.
- Contributing to discussions on future strategy, interpretation of test results, and helping resolve any major CRM issues
Key skills required :
- Strong background in SQL
- Experience with a programming language (preferably Python or Freemarker)
- Understanding of REST APIs and how to utilise them
- Technical-savvy - you cast a creative eye on all activities of the team and business and suggest new ideas and improvements
- Comfortable presenting and interacting with all levels of the business and able to communicate technical information in a clear and concise manner.
- Ability to work under pressure and meet tight deadlines.
- Strong attention to detail
- Experience working with large datasets, and able to spot and pick up on important trends
- Understanding of key CRM metrics on performance and deliverability


Exp: 4-6 years
Position: Backend Engineer
Job Location: Bangalore (office near Cubbon Park, opposite JW Marriott)
Work Mode : 5 days work from office
Requirements:
● Engineering graduate with 3-5 years of experience in software product development.
● Proficient in Python, Node.js, Go
● Good knowledge of SQL and NoSQL
● Strong Experience in designing and building APIs
● Experience with working on scalable interactive web applications
● A clear understanding of software design constructs and their implementation
● Understanding of the threading limitations of Python and multi-process architecture
● Experience implementing Unit and Integration testing
● Exposure to the Finance domain is preferred
● Strong written and oral communication skills
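The "threading limitations of Python" bullet above refers to CPython's GIL: threads don't run Python bytecode in parallel, yet shared state can still interleave, which is why multi-process architectures are used for CPU-bound work. A minimal sketch of the shared-state hazard and its fix:

```python
import threading

# CPython's GIL prevents parallel bytecode execution, but a statement
# like `count += 1` is still a read-modify-write that threads can
# interleave, so shared mutable state needs a lock. CPU-bound work is
# normally pushed to multiple processes instead of threads.

count = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global count
    for _ in range(n):
        with lock:          # serialize the read-modify-write
            count += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# count is now exactly 4 * 10_000 because every increment was locked.
```

Without the lock, the final count can silently come up short under contention, which is the kind of bug this requirement is probing for.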

We are looking for skilled and passionate Data Engineers with a strong foundation in Python programming and hands-on experience working with APIs, AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and working with big data tools like PySpark.
Key Responsibilities:
- Write clean, scalable, and efficient Python code.
- Work with Python data-processing frameworks such as PySpark.
- Design, develop, update, and maintain APIs (RESTful).
- Deploy and manage code using GitHub CI/CD pipelines.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on AWS cloud services for application deployment and infrastructure.
- Basic database design and interaction with MySQL or DynamoDB.
- Debugging and troubleshooting application issues and performance bottlenecks.
Required Skills & Qualifications:
- 4+ years of hands-on experience with Python development.
- Proficient in Python basics with a strong problem-solving approach.
- Experience with AWS Cloud services (EC2, Lambda, S3, etc.).
- Good understanding of API development and integration.
- Knowledge of GitHub and CI/CD workflows.
- Experience in working with PySpark or similar big data frameworks.
- Basic knowledge of MySQL or DynamoDB.
- Excellent communication skills and a team-oriented mindset.
Nice to Have:
- Experience in containerization (Docker/Kubernetes).
- Familiarity with Agile/Scrum methodologies.

About the job
Location: Bangalore, India
Job Type: Full-Time | On-Site
Job Description
We are looking for a highly skilled and motivated Python Backend Developer to join our growing team in Bangalore. The ideal candidate will have a strong background in backend development with Python, deep expertise in relational databases like MySQL, and hands-on experience with AWS cloud infrastructure.
Key Responsibilities
- Design, develop, and maintain scalable backend systems using Python.
- Architect and optimize relational databases (MySQL), including complex queries and indexing.
- Manage and deploy applications on AWS cloud services (EC2, S3, RDS, DynamoDB, API Gateway, Lambda).
- Automate cloud infrastructure using CloudFormation or Terraform.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Mentor junior developers and contribute to a culture of technical excellence.
- Proactively identify issues and provide solutions to challenging backend problems.
Mandatory Requirements
- Minimum 3 years of professional experience in Python backend development.
- Expert-level knowledge in MySQL database creation, optimization, and query writing.
- Strong experience with AWS services, particularly EC2, S3, RDS, DynamoDB, API Gateway, and Lambda.
- Hands-on experience with infrastructure as code using CloudFormation or Terraform.
- Proven problem-solving skills and the ability to work independently.
- Demonstrated leadership abilities and team collaboration skills.
- Excellent verbal and written communication.

Role: Senior Software Engineer - Backend
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Senior Backend Engineer with a minimum of 3 years of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that power our applications. You will work closely with cross-functional teams to ensure seamless integration between frontend and backend components, leveraging your expertise to architect scalable, secure, and high-performance solutions. As a senior team member, you will mentor junior developers and lead technical initiatives to drive innovation and excellence.
Annual Compensation: 12-18 LPA
Responsibilities:
- Lead the design, development, and maintenance of scalable and efficient backend systems and APIs.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases and NoSQL databases.
- Mentor and guide junior developers, fostering a culture of knowledge sharing and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and provide technical leadership in architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 3 years of proven experience as a Backend Engineer, with a strong portfolio of product-building projects.
- Strong proficiency in backend development using Java, Python, and JavaScript, with experience in building scalable and high-performance applications.
- Experience with popular backend frameworks and libraries for Java (e.g., Spring Boot) and Python (e.g., Django, Flask).
- Strong expertise in SQL and NoSQL databases (e.g., MySQL, MongoDB) with a focus on data modeling and scalability.
- Practical experience with caching mechanisms (e.g., Redis) to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
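The asynchronous, event-driven requirements above can be sketched with Python's standard asyncio library; the handler and request IDs here are invented for illustration:

```python
import asyncio

# Minimal event-driven sketch: several I/O-bound "requests" are awaited
# concurrently on one event loop instead of being handled sequentially.

async def handle(request_id: int) -> str:
    await asyncio.sleep(0)          # stand-in for a DB or network await
    return f"request-{request_id}: ok"

async def main() -> list[str]:
    # gather schedules all handlers concurrently and preserves order.
    return await asyncio.gather(*(handle(i) for i in range(3)))

results = asyncio.run(main())
```

The same shape underlies the event-driven and serverless designs the posting mentions: each handler yields at its await points so the loop can service other events.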
Job Title : Automation Quality Engineer (Gen AI)
Experience : 3 to 5+ Years
Location : Bangalore / Chennai / Pune
Role Overview :
We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.
You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.
Key Responsibilities :
- Develop and maintain test strategies for AI models, APIs, and user interfaces.
- Build automation frameworks and integrate into CI/CD pipelines.
- Validate model accuracy, robustness, and monitor model drift.
- Perform regression, performance, load, and security testing.
- Log and track issues; collaborate with developers to resolve them.
- Ensure compliance with data privacy and ethical AI standards.
- Document QA processes and testing outcomes.
Mandatory Skills :
- Test Automation : Selenium, Playwright, or DeepEval
- Programming/Scripting : Python, JavaScript
- API Testing : Postman, REST Assured
- Cloud & DevOps : Azure, Azure Kubernetes, CI/CD pipelines
- Performance Testing : JMeter
- Bug Tracking : Azure DevOps
- Methodologies : Agile delivery experience
- Soft Skills : Strong communication and problem-solving abilities


About the Role:
- We are looking for a highly skilled and experienced Senior Python Developer to join our dynamic team based in Manyata Tech Park, Bangalore. The ideal candidate will have a strong background in Python development, object-oriented programming, and cloud-based application development. You will be responsible for designing, developing, and maintaining scalable backend systems using modern frameworks and tools.
- This role is hybrid, with a strong emphasis on working from the office to collaborate effectively with cross-functional teams.
Key Responsibilities:
- Design, develop, test, and maintain backend services using Python.
- Develop RESTful APIs and ensure their performance, responsiveness, and scalability.
- Work with popular Python frameworks such as Django or Flask for rapid development.
- Integrate and work with cloud platforms (AWS, Azure, GCP or similar).
- Collaborate with front-end developers and other team members to establish objectives and design cohesive code.
- Apply object-oriented programming principles to solve real-world problems efficiently.
- Implement and support event-driven architectures where applicable.
- Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
- Write clean, maintainable, and reusable code with proper documentation.
- Contribute to system architecture and code review processes.
Required Skills and Qualifications:
- Minimum of 5 years of hands-on experience in Python development.
- Strong understanding of Object-Oriented Programming (OOP) and Data Structures.
- Proficiency in building and consuming REST APIs.
- Experience working with at least one cloud platform such as AWS, Azure, or Google Cloud Platform.
- Hands-on experience with Python frameworks like Django, Flask, or similar.
- Familiarity with event-driven programming and asynchronous processing.
- Excellent problem-solving, debugging, and troubleshooting skills.
- Strong communication and collaboration abilities to work effectively in a team environment.

Key Responsibilities:
- Design, develop, and maintain data pipelines on AWS.
- Work with large-scale data processing using SQL, Python or PySpark.
- Implement and optimize ETL processes for structured and unstructured data.
- Develop and manage data models in Snowflake.
- Ensure data security, integrity, and compliance on AWS cloud infrastructure.
- Collaborate with cross-functional teams to support data-driven decision-making.
Required Skills:
- Strong hands-on experience with AWS services
- Proficiency in SQL, Python, or PySpark for data processing and transformation.
- Experience working with Snowflake for data warehousing.
- Strong understanding of data modeling, data governance, and performance tuning.
- Knowledge of CI/CD pipelines for data workflows is a plus.
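The ETL responsibilities above can be sketched as a toy extract-transform-load pass in plain Python; the CSV payload and column names are invented, and in the actual stack the transform would run in PySpark or SQL against Snowflake:

```python
import csv
import io

# Toy ETL: extract rows from raw CSV, transform (type coercion +
# aggregation per region), and "load" into an in-memory structure.

RAW = """order_id,region,amount
1,south,100
2,north,250
3,south,75
"""

def extract(text: str) -> list[dict]:
    """Parse the raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> dict[str, float]:
    """Aggregate order amounts per region, coercing strings to numbers."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals

loaded = transform(extract(RAW))
```

A production pipeline adds the pieces the posting lists around this core: scheduling, schema checks for data integrity, and CI/CD for the transform code.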

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM to 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
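The Raw → Silver → Gold layering mentioned above can be sketched in plain Python; the records are invented, and in the actual stack each layer would be a Spark job orchestrated by Airflow writing Delta or Iceberg tables:

```python
# Toy medallion-architecture pass: Raw (as ingested) -> Silver
# (cleaned, typed) -> Gold (business-level aggregate).

raw = [
    {"user": " Asha ", "spend": "120"},
    {"user": "ravi", "spend": "80"},
    {"user": " Asha ", "spend": "30"},
]

# Silver: normalize keys and coerce types so downstream joins are safe.
silver = [{"user": r["user"].strip().lower(), "spend": int(r["spend"])} for r in raw]

# Gold: aggregate ready for analysts and BI tools.
gold: dict[str, int] = {}
for r in silver:
    gold[r["user"]] = gold.get(r["user"], 0) + r["spend"]
```

Keeping each layer idempotent and re-runnable is what makes the migration of existing workloads into such a framework tractable.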
⚠️ Please apply only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG


Job Description :
We are seeking a highly skilled and motivated Python Developer with 4 to 6 years of experience to join our dynamic development team. The ideal candidate will have expertise in Python programming and be proficient in building scalable, secure, and efficient applications. The role involves collaborating with cross-functional teams to design, develop, and maintain software solutions.
The core responsibilities for the job include the following :
1. Application Development:
- Write clean, efficient, and reusable Python code.
- Develop scalable backend solutions and RESTful APIs.
- Optimize applications for maximum speed and scalability.
2. Integration and Database Management:
- Integrate data storage solutions such as relational databases (MySQL, PostgreSQL) or NoSQL databases (e.g., MongoDB).
- Work with third-party APIs and libraries to enhance application functionality.
3. Collaboration and Problem-Solving:
- Collaborate with front-end developers, designers, and project managers.
- Debug, troubleshoot, and resolve application issues promptly.
4. Code Quality and Documentation:
- Adhere to coding standards and best practices.
- Write comprehensive technical documentation and unit tests.
5. Innovation and Optimization:
- Research and implement new technologies and frameworks to improve software performance.
- Identify bottlenecks and devise solutions to optimize performance.
6. Requirements:
- Strong programming skills in Python with 4-6 years of hands-on experience.
- Proficiency in at least one Python web framework (e.g., Django, Flask, FastAPI).
- Experience with RESTful API development and integration.
- Knowledge of database design and management using SQL (MySQL, PostgreSQL) and NoSQL (MongoDB).
- Familiarity with cloud platforms (e.g., AWS, Azure, or Google Cloud) and containerization tools like Docker.
- Experience with version control systems like Git.
- Strong understanding of software development lifecycle (SDLC) and Agile methodologies.
- Knowledge of front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
- Experience with testing frameworks like Pytest or Unittest.
- Working knowledge of Java is a plus.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
7. Preferred Skills:
- Knowledge of data processing libraries such as Pandas or NumPy.
- Experience with machine learning frameworks like TensorFlow or PyTorch (optional but a plus).
- Familiarity with CI/CD pipelines and deployment practices.
- Experience in message brokers like RabbitMQ or Kafka.
8. Soft Skills:
- Excellent problem-solving skills and attention to detail.
- Strong communication and teamwork abilities.
- Ability to manage multiple tasks and meet deadlines in a fast-paced environment.
- Willingness to learn and adapt to new technologies.
General Description:
Owns all technical aspects of software development for assigned applications.
Participates in the design and development of systems & application programs.
Functions as a senior member of an agile team and helps drive consistent development practices – tools, common components, and documentation.
Required Skills:
In-depth experience configuring and administering EKS clusters in AWS.
In-depth experience configuring **Datadog** in AWS environments, especially in **EKS**.
In-depth understanding of OpenTelemetry and configuration of **OpenTelemetry Collectors**.
In-depth knowledge of observability concepts and strong troubleshooting experience.
Experience in implementing comprehensive monitoring and logging solutions in AWS using **CloudWatch**.
Experience in **Terraform** and Infrastructure as code.
Experience in **Helm**
Strong scripting skills in Shell and/or Python.
Experience with large-scale distributed systems and architecture knowledge (Linux/UNIX and Windows operating systems, networking, storage) in a cloud computing or traditional IT infrastructure environment.
Must have a good understanding of cloud concepts (Storage /compute/network).
Experience collaborating with cross-functional teams to architect observability pipelines for GCP services such as GKE, Cloud Run, and BigQuery.
Experience with Git and GitHub.
Proficient in developing and maintaining technical documentation, ADRs, and runbooks.


Role & Responsibilities
Lead the design, development, and deployment of complex, scalable, reliable, and highly available features for world-class SaaS products and services.
Guide the engineering team in adopting best practices for software development, code quality, and architecture.
Make strategic architectural and technical decisions, ensuring the scalability, security, and performance of software applications.
Proactively identify, prioritize, and address technical debt to improve system performance, maintainability, and long-term scalability, ensuring a solid foundation for future development.
Collaborate with cross-functional teams (product managers, designers, and stakeholders) to define project scope, requirements, and timelines.
Mentor and coach team members, providing technical guidance and fostering professional development.
Oversee code reviews, ensuring adherence to best practices and maintaining high code quality standards.
Drive continuous improvement in development processes, tools, and technologies to increase team productivity and product quality.
Stay updated with the latest industry trends and emerging technologies to drive innovation and keep the team at the cutting edge.
Ensure project timelines and goals are met, managing risks and resolving any technical challenges that arise during development.
Foster a collaborative and inclusive team culture, promoting open communication and problem-solving.
Imbibe and maintain a strong customer delight attitude while designing and building products.

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure—spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning.
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency, and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix, or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps in on its own unless we constantly question it. We are deliberate about the rituals we commit to, because they take time away from the actual work. We truly believe that a meeting that could be an email should be an email, and you don’t need the person with the highest title to say it out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
Basic Qualifications:
- A bachelor’s or master’s degree in Computer Science, Electronics Engineering, or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.

Location: Malleshwaram/MG Road
Work: Initially Onsite and later Hybrid
We are committed to becoming a true DevOps house and want your help. The role will require close liaison with development and test teams to increase the effectiveness of current dev processes. Participation in an out-of-hours emergency support rotation will be required. You will be shaping the way that we use our DevOps tools and innovating to deliver business value and improve the cadence of the entire dev team.
Required Skills:
• Good knowledge of the Amazon Web Services suite (EC2, ECS, Load Balancing, VPC, S3, RDS, Lambda, CloudWatch, IAM, etc.)
• Hands on knowledge on container orchestration tools – Must have: AWS ECS and Good to have: AWS EKS
• Good knowledge on creating and maintaining the infrastructure as code using Terraform
• Solid experience with CI-CD tools like Jenkins, git and Ansible
• Working experience supporting microservices (deploying, maintaining, and monitoring Java web-based production applications using Docker containers)
• Strong knowledge of debugging production issues across the services and technology stack, and of application monitoring (we use Splunk & CloudWatch)
• Experience with software build tools (Maven and Node)
• Experience with scripting and automation languages (Bash, Groovy, JavaScript, Python)
• Experience with Linux administration and CVE scanning – Amazon Linux, Ubuntu
• 4+ years as an AWS DevOps Engineer
Optional skills:
• Oracle/SQL database maintenance experience
• Elasticsearch
• Serverless/container based approach
• Automated testing of infrastructure deployments
• Experience of performance testing & JVM tuning
• Experience of a high-volume distributed eCommerce environment
• Experience working closely with Agile development teams
• Familiarity with load testing tools & process
• Experience with nginx, tomcat and apache
• Experience with Cloudflare
Personal attributes
The successful candidate will be comfortable working autonomously and independently. They will be keen to bring an entire team to the next level of delivering business value. A proactive approach to problem-solving is essential.



About the Role:
We are looking for a skilled Full Stack Developer (Python & React) to join our Data & Analytics team. You will design, develop, and maintain scalable web applications while collaborating with cross-functional teams to enhance our data products.
Responsibilities:
- Develop and maintain web applications (front-end & back-end).
- Write clean, efficient code in Python and TypeScript (React).
- Design and implement RESTful APIs.
- Work with Snowflake, NoSQL, and streaming data platforms.
- Build reusable components and collaborate with designers & developers.
- Participate in code reviews and improve development processes.
- Debug and resolve software defects while staying updated with industry trends.
Qualifications:
- Passion for immersive user experiences and data visualization tools (e.g., Apache Superset).
- Proven experience as a Full Stack Developer.
- Proficiency in Python (Django, Flask) and JavaScript/TypeScript (React).
- Strong understanding of HTML, CSS, SQL/NoSQL, and Git.
- Knowledge of software development best practices and problem-solving skills.
- Experience with AWS, Docker, Kubernetes, and FaaS.
- Knowledge of Terraform and testing frameworks (Playwright, Jest, pytest).
- Familiarity with Agile methodologies and open-source contributions.


Job Description -
Role - Sr. Python Developer
Location -- Manyata Tech Park, Bangalore
Mode - Hybrid
Required Tech Skills:
- Experience in Python
- Experience with a framework such as Django or Flask
- Primary and secondary skills: Python, OOP, and data structures
- Good understanding of REST APIs
- Familiarity with event-driven programming in Python
- Good analytical and troubleshooting skills
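The event-driven programming skill above can be sketched with a minimal in-process event emitter (pure Python and illustrative only; real systems would typically build on asyncio or a message broker, and the event name and payload here are invented):

```python
from collections import defaultdict
from typing import Callable

class EventEmitter:
    """Minimal synchronous pub/sub: handlers subscribe to named events."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event: str, handler: Callable) -> None:
        self._handlers[event].append(handler)

    def emit(self, event: str, payload) -> None:
        # Fire every handler registered for this event, in subscription order.
        for handler in self._handlers[event]:
            handler(payload)

emitter = EventEmitter()
received = []
emitter.on("user_created", lambda user: received.append(user["name"]))
emitter.emit("user_created", {"name": "Asha"})
print(received)  # ['Asha']
```

The same subscribe/emit shape underlies larger event-driven designs, where the in-memory handler list is replaced by queues or broker topics.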


JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon-Fri, in-office role with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms.
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
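As a toy illustration of the path-planning concepts above, here is a pure-Python A* search on a tiny invented grid (production planners in this stack would be GPU-accelerated via Isaac ROS, not written like this):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid where 1 marks an obstacle.
    Returns the path cost in moves, or None if the goal is unreachable.
    Toy sketch of planning concepts, not a production planner."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]  # (f = g + h, g, position)
    best = {start: 0}
    while open_set:
        _, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [
    [0, 0, 0],
    [1, 1, 0],  # wall forcing a detour around the right side
    [0, 0, 0],
]
print(astar(grid, (0, 0), (2, 0)))  # 6
```

The heuristic-guided frontier is the same idea that GPU-accelerated SLAM and navigation pipelines scale up to much larger state spaces.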
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
Mon-Fri, In office role with excellent perks and benefits!
Key Responsibilities:
1. Design, develop, and maintain backend services and APIs using Node.js, Python, or Java.
2. Build and implement scalable and robust microservices and integrate API gateways.
3. Develop and optimize NoSQL database structures and queries (e.g., MongoDB, DynamoDB).
4. Implement real-time data pipelines using Kafka.
5. Collaborate with front-end developers to ensure seamless integration of backend services.
6. Write clean, reusable, and efficient code following best practices, including design patterns.
7. Troubleshoot, debug, and enhance existing systems for improved performance.
Mandatory Skills:
1. Proficiency in at least one backend technology: Node.js, Python, or Java.
2. Strong experience in:
i. Microservices architecture,
ii. API gateways,
iii. NoSQL databases (e.g., MongoDB, DynamoDB),
iv. Kafka
v. Data structures (e.g., arrays, linked lists, trees).
3. Frameworks:
i. If Java: Spring Framework for backend development.
ii. If Python: FastAPI/Django frameworks for AI applications.
iii. If Node: Express.js for Node.js development.
Good to Have Skills:
1. Experience with Kubernetes for container orchestration.
2. Familiarity with in-memory databases like Redis or Memcached.
3. Frontend skills: Basic knowledge of HTML, CSS, JavaScript, or frameworks like React.js.
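The in-memory caching item above (Redis/Memcached) can be illustrated with a toy in-process LRU cache, sketched in Python for brevity (capacity and keys are invented; real deployments use an out-of-process store shared across services):

```python
from collections import OrderedDict

class LRUCache:
    """Toy in-process cache with least-recently-used eviction,
    illustrating the behavior Redis/Memcached provide out of process."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # touch "a" so "b" becomes least recently used
cache.put("c", 3)  # capacity exceeded: evicts "b"
print(cache.get("b"), cache.get("a"))  # None 1
```

Redis applies the same eviction idea via its `maxmemory` policies, but shared across processes and machines.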

Role & Responsibilities
work with peers in Product, QA, and other Engineering departments;
coach and mentor team members;
cautiously drive adoption of new technologies and processes;
preserve our engineering values of quality, scalability, and maintainability;
“see around corners” — identify blind spots and prioritize work across teams;
work with international teams to ensure successful product development and delivery; and
own the overall architecture and systems engineering for your products.



Link to apply- https://tally.so/r/w8RpLk
About Us
At GreenStitch, we are on a mission to revolutionise the fashion and textile industry through cutting-edge climate-tech solutions. We are looking for a highly skilled Data Scientist-I with expertise in NLP and Deep Learning to drive innovation in our data-driven applications. This role requires a strong foundation in AI/ML, deep learning, and software engineering to develop impactful solutions aligned with our sustainability and business objectives.
The ideal candidate demonstrates up-to-date expertise in Deep Learning and NLP and applies it to the development, execution, and improvement of applications. This role involves supporting and aligning efforts to meet both customer and business needs.
You will play a key role in building strong relationships with stakeholders, identifying business challenges, and implementing AI-driven solutions. You should be adaptable to competing demands, organisational changes, and new responsibilities while upholding GreenStitch’s mission, values, and ethical standards.
What You’ll Do:
- AI-Powered Applications: Build and deploy AI-driven applications leveraging Generative AI and NLP to enhance user experience and operational efficiency.
- Model Development: Design and implement deep learning models and machine learning pipelines to solve complex business challenges.
- Cross-Functional Collaboration: Work closely with product managers, engineers, and business teams to identify AI/ML opportunities.
- Innovation & Experimentation: Stay up-to-date with the latest AI/ML advancements, rapidly prototype solutions, and iterate on ideas for continuous improvement.
- Scalability & Optimisation: Optimise AI models for performance, scalability, and real-world impact, ensuring production readiness.
- Knowledge Sharing & Thought Leadership: Contribute to publications, patents, and technical forums; represent GreenStitch in industry and academic discussions.
- Compliance & Ethics: Model compliance with company policies and ensure AI solutions align with ethical standards and sustainability goals.
- Communication: Translate complex AI/ML concepts into clear, actionable insights for business stakeholders.
What You’ll Bring:
- Education: Bachelor’s/Master’s degree in Data Science, Computer Science, AI, or a related field.
- Experience: 1-3 years of hands-on experience in AI/ML, NLP, and Deep Learning.
- Technical Expertise: Strong knowledge of transformer-based models (GPT, BERT, etc.), deep learning frameworks (TensorFlow, PyTorch), and cloud platforms (Azure, AWS, GCP).
- Software Development: Experience in Python, Java, or similar languages, and familiarity with MLOps tools.
- Problem-Solving: Ability to apply AI/ML techniques to real-world challenges with a data-driven approach.
- Growth Mindset: Passion for continuous learning and innovation in AI and climate-tech applications.
- Collaboration & Communication: Strong ability to work with cross-functional teams, communicate complex ideas, and drive AI adoption.
Why GreenStitch?
GreenStitch is at the forefront of climate-tech innovation, helping businesses in the fashion and textile industry reduce their environmental footprint. By joining us, you will:
- Work on high-impact AI projects that contribute to sustainability and decarbonisation.
- Be part of a dynamic and collaborative team committed to making a difference.
- Enjoy a flexible, hybrid work model that supports professional growth and work-life balance.
- Receive competitive compensation and benefits, including healthcare, parental leave, and learning opportunities.
Location: Bangalore(India)
Employment Type: Full Time, Permanent
Industry Type: Climate-Tech / Fashion-Tech
Department: Data Science & Machine Learning
Join us in shaping the future of sustainable fashion with cutting-edge AI solutions! 🚀

Apply Link - https://tally.so/r/wv0lEA
Key Responsibilities:
- Software Development:
- Design, implement, and optimise clean, scalable, and reliable code across [backend/frontend/full-stack] systems.
- Contribute to the development of microservices, APIs, or UI components as per the project requirements.
- System Architecture:
- Collaborate on the design and enhancement of system architecture.
- Analyse and identify opportunities for performance improvements and scalability.
- Code Reviews and Mentorship:
- Conduct thorough code reviews to ensure code quality, maintainability, and adherence to best practices.
- Mentor and support junior developers, fostering a culture of learning and growth.
- Agile Collaboration:
- Work within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, and retrospectives.
- Collaborate with the Carbon Science team, designers, and other stakeholders to translate requirements into technical solutions.
- Problem-Solving:
- Investigate, troubleshoot, and resolve complex issues in production and development environments.
- Contribute to incident management and root cause analysis to improve system reliability.
- Continuous Improvement:
- Stay up-to-date with emerging technologies and industry trends.
- Propose and implement improvements to existing codebases, tools, and development processes.
Qualifications:
Must-Have:
- Experience: 2–5 years of professional software development experience in [specify languages/tools, e.g., Java, Python, JavaScript, etc.].
- Education: Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- Technical Skills:
- Strong proficiency in [programming languages/frameworks/tools].
- Experience with cloud platforms like AWS, Azure, or GCP.
- Knowledge of version control tools (e.g., Git) and CI/CD pipelines.
- Understanding of data structures, algorithms, and system design principles.
Nice-to-Have:
- Experience with containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Knowledge of database technologies (SQL and NoSQL).
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent written and verbal communication skills.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.


Job Description
We are seeking an experienced HubSpot CRM Developer to join our team. The ideal candidate will have a strong background in HubSpot development, integrations, and dashboard creation while working with cross-functional teams to optimize CRM functionality.
Key Responsibilities:
● Develop, customize, and optimize HubSpot CRM to meet business needs.
● Integrate HubSpot with other technologies and external business intelligence tools like Power BI and Tableau.
● Build and maintain dashboards and reports to support marketing, sales, operations, and finance teams.
● Work with backend technologies (PHP, Python, Node.js, etc.) to implement integrations and automation.
● High proficiency in JavaScript.
● Develop and maintain scripts in any suitable scripting language to support CRM functionality.
● Ensure seamless data flow and synchronization between HubSpot and external platforms.
● Collaborate with multiple teams to understand data requirements and optimize CRM processes.
Requirements:
● 7-10 years of relevant experience in HubSpot development and integration.
● Strong expertise in backend development using PHP, Python, or Node.js.
● High proficiency in JavaScript and any scripting languages.
● Experience in HubSpot API integration with other platforms.
● Proficiency in dashboard and reporting tools like Power BI and Tableau.
● Ability to work independently (IC role) while collaborating with cross-functional teams.
● Must have experience in handling end-to-end projects from scratch.
● Exposure to multiple industries/domains is preferred (not limited to a single company for 5-10 years).


Key Responsibilities:
- Design, build, and maintain scalable, real-time data pipelines using Apache Flink (or Apache Spark).
- Work with Apache Kafka (mandatory) for real-time messaging and event-driven data flows.
- Build data infrastructure on Lakehouse architecture, integrating data lakes and data warehouses for efficient storage and processing.
- Implement data versioning and cataloging using Apache Nessie, and optimize datasets for analytics with Apache Iceberg.
- Apply advanced data modeling techniques and performance tuning using Apache Doris or similar OLAP systems.
- Orchestrate complex data workflows using DAG-based tools like Prefect, Airflow, or Mage.
- Collaborate with data scientists, analysts, and engineering teams to develop and deliver scalable data solutions.
- Ensure data quality, consistency, performance, and security across all pipelines and systems.
- Continuously research, evaluate, and adopt new tools and technologies to improve our data platform.
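The DAG-based orchestration mentioned in the responsibilities (Prefect, Airflow, Mage) can be sketched with a minimal topological task runner in pure Python; the pipeline below is invented for illustration, and real orchestrators add scheduling, retries, and persistent state on top of this ordering idea:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Invented pipeline: each task maps to the set of tasks it depends on.
dag = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "load_lake": {"validate"},
}

def run_pipeline(dag, tasks):
    """Execute tasks in dependency order, as an orchestrator's scheduler would."""
    order = list(TopologicalSorter(dag).static_order())
    results = [tasks[name]() for name in order]
    return order, results

# Stub task bodies; in practice these would be Flink/Spark jobs or SQL steps.
tasks = {name: (lambda n=name: f"{n}:ok") for name in dag}
order, results = run_pipeline(dag, tasks)
print(order)
```

Note that `validate` fans out to both loads, so the sorter is free to run them in either order once `validate` completes, which is exactly where real orchestrators introduce parallelism.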
Skills & Qualifications:
- 3–6 years of experience in data engineering, building scalable data pipelines and systems.
- Strong programming skills in Python, Go, or Java.
- Hands-on experience with stream processing frameworks – Apache Flink (preferred) or Apache Spark.
- Mandatory experience with Apache Kafka for stream data ingestion and message brokering.
- Proficiency with at least one DAG-based orchestration tool like Airflow, Prefect, or Mage.
- Solid understanding and hands-on experience with SQL and NoSQL databases.
- Deep understanding of data lakehouse architectures, including internal workings of data lakes and data warehouses, not just usage.
- Experience working with at least one cloud platform, preferably AWS (GCP or Azure also acceptable).
- Strong knowledge of distributed systems, data modeling, and performance optimization.
Nice to Have:
- Experience with Apache Doris or other MPP/OLAP databases.
- Familiarity with CI/CD pipelines, DevOps practices, and infrastructure-as-code in data workflows.
- Exposure to modern data version control and cataloging tools like Apache Nessie.
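As an illustration of the DAG-based orchestration idea mentioned above (Airflow, Prefect, Mage), here is a minimal in-process sketch of dependency-ordered task execution using only the Python standard library; the task names and payloads are hypothetical, not from any specific pipeline.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+


def run_pipeline(tasks, deps):
    """Run callables in dependency order; deps maps task -> set of prerequisites."""
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        results[name] = tasks[name](results)
    return results


# Hypothetical three-step extract -> transform -> load pipeline
tasks = {
    "extract": lambda r: [1, 2, 3],
    "transform": lambda r: [x * 10 for x in r["extract"]],
    "load": lambda r: sum(r["transform"]),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
print(run_pipeline(tasks, deps)["load"])  # 60
```

Real orchestrators add scheduling, retries, and observability on top of exactly this dependency-resolution core.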


Engineer the Future of AI-Powered Recruitment Applications
About Us:
At PyjamaHR, we're creating recruitment software so intuitive you could use it in your pajamas. We integrate cutting-edge LLMs and Generative AI technologies into practical, powerful applications that are transforming how 525+ companies hire talent. While others are still talking about AI potential, we're shipping AI-powered features that deliver real value. As we scale from Bangalore to the US market, we need exceptional full-stack talent who can turn advanced AI capabilities into elegant, user-friendly experiences.
The Role:
As a Senior Full-Stack Engineer at PyjamaHR, you'll build the applications and interfaces that bring our AI capabilities to life. This isn't about training models—it's about implementing, integrating, and optimizing Generative AI within robust full-stack applications. You'll work across our entire technology ecosystem to create seamless experiences that leverage AI to solve real recruitment challenges. Your code will directly impact how companies discover, evaluate, and hire talent.
What You'll Do:
- Implement AI-Powered Features - Integrate Generative AI capabilities into practical, user-facing applications
- Build End-to-End Solutions - Develop both frontend experiences and backend services that leverage AI capabilities
- Create Scalable Architectures - Design systems that grow with our rapidly expanding user base
- Optimize Performance - Ensure our applications remain responsive even when processing complex AI operations
- Enhance User Experiences - Translate advanced AI capabilities into intuitive, accessible interfaces
- Drive Technical Excellence - Set standards for code quality and engineering practices across the team
- Collaborate Across Functions - Work with product, design, and data teams to deliver cohesive solutions
- Solve Complex Challenges - Apply your technical creativity to the unique problems of AI-powered recruitment
Who We're Looking For:
- 3-5 years of full-stack development experience building production applications
- Experience implementing and integrating Generative AI into web applications
- Expertise with either Django/React or Node.js/React technology stacks
- Strong experience with cloud platforms (Azure preferred, AWS acceptable)
- Proven ability to build performant, scalable web applications
- Excellence in modern JavaScript/TypeScript and frontend development
- Solid understanding of software architecture principles
- Product-minded approach to engineering decisions
- Ability to thrive in fast-paced environments with evolving priorities
- Bachelor's or higher in Computer Science or equivalent practical experience
The Rewards:
- Salary range of ₹30-40 LPA for exceptional talent (with room to negotiate for truly outstanding candidates)
- Equity package with significant growth potential
- Minimal bureaucracy and maximum impact
- Autonomy to make important technical decisions
- The opportunity to shape an industry-leading product
- Collaborative, innovation-focused work environment
Location: Bangalore (Koramangala 8th Block)
This role is perfect for engineers who want to apply their full-stack expertise to implement cutting-edge AI technologies in real-world applications. If you're excited about building the interfaces and systems that make advanced AI accessible and valuable to users, we want to talk to you.

Role & Responsibilities
- work with peers in Product, QA, and other Engineering departments;
- coach and mentor team members;
- cautiously drive adoption of new technologies and processes;
- preserve our engineering values of quality, scalability, and maintainability;
- “see around corners” — identify blind spots and prioritize work across teams;
- work with international teams to ensure successful product development and delivery; and
- own the overall architecture and systems engineering for your products.
Ideal Candidate
- exceptional communication and organizational skills;
- experience recruiting for and building software engineering teams;
- the ability to inspire and motivate team members;
- a strong technical background in software engineering and architecture, and experience with modern programming languages and frameworks;
- 6+ years of software development experience; and
- 4+ years of experience managing software teams (as a Team Lead or a Manager) or similar experience as an individual contributor.

Job Title: Cloud Operations Engineer (SRE)
Location: Bangalore, India
Experience: 5 – 12 years
Notice Period: Preferred short joiners
About Us:
At OUR CUSTOMER, we are on a mission to empower broadband operators to deliver an exceptional connected home experience for their subscribers. As the most widely deployed provider of Smart Wi-Fi solutions, our technologies enhance user experiences in over 35 million homes worldwide.
Our portfolio includes Smart Wi-Fi software, a cloud-based experience management platform, and advanced data-driven solutions. We provide customized engineering and testing services to help broadband operators deliver high-quality, seamless connectivity.
Join us in building resilient, scalable, and secure cloud solutions that drive the future of broadband!
Job Overview:
We are seeking a highly skilled Cloud Operations Engineer (SRE) with expertise in AWS cloud infrastructure, automation, monitoring, and performance optimization. You will be responsible for ensuring the reliability, scalability, security, and efficiency of our cloud-based applications while driving automation and operational excellence.
This role requires deep knowledge of AWS services, Infrastructure as Code (IaC), monitoring, troubleshooting, and DevOps best practices.
Key Responsibilities:
1. AWS Infrastructure Management:
- Design, implement, and manage scalable, secure, and high-performance AWS cloud environments.
- Optimize cost, performance, and reliability across cloud services.
2. Site Reliability Engineering (SRE):
- Ensure high availability and performance of cloud environments through monitoring, automation, and incident response.
- Implement self-healing and fault-tolerant architectures.
3. Monitoring & Operations:
- Deploy and manage monitoring and logging tools (AWS CloudWatch, Datadog, ELK, Prometheus, Grafana).
- Define SLIs, SLOs, and SLAs for cloud services.
- Proactively analyze performance trends and prevent outages.
4. Automation & Infrastructure as Code (IaC):
- Automate cloud provisioning and management using Terraform, AWS CloudFormation, or Ansible.
- Build and maintain CI/CD pipelines for seamless deployments.
- Use Python, Bash, or other scripting languages to automate operational tasks.
5. Incident Management & Troubleshooting:
- Respond to incidents, outages, and performance issues in a timely manner.
- Conduct root cause analysis (RCA) and implement preventive measures.
- Document incidents and create post-mortem reports.
6. Security & Compliance:
- Implement AWS security best practices, including IAM policies, network security, and encryption.
- Ensure compliance with industry regulations (GDPR, HIPAA, etc.).
- Regularly audit and enhance cloud security posture.
7. Backup, Disaster Recovery & Capacity Planning:
- Develop and manage backup strategies, disaster recovery plans, and high-availability architectures.
- Plan for future capacity needs, scaling resources based on demand.
8. Collaboration & Documentation:
- Work closely with Development, QA, Security, and Product teams to streamline operations.
- Create and maintain detailed technical documentation, architecture diagrams, and operational runbooks.
Qualifications & Skills:
Mandatory:
✅ Education: Bachelor’s degree in Computer Science, Information Technology, or related fields.
✅ Experience: 5+ years as a Cloud SRE, Operations Engineer, or DevOps Engineer with a focus on AWS services.
✅ Cloud Expertise: Strong hands-on experience with AWS services (EC2, S3, RDS, Lambda, ECS, EKS, VPC, Route 53, IAM, etc.).
✅ Infrastructure as Code (IaC): Proficiency with Terraform, AWS CloudFormation, or Ansible.
✅ Linux & Automation: Strong knowledge of Linux/Unix system administration, shell scripting, and automation using Python or Bash.
✅ Monitoring & Logging: Experience with CloudWatch, Datadog, Prometheus, ELK stack, Grafana.
✅ CI/CD & DevOps: Hands-on experience with Jenkins, GitLab CI/CD, CircleCI, or equivalent tools.
✅ Networking: Solid understanding of DNS, Load Balancing, VPNs, Firewalls, and Network Security.
✅ Problem-Solving: Strong analytical skills to troubleshoot and resolve cloud infrastructure issues.
✅ Communication: Excellent verbal and written communication skills to collaborate across teams.
Preferred:
⭐ AWS Certification (AWS Certified Solutions Architect, AWS Certified DevOps Engineer).
⭐ Experience with containerization (Docker, Kubernetes, EKS, Fargate).
⭐ Knowledge of security best practices, compliance (GDPR, HIPAA, etc.).
⭐ Familiarity with GitOps, service mesh technologies, and serverless architectures.
⭐ Experience working in Agile & DevOps environments.
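Much of the Python automation this role describes comes down to making operational tasks resilient to transient failures. Below is a minimal sketch of a retry-with-exponential-backoff decorator, a common SRE self-healing pattern; the health-check function and delay values are hypothetical.

```python
import functools
import time


def retry(attempts=3, base_delay=0.1):
    """Retry a flaky call with exponential backoff between attempts."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            for i in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if i == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * 2 ** i)  # 0.1s, 0.2s, 0.4s, ...
        return inner
    return wrap


calls = {"n": 0}

@retry(attempts=3, base_delay=0.01)
def flaky_health_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "healthy"


print(flaky_health_check())  # succeeds on the third attempt
```

In production this logic usually lives in a library (or the platform itself), with jitter added to the delays to avoid thundering herds.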

Research Engineers
Location: Bangalore, Hyderabad
Employment Type: Full-time
Experience Level: 5 to 12 Years
About the Role:
We are looking for an innovative and highly skilled Senior Software Engineer to join our Innovation Group and drive the development of cutting-edge software solutions for Wi-Fi, Fixed Wireless Access (FWA), and AI-driven technologies. You will play a key role in bringing up new software products, developing proof-of-concept (PoC) prototypes, and experimenting with emerging technologies aligned with our Technology Roadmap.
The ideal candidate will have hands-on experience with 3GPP standards (4G & 5G) and their application in FWA networks. A strong background in Linux system/kernel programming, embedded systems, and IP networking is essential, along with a passion for open-source development and innovative problem-solving.
Key Responsibilities:
Develop software for testing, prototyping, and validating new features in FWA networks.
Design and implement end-to-end software solutions, bringing innovative ideas to life.
Work with Linux and cloud-based software architectures to enhance networking solutions.
Conduct experiments and performance evaluations for new technologies in wireless communications.
Collaborate with cross-functional teams to align new developments with the Technology Roadmap.
Work with open-source communities to contribute and leverage existing tools.
Participate in IPR (Intellectual Property Rights) and patent documentation within the networking domain.
Author whitepapers and technical publications to showcase research and innovation.
Required Qualifications & Skills:
Proficiency in C/C++ programming for system and network applications.
Strong experience in Python and shell scripting.
Expertise in Linux system programming, network programming, and kernel module development.
Proficiency in Embedded Linux build systems, toolchains, and cross-compilation environments.
Deep understanding of IP networking concepts and the Linux network stack.
Strong knowledge of 3GPP standards (LTE, 5G) and Wi-Fi standards (IEEE 802.11g/n/ac/ax/be).
Familiarity with Git and open-source development practices.
Ability to work independently and within a collaborative team environment.
Preferred Qualifications:
M.Tech / Ph.D. in Computer Engineering, Electronic Engineering, or a related field.
Hands-on experience in patent filing, IPR documentation, and technical publications.
Prior experience in cloud-based software development.


Level of skills and experience:
5 years of hands-on experience with Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and PyTorch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills, including the ability to communicate with non-technical stakeholders.

Job Summary:
We are seeking a skilled Senior Tableau Developer to join our data team. In this role, you will design and build interactive dashboards, collaborate with data teams to deliver impactful insights, and optimize data pipelines using Airflow. If you are passionate about data visualization, process automation, and driving business decisions through analytics, we want to hear from you.
Key Responsibilities:
- Develop and maintain dynamic Tableau dashboards and visualizations to provide actionable business insights.
- Partner with data teams to gather reporting requirements and translate them into effective data solutions.
- Ensure data accuracy by integrating various data sources and optimizing data pipelines.
- Utilize Airflow for task orchestration, workflow scheduling, and monitoring.
- Enhance dashboard performance by streamlining data processing and improving query efficiency.
Requirements:
- 5+ years of hands-on experience in Tableau development.
- Proficiency in Airflow for building and automating data pipelines.
- Strong skills in data transformation, ETL processes, and data modeling.
- Solid understanding of SQL and database management.
- Excellent problem-solving skills and the ability to work collaboratively across teams.
Nice to Have:
- Experience with cloud platforms like AWS, GCP, or Azure.
- Familiarity with programming languages such as Python or R.
Why Join Us?
- Work on impactful data projects with a talented and collaborative team.
- Opportunity to innovate and shape data visualization strategies.
- Competitive compensation and professional growth opportunities.

Role & Responsibilities
Design, develop, and implement high-quality software solutions for payment processing.
Maintain a regular release cadence and manage the product backlog.
Ensure timely and lossless communication across teams.
Uphold engineering values and best practices.
Collaborate with international teams to ensure successful product development and delivery.
Ideal Candidate
A strong technical background in software engineering and architecture, with experience in modern programming languages.
Ability to work independently with very little direction, taking full ownership of projects.
Ability to identify blind spots, anticipate challenges, and prioritize work effectively.
Exceptional communication and organizational skills.
A Bachelor's degree in Computer Science, Engineering, or equivalent experience.
8+ years of relevant experience preferred
Proficiency with Python, Kafka, Kubernetes, and AWS.
Experience with Distributed Task Queues such as Celery and RabbitMQ is preferred.
Experience with RDBMS/SQL is also preferred.
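Distributed task queues such as Celery with RabbitMQ generalize the classic producer/worker pattern. A minimal in-process sketch of that pattern using only the standard library (the squaring "work" is a hypothetical stand-in for real payment-processing tasks):

```python
import queue
import threading

task_q = queue.Queue()
results = []


def worker():
    """Consume tasks until a None sentinel arrives."""
    while True:
        item = task_q.get()
        if item is None:  # sentinel: shut down this worker
            task_q.task_done()
            break
        results.append(item * item)  # hypothetical stand-in for real work
        task_q.task_done()


threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for n in range(5):           # producer side: enqueue tasks
    task_q.put(n)
for _ in threads:            # one sentinel per worker
    task_q.put(None)
task_q.join()                # block until every task is acknowledged
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9, 16]
```

Celery replaces the in-memory `Queue` with a broker like RabbitMQ, so producers and workers can live in different processes or on different machines.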
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.

About Us
Binocs.co empowers institutional lenders with next-generation loan management software, streamlining the entire loan lifecycle and facilitating seamless interaction among stakeholders.
Team: Binocs.co is led by a passionate team with extensive experience in financial technology, lending, AI, and software development.
Investors: Our journey is backed by renowned investors who share our vision for transforming the loan management landscape: Beenext, Arkam Ventures, Accel, Saison Capital, Blume Ventures, Premji Invest, and Better Capital.
What we're looking for
We seek a motivated, talented, and intelligent individual who shares our vision of being a changemaker. We value individuals who are dissatisfied with the status quo, strive to make improvements, and envision making significant contributions. We look for those who embrace challenges and dedicate themselves to solutions. We seek individuals who push for data-driven decisions, are unconstrained by titles, and value collaboration. We are building a fast-paced team to shape various business and technology aspects.
Responsibilities
- Be a part of the initial team to define and set up a best-in-class digital platform for the Private Credit industry, and take full ownership of the components of the digital platform
- You will build robust and scalable web-based applications and think in terms of platforms and reuse.
- Drive and actively contribute to High-Level Designs (HLDs) and Low-Level Designs (LLDs).
- Collaborate with frontend developers, product managers, and other stakeholders to understand requirements and translate them into technical specifications.
- Mentor team members in adopting effective coding practices. Conduct comprehensive code reviews, focusing on both functional and non-functional aspects.
- Ensure the security, performance, and reliability of backend systems through proper testing, monitoring, and optimization.
- Participate in code reviews, sprint planning, and agile development processes to maintain code quality and project timelines.
- Simply, be an owner of the platform and do whatever it takes to get the required output for customers
- Be curious about product problems and have an open mind to dive into new domains eg: gen-AI.
- Stay up-to-date with the latest development trends, tools, and technologies.
Qualifications
- Experience in backend development.
- Proficiency in at least one backend programming language (e.g., Python, Golang, Node.js, Java) and its tech stack to write maintainable, scalable, unit-tested code.
- Good understanding of relational databases (e.g., MySQL, PostgreSQL) and NoSQL stores (e.g., MongoDB, Elasticsearch).
- Solid understanding of RESTful API design principles and best practices.
- Experience with multi-threading and concurrency programming
- Extensive experience in object-oriented design, knowledge of design patterns, and a passion for designing intuitive module- and class-level interfaces.
- Experience with cloud computing platforms and services (e.g., AWS, Azure, Google Cloud Platform).
- Knowledge of Test Driven Development
Good to have
- Experience with microservices architecture
- Knowledge of serverless computing and event-driven architectures (e.g., AWS Lambda, Azure Functions)
- Understanding of DevOps practices and tools for continuous integration and deployment (CI/CD).
- Contributions to open-source projects or active participation in developer communities.
- Experience working with LLMs and AI technologies.
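For the multi-threading and concurrency requirement above, here is a minimal sketch using the standard library's `concurrent.futures` to fan out I/O-bound calls; `fetch` is a hypothetical stand-in for a database or HTTP request.

```python
from concurrent.futures import ThreadPoolExecutor


def fetch(order_id):
    """Hypothetical stand-in for an I/O-bound call (DB query, HTTP request)."""
    return order_id * 2


# Run up to 4 fetches concurrently; map() preserves input order in its output.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, range(5)))

print(results)  # [0, 2, 4, 6, 8]
```

Threads suit I/O-bound work in Python; for CPU-bound work, `ProcessPoolExecutor` has the same interface but sidesteps the GIL.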

Required Skills and Qualifications:
- Proficiency in AWS Cloud services
- Strong experience with SQL for data manipulation and analysis.
- Hands-on programming experience in Python or PySpark.
- Working knowledge of Snowflake, including data modeling, performance tuning, and security configurations.
- Familiarity with CI/CD pipelines and version control (e.g., Git) is a plus.
- Excellent problem-solving and communication skills.
Note : one face-to-face (F2F) round is mandatory, and as per the process, you will need to visit the office for this.


Job Title : Principal Software Architect – AI/ML & Product Innovation
Location : Bangalore, Karnataka & Trichy, Tamil Nadu, India (No remote work available)
Company : Zybisys Consulting Services LLP
Reports To : CEO
Job Type : Full-Time
Experience Required: Minimum of 10+ years in software development, with at least 5 years in software architect role.
About Us:
At Zybisys, we’re not just another cloud hosting and software development company—we’re all about pushing boundaries in the FinTech world. We don’t just solve problems; we rethink how businesses operate, making things smoother, smarter, and more efficient. Our tech helps FinTech companies stay ahead in the digital game with confidence and flexibility.
Innovation is in our DNA, and we’re always on the lookout for bold thinkers who can tackle big challenges with creativity and precision. At Zybisys, we believe in growing together, nurturing talent, and building a future where technology transforms the way FinTech works.
Role Overview:
We're looking for a Principal Software Architect who’s passionate about AI/ML and product innovation. In this role, you’ll be at the forefront of designing and building smart, AI-driven solutions that tackle complex business challenges. You’ll work closely with teams across product, development, and research to shape our tech strategy and ensure everything aligns with our next-gen platform. If you love pushing the boundaries of technology and driving real innovation, this is the role for you!
Key Responsibilities:
- Architect & Design: Architect, design, and develop large-scale distributed cloud services and solutions with a focus on AI/ML, high availability, scalability, and robustness. Design scalable and efficient solutions, considering factors such as performance, security, and cost-effectiveness.
- AI/ML Integration: Spearhead the application of AI/ML in solving business problems at scale. Stay at the forefront of AI/ML technologies, trends, and industry standards to provide cutting-edge solutions
- Product Roadmap : Work closely with Product Management to set the technical product roadmap, definition, and direction. Analyze the current technology landscape and identify opportunities for improvement and innovation.
- Technology Evaluation: Evaluate different programming languages and frameworks to determine the most suitable ones for project requirements
- Component Design: Develop and oversee the creation of modular software components that can be reused and adapted across different projects.
- UI/UX Collaboration: Work closely with design teams to craft intuitive and engaging user interfaces and experiences.
- Project Oversight: Oversee projects from initiation to completion, creating project plans, defining objectives, and managing resources effectively
- Team Mentorship: Guide and inspire a team of engineers and designers, fostering a culture of continuous learning and improvement.
- Innovation & Ideation: Champion the generation of new ideas for product features, staying ahead of industry trends and customer needs.
- Research & Development: Leading initiatives that explore new technologies or methodologies.
- Strategic Planning: Participating in high-level decisions that shape the direction of products and services.
- Industry Influence: Representing the company in industry forums or partnerships with academic institutions.
- Open-Source Community Handling: Manage and contribute to the open-source community, fostering collaboration, sharing knowledge, and ensuring adherence to open-source best practices.
Qualifications:
- Experience: Minimum of 10 years in software development, with at least 5 years in a scalable software architect role.
- Technical Expertise: Proficient in software architecture, AI/ML technologies, and UI/UX principles.
- Leadership Skills: Proven track record of mentoring teams and driving cross-functional collaboration.
- Innovative Mindset: Demonstrated ability to think creatively and introduce groundbreaking ideas.
- Communication: Excellent verbal and written skills, with the ability to engage effectively with both technical and non-technical stakeholders.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
What We Offer:
- A dynamic work environment where your ideas truly matter.
- Opportunities to attend and speak at industry conferences.
- Collaboration with cutting-edge technology and tools.
- A culture that values innovation, autonomy, and personal growth.
Responsibilities:
- Spearhead the building of the new transformation backend platform from 0 to 1.
- Design the software for all its stakeholders - its end-users, developers, technical and product support, and DevOps.
- Evaluate technical/architectural options and tradeoffs.
- Implement proof-of-concepts. Hands-on.
- Create a solutions/design pattern library for similar problems and advocate for them.
- Provide technical leadership and guide development teams.
- Set up best practices for design, coding, testing, security, monitoring, and release management.
- Interface with cloud, and customers' technical teams.
- Measuring and constantly improving developer productivity.
- Work with product managers to build application extensibility into design.
- Occasional project management when a project is more technically focused.
- Occasional people management in the absence of other senior leaders.
Requirements:
- Experience in multiple backend stacks such as Python, Golang, and Node.js.
- Proven experience with full development life cycle for large-scale software products.
- Clear communication, decision-making, understanding, and explaining trade-offs.
- Engineering best practices - code quality, testability, security, and release management.
- Good knowledge of performance, scalability, software architecture, and networking.
- Capacity to think in the abstract while also obsessing over details.
- Experience designing microservices architecture.
- Strong bent on engineering culture and focused on improving the developer experience.
- Self-managed and motivated.
- Capacity to break down complex problems and work on abstract problems.

Job Title : Senior Python Scripting Engineer (Testing)
Experience : 7 to 8 Years
Location : Bangalore
Employment Type : Full-Time
Job Overview :
- We are looking for a Senior Python Scripting Engineer with 4 to 5 Years of advanced Python scripting experience and 2 years of testing experience.
- The ideal candidate will be responsible for developing robust automation scripts, ensuring software quality, and collaborating with cross-functional teams to enhance product reliability.
Key Responsibilities :
- Develop and maintain advanced Python scripts for automation and software development.
- Design, implement, and execute test cases, automation scripts, and testing frameworks to ensure software quality.
- Collaborate with developers and QA teams to identify and resolve software defects.
- Optimize existing scripts and test cases to improve efficiency and accuracy.
- Work on test automation, debugging, and continuous integration pipelines.
- Maintain clear documentation for scripts, test cases, and processes.
Required Skills & Qualifications :
- 7 to 8 Years of overall experience in software development/testing.
- 4 to 5 Years of strong hands-on experience in Python scripting (Advanced Level).
- 2 Years of experience in Software Testing (manual and automation).
- Experience with test automation frameworks like PyTest, Selenium, or Robot Framework.
- Proficiency in debugging, troubleshooting, and optimizing scripts.
- Knowledge of CI/CD pipelines, version control systems (Git), and Agile methodologies.
- Strong problem-solving skills and ability to work in a dynamic environment.
Preferred Skills :
- Experience with cloud platforms (AWS, Azure, or GCP).
- Exposure to performance testing and security testing.
- Familiarity with DevOps practices and containerization (Docker, Kubernetes).
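To connect the scripting and testing sides of this role, here is a minimal example of PyTest-style automation tests around a small helper; `normalize_version` is a hypothetical utility, not from any specific codebase.

```python
def normalize_version(raw):
    """Parse a version string like 'v1.2.3' into a comparable tuple of ints."""
    return tuple(int(part) for part in raw.lstrip("v").split("."))


# PyTest-style test functions; `pytest` would discover these by name,
# but they are plain functions and can also be called directly.
def test_strips_prefix():
    assert normalize_version("v1.2.3") == (1, 2, 3)


def test_orders_numerically():
    # Plain string comparison would wrongly sort "1.9.0" after "1.10.0";
    # tuples of ints compare numerically, element by element.
    assert normalize_version("1.9.0") < normalize_version("1.10.0")


test_strips_prefix()
test_orders_numerically()
print("all tests passed")
```

The same shape scales up: frameworks like PyTest add fixtures, parametrization, and reporting, but the core remains small functions with plain `assert` statements.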

Position: UiPath Developer
Experience: 3-7 years
Key Responsibilities:
1. Develop and implement automation solutions using UiPath.
2. Design, develop, test, and deploy RPA bots for process automation.
3. Write and optimize SQL queries, including joins, to manage and manipulate data effectively.
4. Develop scripts using Python, VB, .NET, or JavaScript to enhance automation capabilities.
5. Work with business stakeholders to analyze and optimize automation workflows.
6. Troubleshoot and resolve issues in RPA processes and scripts.
7. Ensure adherence to best practices in automation development and deployment.
Required Skills & Experience:
1. 3-7 years of experience in RPA development with UiPath.
2. Strong expertise in SQL, including writing complex queries and joins.
3. Hands-on experience with at least one scripting language: Python, VB, .NET, or JavaScript.
4. Understanding of RPA best practices, exception handling, and performance optimization.
5. Experience integrating UiPath with databases and other applications.
6. Strong problem-solving and analytical skills.
Preferred Qualifications:
1. UiPath certification is a plus.
2. Experience working in Agile environments.
3. Knowledge of APIs and web automation (AA).
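The SQL skills called out above (complex queries and joins) can be sketched with the standard library's `sqlite3` module; the `customers`/`invoices` tables and values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE invoices  (id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO invoices  VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# Inner join plus aggregation: total invoiced amount per customer.
rows = conn.execute("""
    SELECT c.name, SUM(i.amount)
    FROM customers AS c
    JOIN invoices  AS i ON i.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 350.0), ('Globex', 75.0)]
```

Swapping `JOIN` for `LEFT JOIN` would also surface customers with no invoices, which is the usual next step when reconciling automation outputs against source data.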

We are seeking a highly skilled and experienced Offshore Data Engineer . The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!

DevOps Test & Build Engineer
Shift timing: General Shift
Relevant Experience required: 5+ years
Education Required: Bachelor’s / Masters / PhD: Bachelor’s
Work Mode: EIC Office (5 days)
Must have skills: Python, Jenkins, Groovy scripting
Required Technologies
- Strong analytical abilities to analyze the effectiveness of the test and build environment and make the appropriate improvements
- Effective communication and leadership skills to collaborate with and support engineering
- Experience with managing Windows, Linux & OS X systems
- Strong understanding of CI/CD principles
- Strong coding knowledge in at least one programming language (Python, Java, Perl, or Groovy)
- Hands-on experience with Jenkins master, plugins and node management
- Working knowledge on Docker and Kubernetes (CLI)
- Proficiency with scripting languages: Bash, PowerShell, Python, or Groovy
- Familiarity with build systems: Make, CMake, Conan
- Familiarity with the git CLI
- Basic understanding of embedded software, C/C++ language
- Ability to quickly adapt to new technologies and complete assigned tasks within defined timelines
Preferred
- Familiarity with Artifactory (Conan or Docker registry)
- Knowledge of ElectricFlow
- CI/CD with GitLab CI or GitHub Actions
- Hands-on experience with Nagios and Grafana
- Exposure to Ansible or similar systems
- Experience integrating Jira with CI/CD pipelines
- General knowledge of AWS tools and technologies
- Fundamental understanding of embedded devices and their integration with CI/CD
- Exposure to Agile methodologies and CI/CD SDLC best practices
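A common glue task in this kind of test-and-build role is emitting results in a format the CI server can consume. Here is a minimal stdlib-only sketch that writes a JUnit-style XML report of the sort Jenkins picks up; the suite and test names are invented:

```python
import xml.etree.ElementTree as ET

def junit_report(suite_name, results):
    """Build a JUnit-style XML string from (test_name, error_or_None) pairs."""
    suite = ET.Element("testsuite", name=suite_name,
                       tests=str(len(results)),
                       failures=str(sum(1 for _, err in results if err)))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            # A <failure> child marks the case failed in Jenkins' test view
            ET.SubElement(case, "failure", message=err)
    return ET.tostring(suite, encoding="unicode")

# Hypothetical smoke-test results from a build node
xml_out = junit_report("smoke", [("boot", None), ("flash", "timeout")])
print(xml_out)
```

In practice the string would be written to a file matched by the Jenkins JUnit plugin's report pattern; this sketch only covers the report shape itself.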

Senior DevOps Engineer
Shift timing: Rotational shift (15 days in the same shift)
Relevant Experience: 7 Years relevant experience in DevOps
Education Required: Bachelor’s / Master’s / PhD – Any Graduate
Must have skills:
Bash shell scripting, CircleCI pipelines, Python, Docker, Kubernetes, Terraform, GitHub, PostgreSQL, Datadog, Jira
Good to have skills:
AWS, serverless architecture, static-analysis and formatting tools (flake8, black, mypy, isort), Argo CD
Candidate Roles and Responsibilities
Experience: 8+ years in DevOps, with a strong focus on automation, cloud infrastructure, and CI/CD practices.
Terraform: Advanced knowledge of Terraform, with experience in writing, testing, and deploying modules.
AWS: Extensive experience with AWS services (EC2, S3, RDS, Lambda, VPC, etc.) and best practices in cloud architecture.
Docker & Kubernetes: Proven experience in containerization with Docker and orchestration with Kubernetes in production environments.
CI/CD: Strong understanding of CI/CD processes, with hands-on experience in CircleCI or similar tools.
Scripting: Proficient in Python and Linux Shell scripting for automation and process improvement.
Monitoring & Logging: Experience with Datadog or similar tools for monitoring and alerting in large-scale environments.
Version Control: Proficient with Git, including branching, merging, and collaborative workflows.
Configuration Management: Experience with Kustomize or similar tools for managing Kubernetes configurations.
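The core idea behind Kustomize-style configuration management is layering an environment overlay onto a base manifest. A minimal sketch of that merge logic in plain Python (the Kubernetes-style values are invented; real Kustomize also handles lists and strategic-merge semantics):

```python
def deep_merge(base, overlay):
    """Recursively merge overlay into base; overlay values win on conflict."""
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Base manifest shared by all environments
base = {"spec": {"replicas": 1,
                 "template": {"spec": {"containers": [{"image": "app:latest"}]}}}}

# Production overlay: only override what differs
prod_overlay = {"spec": {"replicas": 3}}

merged = deep_merge(base, prod_overlay)
print(merged["spec"]["replicas"])  # 3
```

The untouched parts of the base (here the container image) survive the merge, which is why overlays stay small and per-environment drift stays visible in review.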