50+ DevOps Jobs in India

POSITION: Sr. DevOps Engineer
Job Type: Work From Office (5 days)
Location: Sector 16A, Film City, Noida / Mumbai
Relevant Experience: Minimum 4 years
Salary: Competitive
Education: B.Tech
About the Company: Devnagri is an AI company dedicated to personalizing business communication and making it hyper-local to attract non-English speakers. We address the significant gap in internet content availability for most of the world’s population who do not speak English. For more details, visit www.devnagri.com
We seek a highly skilled and experienced Senior DevOps Engineer to join our dynamic team. As a key member of our technology department, you will play a crucial role in designing and implementing scalable, efficient and robust infrastructure solutions with a strong focus on DevOps automation and best practices.
Roles and Responsibilities
- Design, plan, and implement scalable, reliable, secure, and robust infrastructure architectures
- Manage and optimize cloud-based infrastructure components
- Architect and implement containerization technologies such as Docker and Kubernetes
- Implement CI/CD pipelines to automate the build, test, and deployment processes
- Design and implement effective monitoring and logging solutions for applications and infrastructure; establish metrics and alerts for proactive issue identification and resolution (see the monitoring sketch after this list)
- Work closely with cross-functional teams to troubleshoot and resolve issues.
- Implement and enforce security best practices across infrastructure components
- Establish and enforce configuration standards across various environments.
- Implement and manage infrastructure using Infrastructure as Code principles
- Leverage tools like Terraform for provisioning and managing resources.
- Stay abreast of industry trends and emerging technologies.
- Evaluate and recommend new tools and technologies to enhance infrastructure and operations
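As an illustration of the monitoring and alerting work described above, here is a minimal, hypothetical sketch of exposing custom application metrics for Prometheus to scrape, using the prometheus_client Python library (metric names and the port are illustrative assumptions, not details of this role):

```python
# Minimal sketch: expose custom metrics on :9100/metrics for a Prometheus scrape job.
# Metric names, labels, and the port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Requests handled", ["status"])
QUEUE_DEPTH = Gauge("app_queue_depth", "Background queue depth")

if __name__ == "__main__":
    start_http_server(9100)                    # Prometheus scrapes http://host:9100/metrics
    while True:
        REQUESTS.labels(status="ok").inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real queue measurement
        time.sleep(5)
```

In Grafana, metrics like these would typically back dashboards and alert rules for proactive issue identification.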
Must have Skills:
Cloud ( AWS & GCP ), Redis, MongoDB, MySQL, Docker, bash scripting, Jenkins, Prometheus, Grafana, ELK Stack, Apache, Linux
Good to have Skills:
Kubernetes, Collaboration and Communication, Problem Solving, IAM, WAF, SAST/DAST
Interview Process:
Screening and shortlisting >> 3 technical rounds >> 1 managerial round >> HR closure
Please apply with a short success story about your journey into DevOps and tech.
Cheers
For more details, visit our website- https://www.devnagri.com

Azure Data Engineer
Primary Responsibilities -
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure
- Create data models for analytics purposes
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies
- Use Azure Data Factory and Databricks to assemble large, complex data sets
- Implement data validation and cleansing procedures to ensure the quality, integrity, and reliability of the data.
- Ensure data security and compliance
- Collaborate with data engineers and other stakeholders to understand requirements and translate them into scalable and reliable data platform architectures
Required skills:
- Blend of technical expertise, analytical problem-solving, and collaboration with cross-functional teams
- Azure DevOps
- Apache Spark, Python
- SQL proficiency
- Azure Databricks knowledge
- Big data technologies
The Data Engineers should be well versed in coding, Spark Core, and data ingestion using Azure, with solid communication skills and core Azure DE and coding skills (PySpark, Python, and SQL).
Of the 7 open positions, 5 require a minimum of 5 years of relevant Data Engineering experience.
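For context on the kind of ingestion and cleansing work described above, here is a minimal, illustrative PySpark sketch; the storage account, container, and column names are hypothetical and not taken from this posting:

```python
# Illustrative only: read raw CSVs from ADLS Gen2, apply basic validation/cleansing,
# and write curated Parquet. Paths and columns are hypothetical; storage authentication
# configuration (service principal / managed identity) is omitted for brevity.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingestion").getOrCreate()

raw = spark.read.option("header", "true").csv(
    "abfss://raw@examplestorageacct.dfs.core.windows.net/orders/*.csv"
)

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)      # reject malformed amounts
)

cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@examplestorageacct.dfs.core.windows.net/orders/"
)
```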
About AiSensy
AiSensy is a WhatsApp based Marketing & Engagement platform helping businesses like Skullcandy, Vivo, Rentomojo, Physicswallah, Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy every year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High impact: businesses drive 25-80% of their revenue using the AiSensy platform
- Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging (see the sketch below for a flavour of this).
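As a flavour of the internal tooling mentioned in the scripting bullets above, here is a small, hypothetical Python sketch that reports CloudWatch alarms currently firing; it assumes boto3 and configured AWS credentials, and the region is illustrative:

```python
# Hypothetical health-check helper: list CloudWatch alarms currently in ALARM state.
import boto3

def alarms_firing(region: str = "ap-south-1") -> list[str]:
    cloudwatch = boto3.client("cloudwatch", region_name=region)
    resp = cloudwatch.describe_alarms(StateValue="ALARM")
    return [alarm["AlarmName"] for alarm in resp["MetricAlarms"]]

if __name__ == "__main__":
    firing = alarms_firing()
    if firing:
        print(f"{len(firing)} alarm(s) firing: {', '.join(firing)}")
    else:
        print("All monitored alarms OK")
```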
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of database servers
➕ Experience with AWS SageMaker and AWS Bedrock


About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to haves:
○ Prior experience in working with startups or product-based companies
○ Experience mentoring tech leads and helping shape engineering culture
○ Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?:
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (Up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
Key Responsibilities:
- Lead the development and delivery of high-quality, scalable web/mobile applications.
- Design, build, manage, and mentor a team of developers across backend, frontend, and DevOps.
- Collaborate with cross-functional teams including Product, Design, and QA to ship fast and effectively.
- Integrate third-party APIs and financial systems (e.g., payment gateways, fraud detection, etc.).
- Troubleshoot production issues, optimize performance, and implement robust logging & monitoring.
- Define and enforce best practices in planning, coding, architecture, and agile development.
- Identify and implement the right tools, frameworks, and technologies.
- Own the system architecture and make key decisions on performance, security, and scalability.
- Continuously monitor tech performance and drive improvements.
Requirements:
- 8+ years of software development experience, with at least 2+ years in a leadership role.
- Proven track record in managing Development teams and delivering consumer-facing products.
- Strong knowledge of backend technologies (Node.js, Java, Python, etc.) and frontend frameworks (React, Angular, etc.).
- Experience in cloud infrastructure (AWS/GCP), CI/CD pipelines, and containerization (Docker, Kubernetes).
- Deep understanding of system design, REST APIs, microservices architecture, and database management.
- Excellent communication and stakeholder management skills.
- Experience with any e-commerce web or mobile application, or any payment transaction environment, will be a plus.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion

We are looking for a talented full-stack software engineer to join our small and extremely talented product team. We love to build amazing products fast and with a high standard of code and UX, always on the cutting edge of new technologies.
Tasks
- Design and build features end-to-end. You'll be working on the frontend, backend, and anything in-between, whatever it takes to solve problems and delight users.
- Build beautiful and usable UIs with our modern toolset of React (NextJS, ES11, TypeScript) and TailwindCSS.
- Continue developing our distributed backend using Node (NestJS + TypeScript) and Rust. Technologies also include PostgreSQL, Redis, and RabbitMQ.
- Stitch many different services and APIs together, even if you have not worked with them before.
- Build solutions that take scaling and growth into consideration while balancing the time to launch.
- Engineer solutions to solve existing tech debt and write maintainable, sound and safe code.
Requirements
- 3+ years of experience as a Software Engineer working with TypeScript and React, plus a serious interest in (or, even better, experience with) Rust. However, we care much more about your general engineering skill than your knowledge of a particular language or framework.
- The ability to put yourself in the users' shoes and craft intuitive, high-quality web UI experiences
- A high bar for code quality and robustness for long-term expansion and use
- High personal initiative to learn, improve, and excel
- A Can-Do, entrepreneurial mentality with enjoyment for being a generalist
- DevOps: Experience with CI/CD pipelines, Docker, Kubernetes, and other DevOps tools.
- Experience with distributed systems, scaling applications, and/or microservices.
- Fluent in English (German is a plus)
Plus:
- Hands-on experience with Rust
- Experience with functional programming
- Previously worked in an early-stage startup and experienced scaling of engineering teams
Overview
As an engineer in the Service Operations division, you will be responsible for the day-to-day management of the systems and services that power client products. Working with your team, you will ensure daily tasks and activities are successfully completed and where necessary, use standard operating procedures and knowledge to resolve any faults/errors encountered.
Job Description
Key Tasks and Responsibilities:
Ensure daily tasks and activities have completed successfully; where this is not the case, undertake recovery and remediation steps.
Undertake patching and upgrade activities in support of ParentPay compliance programs, namely PCI DSS, ISO 27001, and Cyber Essentials+.
Action requests from the ServiceNow work queue that have been allocated to your relevant resolver group. These include incidents, problems, changes and service requests.
Investigate alerts and events detected from the monitoring systems that indicate a change in component health.
Create and maintain support documentation in the form of departmental wiki and ServiceNow knowledge articles that allow for continual improvement of fault detection and recovery times.
Work with colleagues to identify and champion the automation of all manual interventions undertaken within the team.
Attend and complete all mandatory training courses.
Engage and own the transition of new services into Service Operations.
Participate in the out of hours on call support rota.
Qualifications and Experience:
Experience working in an IT service delivery or support function OR
MBA or Degree in Information Technology or Information Security.
Experience working with Microsoft technologies.
Excellent communication skills developed working in a service centric organisation.
Ability to interpret fault descriptions provided by customers or internal escalations and translate these into resolutions.
Ability to manage and prioritise own workload.
Experience working within Education Technology would be an advantage.
Technical knowledge:
Advanced automation scripting using Terraform and PowerShell.
Knowledge of Bicep and Ansible is advantageous.
Advanced Microsoft Active Directory configuration and support.
Microsoft Azure and AWS cloud hosting platform administration.
Advanced Microsoft SQL server experience.
Windows Server and desktop management and configuration.
Microsoft IIS web services administration and configuration.
Advanced management of data and SQL backup solutions.
Advanced scripting and automation capabilities.
Advanced knowledge of Azure analytics and KQL.
Skills & Requirements
IT Service Delivery, Information Technology, Information Security, Microsoft Technologies, Communication Skills, Fault Interpretation, Workload Prioritization, Automation Scripting, Terraform, PowerShell, Microsoft Active Directory, Microsoft Azure, AWS, Microsoft SQL Server, Windows Server, Windows Desktop Configuration, Microsoft IIS, Data Backup Management, SQL Backup Solutions, Scripting, Azure Analytics, KQL.


Role - AI Architect
Location - Noida/Remote
Mode - Hybrid - 2 days WFO
As an AI Architect at CLOUDSUFI, you will play a key role in driving our AI strategy for customers in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, and Fintech sectors. You will be responsible for delivering large-scale AI transformation programs for multinational organizations, preferably Fortune 500 companies. You will also lead a team of 10-25 Data Scientists to ensure successful project execution.
Required Experience
● Minimum of 12 years of experience in the Data & Analytics domain, including at least 3 years as an AI Architect
● Master’s or Ph.D. in a discipline such as Computer Science, Statistics or Applied Mathematics with an emphasis or thesis work on one or more of the following: deep learning, machine learning, Generative AI and optimization.
● Must have experience of articulating and presenting business transformation journey using AI / Gen AI technology to C-Level Executives
● Proven experience in delivering large-scale AI and GenAI transformation programs for multinational organizations
● Strong understanding of AI and GenAI algorithms and techniques
● Must have hands-on experience in open-source software development and cloud native technologies especially on GCP tech stack
● Proficiency in Python and prominent ML packages
● Proficiency in neural networks is desirable, though not essential
● Experience leading and managing teams of Data Scientists, Data Engineers and Data Analysts
● Ability to work independently and as part of a team
Additional Skills (Preferred):
● Experience in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, or Fintech sectors
● Knowledge of cloud platforms (AWS, Azure, GCP)
● GCP Professional Cloud Architect and GCP Professional Machine Learning Engineer Certification
● Experience with AI frameworks and tools (TensorFlow, PyTorch, Keras)

Role description:
You will be building curated, enterprise-grade solutions for GenAI application deployment at production scale for clients. The role requires solid, hands-on development and engineering skills for GenAI application deployment, including data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premises. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. A strong ML background combined with engineering skills is highly preferred for this LLMOps role.
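To make the RAG and prompt-engineering part of the role concrete, here is a minimal retrieval sketch, assuming the sentence-transformers package for embeddings; the documents and the final LLM call are placeholders, not part of the role description:

```python
# Minimal RAG retrieval sketch: embed documents, retrieve top-k by cosine similarity,
# and assemble a grounded prompt. The LLM call itself is left as a hypothetical stub.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 on enterprise plans.",
    "Invoices can be downloaded from the billing dashboard.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                      # cosine similarity (vectors are unit-normalised)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

# answer = call_llm(build_prompt("How long do refunds take?"))  # hypothetical LLM client
```

Guardrails, evaluation (for example with RAGAS), and observability would wrap around this core loop in a production deployment.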
Required skills:
- 4-8 years of experience in working on ML projects that includes business requirement gathering, model development, training, deployment at scale and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, Data Engineering, Langchain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked on proprietary and open source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
- Experience in GenAI application deployment on cloud and on-premises at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge on Kubernetes
- Experience in minimum one cloud: AWS / GCP / Azure to deploy AI services
- Experience in creating workable prototypes using Agentic AI frameworks like CrewAI, Taskweaver, AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desirable: experience with open-source tools for ML development, deployment, observability, and integration
- Background on DevOps and MLOps will be a plus
- Experience working on collaborative code versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills


Key Responsibilities:
● Software Development: Iteratively and incrementally design, develop, test and maintain applications and services using C# and .NET.
● CI/CD Pipelines: Develop, maintain, and optimize continuous integration and continuous delivery pipelines using GitLab, Devtron and Kubernetes.
● Teamwork and Pair/Mob Programming: Engage with developers, operations, and team members via pair or mob programming sessions to ensure the highest quality product delivery.
● Containerization: Contribute to the push toward full containerization and zero-downtime deployment goals
● DevOps Practices: Implement and maintain infrastructure as code (IaC) using tools such as Bicep and Terraform.
● Monitoring and Logging: Implement and manage monitoring, logging, and alerting solutions using tools like OpenTelemetry, Prometheus, Grafana to make our products more supportable.
● Design and Architecture: Contribute to on-going discussion of our evolving software design and architecture.
● Cloud Management: Assist in managing and optimizing our private cloud infrastructure (VMWare Tanzu) to ensure high availability, scalability, and efficient resource usage.
● Security: Implement security best practices and ensure compliance with relevant regulations and standards.
● Automation: Identify opportunities for automation to improve efficiency, reduce manual effort, and de-skill deployment, testing, and maintenance tasks.
● Troubleshooting: Diagnose and resolve infrastructure and application issues promptly and effectively.
● Documentation: Create and maintain comprehensive documentation for applications, infrastructure, processes, and procedures.
● Continuous Improvement: Advocate for and implement best practices promoting a culture of continuous improvement.
Qualifications:
Minimum of 5 years of experience in software development, particularly with C# and .NET.
Must have skills:
● Strong knowledge of C# programming language and .NET stack
● Familiarity with CI/CD tools and practices, including TDD.
● Understanding of DevOps principles
● Strong collaboration and communication skills.
Nice to have skills
● Familiarity with containerization and orchestration tools like Docker and Kubernetes is a plus.
● Knowledge of infrastructure as code (IaC).
● Experience in scripting languages such as Python, Bash, or PowerShell is a bonus.
● Excellent problem-solving skills and attention to detail.
● Experience with pair/mob programming.
● Understanding of networking concepts and security best practices.
Job Description
We are seeking a skilled DevOps Specialist to join our global automotive team. As DevOps Specialist, you will be responsible for managing operations, system monitoring, troubleshooting, and supporting automation workflows to ensure operational stability and excellence for enterprise IT projects. You will be providing support for critical application environments for industry leaders in the automotive industry.
Responsibilities:
Perform daily maintenance tasks covering application availability and response times, proactive incident tracking on system logs, and resource monitoring.
Incident Management: Monitor and respond to tickets raised by the DevOps team or end-users.
Support users with prepared troubleshooting guides. Maintain detailed incident logs, track SLAs, and prepare root cause analysis reports.
Change & Problem Management: Support scheduled changes, releases, and maintenance activities. Assist in identifying and tracking recurring issues.
Documentation & Communication: Maintain process documentation, runbooks, and knowledge base articles. Provide regular updates to stakeholders on incidents and resolutions.
Tool & Platform Support: Manage and troubleshoot CI/CD tools (e.g., Jenkins, GitLab), container platforms (e.g., Docker, Kubernetes), and cloud services (e.g., AWS, Azure).
Requirements:
DevOps Skillset: Logfile analysis/troubleshooting (ELK Stack), Linux administration, Monitoring (AppDynamics, Checkmk, Prometheus, Grafana), Security (Black Duck, SonarQube, Dependabot, OWASP or similar)
Experience with Docker.
Familiarity with DevOps principles and ticketing tools like ServiceNow.
Experience in handling confidential data and safety sensitive systems
Strong analytical, communication, and organizational abilities. Easy to work with.
Optional: Experience with our relevant business domain (Automotive / Manufacturing industry, especially production management systems). Familiarity with IT process frameworks SCRUM, ITIL.
Skills & Requirements
DevOps, Logfile Analysis, Troubleshooting, ELK Stack, Linux Administration, Monitoring, AppDynamics, Checkmk, Prometheus, Grafana, Security, Black Duck, SonarQube, Dependabot, OWASP, Docker, CI/CD, Jenkins, GitLab, Kubernetes, AWS, Azure, ServiceNow, Incident Management, Change Management, Problem Management, Documentation, Communication, Analytical Skills, Organizational Skills, SCRUM, ITIL, Automotive Industry, Manufacturing Industry, Production Management Systems.
We are seeking a highly motivated and experienced DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will play a crucial role in building, automating, and maintaining our infrastructure and deployment pipelines. You will collaborate closely with development and operations teams to ensure the smooth and efficient delivery of our software products. You will champion automation, scalability, and reliability, driving continuous improvement in our development and operational processes.
Responsibilities:
- Infrastructure as Code (IaC):
- Design, implement, and maintain infrastructure as code using tools like Terraform, CloudFormation, or similar.
- Manage and provision cloud resources (AWS, Azure, GCP) effectively.
- Continuous Integration/Continuous Delivery (CI/CD):
- Design, build, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or GitHub Actions.
- Automate software builds, testing, and deployments.
- Implement and manage containerization and orchestration technologies (Docker, Kubernetes).
- Configuration Management:
- Utilize configuration management tools like Ansible, Chef, or Puppet to automate system configurations.
- Ensure consistent and reproducible environments across development, testing, and production.
- Monitoring and Logging:
- Implement and manage monitoring and logging solutions (Prometheus, Grafana, ELK stack, Datadog).
- Proactively identify and resolve performance bottlenecks and system issues.
- Establish and maintain alerting systems.
Infrastructure as Code (IaC):
- Design, implement, and maintain infrastructure as code using tools like Terraform, CloudFormation, or similar.
- Automate infrastructure provisioning and configuration across multiple environments (development, staging, production).
CI/CD Pipelines:
- Design, build, and maintain robust CI/CD pipelines using tools like Jenkins, GitLab CI/CD, CircleCI, or GitHub Actions.
- Implement automated testing, build, and deployment processes.
- Optimize pipelines for speed, reliability, and security.
Cloud Infrastructure:
- Manage and optimize cloud infrastructure on platforms like AWS, Azure, or GCP.
- Monitor and troubleshoot cloud infrastructure performance and availability.
- Implement security best practices for cloud environments.
- Implement cost optimization strategies for cloud resources.
Job Summary
We are seeking a skilled Snowflake Developer to design, develop, migrate, and optimize Snowflake-based data solutions. The ideal candidate will have hands-on experience with Snowflake, SQL, and data integration tools to build scalable and high-performance data pipelines that support business analytics and decision-making.
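For illustration of the pipeline work this summary describes, below is a minimal, hypothetical ELT sketch using the snowflake-connector-python package; the account, warehouse, stage, and table names are made up for the example:

```python
# Hypothetical ELT sketch: copy staged files into a raw table, then transform in-database.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_USER",
    password="***",                    # in practice, fetch from a secrets manager
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
try:
    # ELT pattern: land raw files from a named stage, then aggregate inside Snowflake
    cur.execute(
        "COPY INTO staging.orders_raw FROM @raw_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("""
        INSERT INTO analytics.orders_daily
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM staging.orders_raw
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()
```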
Key Responsibilities:
Develop and implement Snowflake data warehouse solutions based on business and technical requirements.
Design, develop, and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and processing.
Write and optimize complex SQL queries for data retrieval, performance enhancement, and storage optimization.
Collaborate with data architects and analysts to create and refine efficient data models.
Monitor and fine-tune Snowflake query performance and storage optimization strategies for large-scale data workloads.
Ensure data security, governance, and access control policies are implemented following best practices.
Integrate Snowflake with various cloud platforms (AWS, Azure, GCP) and third-party tools.
Troubleshoot and resolve performance issues within the Snowflake environment to ensure high availability and scalability.
Stay updated on Snowflake best practices, emerging technologies, and industry trends to drive continuous improvement.
Qualifications:
Education: Bachelor’s or master’s degree in computer science, Information Systems, or a related field.
Experience:
6+ years of experience in data engineering, ETL development, or similar roles.
3+ years of hands-on experience in Snowflake development.
Technical Skills:
Strong proficiency in SQL, Snowflake Schema Design, and Performance Optimization.
Experience with ETL/ELT tools like dbt, Talend, Matillion, or Informatica.
Proficiency in Python, Java, or Scala for data processing.
Familiarity with cloud platforms (AWS, Azure, GCP) and integration with Snowflake.
Experience with data governance, security, and compliance best practices.
Strong analytical, troubleshooting, and problem-solving skills.
Communication: Excellent communication and teamwork abilities, with a focus on collaboration across teams.
Preferred Skills:
Snowflake Certification (e.g., SnowPro Core or Advanced).
Experience with real-time data streaming using tools like Kafka or Apache Spark.
Hands-on experience with CI/CD pipelines and DevOps practices in data environments.
Familiarity with BI tools like Tableau, Power BI, or Looker for data visualization and reporting.
Snowflake Architect
Job Summary: We are looking for a Snowflake Architect to lead the design, architecture, and migration of customer data into the Snowflake DB. This role will focus on creating a consolidated platform for analytics, driving data modeling and migration efforts while ensuring high-performance and scalable data solutions. The ideal candidate should have extensive experience in Snowflake, cloud data warehousing, data engineering, and best practices for optimizing large-scale architectures.
Key Responsibilities:
Architect and implement Snowflake data warehouse solutions based on technical and business requirements.
Define and enforce best practices for performance, security, scalability, and cost optimization in Snowflake.
Design and build ETL/ELT pipelines for data ingestion and transformation.
Collaborate with stakeholders to understand data requirements and create efficient data models.
Optimize query performance and storage strategies for large-scale data workloads.
Work with data engineers, analysts, and business teams to ensure seamless data access.
Implement data governance, access controls, and security best practices.
Troubleshoot and resolve performance bottlenecks in Snowflake.
Stay updated on Snowflake features and industry trends to drive innovation.
Qualifications:
Bachelor’s or master’s degree in computer science, Information Systems, or related field.
10+ years of experience in data engineering or architecture.
5+ years of hands-on experience with Snowflake architecture, administration, and development.
Expertise in SQL, Snowflake Schema Design, and Performance Optimization.
Experience with ETL/ELT tools such as dbt, Talend, Matillion, or Informatica.
Proficiency in Python, Java, or Scala for data processing.
Knowledge of cloud platforms (AWS, Azure, GCP) and Snowflake integration.
Experience with data governance, security, and compliance best practices.
Strong problem-solving skills and the ability to work in a fast-paced environment.
Excellent communication and stakeholder management skills.
Preferred Skills:
Experience in the customer engagement or contact center industry.
Familiarity with DevOps practices, containerization (Docker, Kubernetes), and infrastructure-as-code.
Knowledge of distributed systems, performance tuning, and scalability.
Familiarity with security best practices and secure coding standards.
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled DevOps Service Desk Specialist to join our global DevOps team. As a DevOps Service Desk Specialist, you will be responsible for managing service desk operations, system monitoring, troubleshooting, and supporting automation workflows to ensure operational stability and excellence for enterprise IT projects. You will be providing 24/7 support for critical application environments for industry leaders in the automotive industry.
Responsibilities:
Incident Management: Monitor and respond to tickets raised by the DevOps team or end-users.
Support users with prepared troubleshooting guides. Maintain detailed incident logs, track SLAs, and prepare root cause analysis reports.
Change & Problem Management: Support scheduled changes, releases, and maintenance activities. Assist in identifying and tracking recurring issues.
Documentation & Communication: Maintain process documentation, runbooks, and knowledge base articles. Provide regular updates to stakeholders on incidents and resolutions.
On-Call Duty: staffing of a 24/7 on-call service, handling of incidents outside normal office hours, escalation to and coordination with the onsite team, customers and 3rd parties where necessary.
Tool & Platform Support: Manage and troubleshoot CI/CD tools (e.g., Jenkins, GitLab), container platforms (e.g., Docker, Kubernetes), and cloud services (e.g., AWS, Azure).
Requirements:
Technical experience in Java Enterprise environment as developer or DevOps specialist.
Familiarity with DevOps principles and ticketing tools like ServiceNow.
Strong analytical, communication, and organizational abilities. Easy to work with.
Strong problem-solving, communication, and ability to work in a fast-paced 24/7 environment.
Optional: Experience with our relevant business domain (Automotive industry). Familiarity with IT process frameworks SCRUM, ITIL.
Skills & Requirements
Java Enterprise, DevOps, Service Desk, Incident Management, Change Management, Problem Management, System Monitoring, Troubleshooting, Automation Workflows, CI/CD, Jenkins, GitLab, Docker, Kubernetes, AWS, Azure, ServiceNow, Root Cause Analysis, Documentation, Communication, On-Call Support, ITIL, SCRUM, Automotive Industry.
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines to automate and streamline deployment processes.
- Manage and optimize cloud infrastructure (AWS, GCP, Azure) to ensure scalability, performance, and cost-efficiency.
- Implement and monitor system performance, ensuring high availability and uptime of services.
- Collaborate with software developers to ensure code is efficiently deployed and integrated into production systems.
- Automate routine tasks and processes to increase productivity and reduce errors.
- Build and maintain monitoring, logging, and alerting systems to ensure proactive issue resolution.
- Develop and maintain system configurations, security settings, and deployment scripts.
- Troubleshoot and resolve issues with application performance, infrastructure, and deployments.
- Ensure that systems are compliant with security best practices and regulatory standards.
- Stay up-to-date with the latest DevOps tools, trends, and technologies to continually improve operations.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation (a minimal sketch follows this list).
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
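As a flavour of the infrastructure-as-code automation in the provisioning bullet above, here is a minimal Pulumi sketch in Python; the bucket name and tags are illustrative assumptions, and it presumes an existing Pulumi project with AWS credentials configured:

```python
# Illustrative Pulumi program: a private, versioned S3 bucket for build artifacts.
# Run with `pulumi up` inside a configured Pulumi project; names/tags are examples only.
import pulumi
import pulumi_aws as aws

artifacts = aws.s3.Bucket(
    "build-artifacts",
    acl="private",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
    tags={"team": "platform", "managed-by": "pulumi"},
)

pulumi.export("artifacts_bucket_name", artifacts.id)
```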
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (cloud cost optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.

Location: Malleshwaram/MG Road
Work: Initially Onsite and later Hybrid
We are committed to becoming a true DevOps house and want your help. The role will require close liaison with development and test teams to increase the effectiveness of current dev processes. Participation in an out-of-hours emergency support rota will be required. You will be shaping the way that we use our DevOps tools and innovating to deliver business value and improve the cadence of the entire dev team.
Required Skills:
• Good knowledge of the Amazon Web Services suite (EC2, ECS, load balancing, VPC, S3, RDS, Lambda, CloudWatch, IAM, etc.)
• Hands-on knowledge of container orchestration tools – must have: AWS ECS; good to have: AWS EKS
• Good knowledge of creating and maintaining infrastructure as code using Terraform
• Solid experience with CI/CD tools like Jenkins, Git, and Ansible
• Working experience supporting microservices (deploying, maintaining, and monitoring Java web-based production applications using Docker containers) – see the deployment sketch below
• Strong knowledge of debugging production issues across the services and technology stack, and of application monitoring (we use Splunk & CloudWatch)
• Experience with software build tools (Maven and Node)
• Experience with scripting and automation languages (Bash, Groovy, JavaScript, Python)
• Experience with Linux administration and CVE scans – Amazon Linux, Ubuntu
• 4+ years as an AWS DevOps Engineer
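For a taste of the deployment automation described in the microservices bullet above, here is a small, hypothetical boto3 sketch that forces a new deployment of an ECS service after an image update; the cluster, service, and region names are illustrative:

```python
# Hypothetical deployment helper: force an ECS service to roll out its latest task definition.
import boto3

def redeploy(cluster: str, service: str, region: str = "ap-south-1") -> str:
    ecs = boto3.client("ecs", region_name=region)
    resp = ecs.update_service(cluster=cluster, service=service, forceNewDeployment=True)
    return resp["service"]["deployments"][0]["id"]

if __name__ == "__main__":
    deployment_id = redeploy("prod-cluster", "orders-service")
    print(f"Triggered ECS deployment {deployment_id}")
```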
Optional skills:
• Oracle/SQL database maintenance experience
• Elastic Search
• Serverless/container based approach
• Automated testing of infrastructure deployments
• Experience of performance testing & JVM tuning
• Experience of a high-volume distributed eCommerce environment
• Experience working closely with Agile development teams
• Familiarity with load testing tools & process
• Experience with nginx, tomcat and apache
• Experience with Cloudflare
Personal attributes
The successful candidate will be comfortable working autonomously and independently.
They will be keen to bring an entire team to the next level of delivering business value.
A proactive approach to problem-solving.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.
🚀 Hiring: Azure DevOps Engineer – Immediate Joiners Only! 🚀
📍 Location: Pune (Hybrid)
💼 Experience: 5+ Years
🕒 Mode of Work: Hybrid
Are you a proactive and skilled Azure DevOps Engineer looking for your next challenge? We are hiring immediate joiners to join our dynamic team! If you are passionate about CI/CD, cloud automation, and SRE best practices, we want to hear from you.
🔹 Key Skills Required:
✅ Cloud Expertise: Proficiency in any cloud (Azure preferred)
✅ CI/CD Pipelines: Hands-on experience in designing and managing pipelines
✅ Containers & IaC: Strong knowledge of Docker, Terraform, Kubernetes
✅ Incident Management: Quick issue resolution and RCA
✅ SRE & Observability: Experience with SLI/SLO/SLA, monitoring, tracing, logging
✅ Programming: Proficiency in Python, Golang
✅ Performance Optimization: Identifying and resolving system bottlenecks
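As a back-of-the-envelope illustration of the SLI/SLO work mentioned above, here is a tiny Python sketch of error-budget arithmetic; the SLO target and downtime figures are examples only:

```python
# Illustrative error-budget arithmetic for an availability SLO over a rolling window.
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime (minutes) implied by an availability SLO."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, observed_downtime_min: float, window_days: int = 30) -> float:
    return error_budget_minutes(slo, window_days) - observed_downtime_min

if __name__ == "__main__":
    print(round(error_budget_minutes(0.999), 1))       # 99.9% over 30 days -> 43.2 minutes
    print(round(budget_remaining(0.999, 12.5), 1))     # 30.7 minutes of budget left
```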

Immediate Joiners Preferred. Notice Period - Immediate to 30 Days
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are looking for a skilled and experienced Senior Frontend Developer specializing in frontend development to join our team. In this role, you will be responsible for designing and implementing user interfaces that enhance our customer experience. Your contributions will play a critical role in driving the success of our projects by creating dynamic, intuitive, and responsive web applications.
Responsibilities:
Develop, maintain, and enhance web applications using front end tools to create seamless, user-friendly experiences.
Collaborate with cross-functional teams, including UX/UI designers, product managers, and backend developers, to deliver high-quality products.
Write clean, maintainable, and scalable code while adhering to best practices in frontend development.
Perform code reviews, optimize application performance, and debug issues for a smooth user experience.
Stay updated on the latest frontend features and web development trends to bring innovative ideas to the team.
Skills & Requirements
Angular 18+, TypeScript, Async Programming (e.g. Observables, Promises, RxJS, Signals), State management (e.g. ngRx, ngXs), Component libraries (e.g Angular CDK, PrimeNG), React.JS, State management (e.g. Redux), Component libraries, NextJS, Vue.js, TypeScript, State management (e.g. VueX, Redux), Component libraries (e.g Vuetify), NuxtJs, APIs (e.g. REST, RESTful API), Async Programming (e.g. Observables, Promises), Lazy Loading, Unit testing (e.g Jest), Integration testing / E2E testing (e.g Cypress), Linting (e.g. ESLint, Prettier, SonarLint), Build Tools (e.g. npm, webpack), Dev Tools (e.g. Git, Atlassian Jira, Atlassian Confluence), Single-Sign-On (e.g. OAuth2, Open ID Connect, JWT, SAML), Progressive Web Apps / PWA, Other APIs (e.g. GraphQL, WebSocket, Server-Sent-Events / SSE, gRPC), Containerisation (e.g. Docker, Kubernetes / K8s), CI/CD (e.g. Gitlab, GitHub, Jenkins, Bamboo, Azure DevOps), Cloud experience (e.g. AWS, Azure, GCP).

Immediate Joiners Preferred. Notice Period - Immediate to 30 Days
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
About SAP Fioneer
Innovation is at the core of SAP Fioneer. We were spun out of SAP to drive agility, innovation, and delivery in financial services. With a foundation in cutting-edge technology and deep industry expertise, we elevate financial services through digital business innovation and cloud technology.
A rapidly growing global company with a lean and innovative team, SAP Fioneer offers an environment where you can accelerate your future.
Product Technology Stack
- Languages & Tooling: PowerShell, Microsoft Graph PowerShell (MgGraph), Git
- Storage & Databases: Azure Storage, Azure Databases
Role Overview
As a Senior Cloud Solutions Architect / DevOps Engineer, you will be part of our cross-functional IT team in Bangalore, designing, implementing, and managing sophisticated cloud solutions on Microsoft Azure.
Key Responsibilities
Architecture & Design
- Design and document architecture blueprints and solution patterns for Azure-based applications.
- Implement hierarchical organizational governance using Azure Management Groups.
- Architect modern authentication frameworks using Azure AD/EntraID, SAML, OpenID Connect, and Azure AD B2C.
Development & Implementation
- Build closed-loop, data-driven DevOps architectures using Azure Insights.
- Apply code-driven administration practices with PowerShell, MgGraph, and Git.
- Deliver solutions using Infrastructure as Code (IaC), CI/CD pipelines, GitHub Actions, and Azure DevOps.
- Develop IAM standards with RBAC and EntraID.
Leadership & Collaboration
- Provide technical guidance and mentorship to a cross-functional Scrum team operating in sprints with a managed backlog.
- Support the delivery of SaaS solutions on Azure.
Required Qualifications & Skills
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in cloud solutions architecture and DevOps engineering.
- Extensive expertise in Azure services, core web technologies, and security best practices.
- Hands-on experience with IaC, CI/CD, Git, and pipeline automation tools.
- Strong understanding of IAM, security best practices, and governance models in Azure.
- Experience working in Scrum-based environments with backlog management.
- Bonus: Experience with Jenkins, Terraform, Docker, or Kubernetes.
Benefits
- Work with some of the brightest minds in the industry on innovative projects shaping the financial sector.
- Flexible work environment encouraging creativity and innovation.
- Pension plans, private medical insurance, wellness cover, and additional perks like celebration rewards and a meal program.
Diversity & Inclusion
At SAP Fioneer, we believe in the power of innovation that every employee brings and are committed to fostering diversity in the workplace.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG

Job Title: AI/ML Engineer (GenAI & System Modernization)
Experience: 4 to 8 years
Work Mode: Hybrid
Location: Bangalore
Job Overview:
We are seeking AI engineers passionate about Generative AI and AI-assisted modernization to build cutting-edge solutions. The candidate will work on in-house GenAI applications, support AI adoption across the organization, collaborate with vendors for POCs, and contribute to legacy system modernization using AI-driven automation.
Key Responsibilities:
- Design & develop in-house GenAI applications for internal use cases.
- Collaborate with vendors to evaluate and implement POCs for complex AI solutions.
- Work on AI-powered tools for SDLC modernization and code migration (legacy to modern tech).
- Provide technical support, training, and AI adoption strategies for internal users & teams.
- Assist in integrating AI solutions into software development processes.
Must Have:
- Bachelor’s / Master’s degree in Computer Science / Computer Engineering / Computer Applications / Information Technology OR AI/ML-related field.
- Relevant certification in AI/ML is an added advantage.
- Minimum of 2 successful AI/ML POCs or production deployments.
- Prior experience in AI-powered automation, AI-based DevOps, or AI-assisted coding.
- Proven track record of teamwork and successful project delivery.
- Strong analytical and problem-solving skills with a continuous learning mindset.
- A positive, can-do attitude with attention to detail.
AI/ML Expertise:
- Strong hands-on Python programming experience.
- Experience with Generative AI models and LLMs/SLMs (worked on real projects or POCs).
- Hands-on experience in Machine Learning & Deep Learning.
- Experience with AI/ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face).
- Understanding of MLOps pipelines and AI model deployment.
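Purely as an illustration of the kind of LLM tooling listed above (not part of the role description), here is a minimal Hugging Face `transformers` sketch; the model name and prompt are placeholders:

```python
# Illustrative only: a tiny text-generation call via the transformers pipeline API.
# "distilgpt2" is a placeholder model chosen for size, not a recommendation.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator(
    "Summarize the benefits of AI-assisted code migration:",
    max_new_tokens=60,
    do_sample=True,
)
print(result[0]["generated_text"])
```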
System Modernization & Enterprise Tech Familiarity:
- Basic understanding of Java, Spring, React, Kotlin, and Groovy.
- Experience and willingness to work on AI-driven migration projects (e.g., PL/SQL to Emery, JS to Groovy DSL).
- Experience with code quality, AI-assisted code refactoring, and testing frameworks.
Enterprise Integration & AI Adoption:
- Ability to integrate AI solutions into enterprise workflows.
- Experience with API development & AI model integration in applications.
- Familiarity with version control & collaboration tools (Git, CI/CD pipelines).
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
Our Professional Services team seeks a Cloud Engineer with a focus on public clouds for professional services engagements. In this role, the candidate will ensure the success of our engagements by providing deployment, configuration, and operationalization of cloud infrastructure as well as various other technologies such as on-prem, OpenShift, and hybrid environments.
A successful candidate for this position requires a good understanding of the public cloud systems (AWS, Azure) as well as a working knowledge of systems technologies, common enterprise software (Linux, Windows, Active Directory), cloud technologies (Kubernetes, VMware ESXi), and a good understanding of cloud automation (Ansible, CDK, Terraform, CloudFormation). The ideal candidate has industry experience and is confident working in a cross-functional team environment that is global in reach.
Key Roles & Responsibilities:
- Public Cloud: Lead Public Cloud deployments for our Cloud Engineering customers, including setup, automation, configuration, documentation, and troubleshooting. Red Hat OpenShift on AWS/Azure experience is preferred.
- Automation: Develop and maintain automated testing systems to ensure uniform and reproducible deployments for common infrastructure elements using tools like Ansible, Terraform, and CDK.
- Support: In this role, the candidate may need to support the environment as part of the engagement through hand-off. Requisite knowledge of operations will be required.
- Documentation: The role can require significant documentation of the deployment and the steps needed to maintain the system. The Cloud Engineer will also be responsible for all documentation required for customer hand-off.
- Customer Skills: This position is customer-facing, and effective communication and customer service are essential.
Basic Qualifications:
- Bachelor's or Master's degree in computer programming or quality assurance.
- 5-8 years as an IT Engineer or DevOps Engineer with automation skills and AWS or Azure experience, preferably in Professional Services.
- Proficiency in enterprise tools (Grafana, Splunk etc.), software (Windows, Linux, Databases, Active Directory, VMware ESXi, Kubernetes) and techniques (Knowledge of Best Practices).
- Demonstrable proficiency with automation packages (Ansible, Git, CDK, Terraform, CloudFormation, Python); see the sketch after this list.
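As one hedged illustration of the automation packages named above, a minimal AWS CDK v2 app in Python; the stack and bucket names are invented for the example:

```python
# Hedged sketch: a minimal AWS CDK v2 app (pip install aws-cdk-lib constructs).
# Stack and bucket names are invented for the example.
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DemoStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Versioned bucket that is destroyed with the stack (demo only).
        s3.Bucket(
            self,
            "LogsBucket",
            versioned=True,
            removal_policy=RemovalPolicy.DESTROY,
        )

app = App()
DemoStack(app, "demo-infra")
app.synth()
```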
Preferred Qualifications
- Exceptional communication and interpersonal skills.
- Strong ownership abilities, attention to detail.
Immediate Joiners Preferred. Notice Period - Immediate to 30 days.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled Full Stack (FSE) Developer with expertise in Java and Spring Boot development to join our dynamic team. In this role, you will be responsible for the development of critical banking applications across various business domains. You will collaborate closely with cross-functional teams to ensure high-quality solutions are developed, maintained, and continuously improved.
Responsibilities:
Development of business-critical banking applications.
Develop new features for banking applications using FSE technologies.
Ensure code quality through proper testing, reviews, and adherence to coding standards.
Collaborate with design, backend, and other teams to deliver seamless user experiences.
Troubleshoot, debug, and optimize performance issues.
Participate in agile development processes and contribute to continuous improvement initiatives.
Requirements:
Bachelor's/Master's degree in Computer Science, Software Engineering, or a related field.
4 – 6 years of relevant experience in application development.
Solid experience in:
Java, Spring Boot.
APIs / REST.
Kubernetes / OpenShift.
Azure DevOps.
JMS, Message Queues.
Nice to have knowledge in:
Quarkus.
Apache Camel.
Soft skills / Personality:
Excellent English communication skills / proactive communication.
Self-dependent working style.
Problem-solving skills (strong analytical skills to identify and solve complex issues).
High adaptability (flexibility in adjusting to different working environments and practices).
Critical thinking (evaluating information and making informed decisions quickly).
Team collaboration (ability to work collaboratively with a distributed team).
Cultural awareness.
Initiative (proactively seeking solutions and improvements).
Good to have: knowledge of co-banking systems.
Good to have: banking domain knowledge.
Customer-facing experience is an advantage.
Skills & Requirements
Java, Spring Boot, APIs/REST, Kubernetes, OpenShift, Azure DevOps, JMS, Message Queues, Quarkus, Apache Camel, Excellent English communication, Proactive communication, Self-dependent working, Problem-solving, Analytical skills, Adaptability, Critical thinking, Team collaboration, Cultural awareness, Initiative, Co-banking systems knowledge, Banking domain knowledge, Customer-facing experience.

Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are looking for an experienced Senior Software Developer for IT Services Management Team.
In this role, you will be responsible for the application management of a B2C application to meet the agreed Service Level Agreements (SLAs) and fulfil customer expectations.
You will be part of an on-call duty team covering 6:00 PM to 8:00 AM IST, 365 days a year. We are seeking a hands-on Software Developer with strong frontend experience and DevOps knowledge.
Responsibilities:
Problem Management & Incident Management activities: Identifying and resolving technical issues and errors that arise during application usage.
Release and Update Coordination: Planning and executing software updates, new versions, or system upgrades to keep applications up to date.
Change Management: Responsible for implementing and coordinating changes to the application, considering the impact on ongoing operations.
Requirements:
Education and Experience:
A bachelor’s or master’s degree in a relevant field, with a minimum of 5 years of professional experience or equivalent work experience.
Skills & Expertise:
Proficient in ITIL service management frameworks.
Strong analytical and problem-solving abilities.
Experienced in project management methodologies (Agile, Kanban).
Excellent English Communication, presentation, and interpersonal skills, with the ability to engage and collaborate with stakeholders.
Experience in 24x7 application on-call support and readiness to work in shifts in a hybrid/WFH mode.
This assignment is a long-term contract for more than 3 years.
Good knowledge of the frontend and DevOps technologies in the skill set given below.
Skills & Requirements
React JS (v17+), Tanstack, Context, React Router, Redux, Webpack / Vite, Storybook, Jest, React Testing Library, Vitest, Cypress, npm, Restful API, Azure, Azure Devops, Kubernetes, kubeAPI, docker/container, Debug Tools, openSSL, Kustomize, Curl.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are looking for an experienced Technical Team Lead to guide a local IT Services Management Team while also acting as a software developer. In this role, you will be responsible for the application management of a B2C application to meet the agreed Service Level Agreements (SLAs) and fulfil customer expectations.
Your team will act as an on-call duty team between 6 PM and 8 AM, 365 days a year. You will work together with the responsible Senior Project Manager in Germany.
We are seeking a hands-on leader who thrives in both team management and operational development. Whether you have experience in DevOps and Backend or Frontend, your expertise in both leadership and technical skills will be key to success in this position.
Responsibilities:
Problem Management & Incident Management activities: Identifying and resolving technical issues and errors that arise during application usage.
Release and Update Coordination: Planning and executing software updates, new versions, or system upgrades to keep applications up to date.
Change Management: Responsible for implementing and coordinating changes to the application, considering the impact on ongoing operations.
Requirements:
Education and Experience: A Bachelor’s or Master’s degree in a relevant field, with a minimum of 5 years of professional experience or equivalent work experience.
Skills & Expertise:
Proficient in ITIL service management frameworks.
Strong analytical and problem-solving abilities.
Experienced in project management methodologies (Agile, Kanban).
Leadership: Very good leadership skills with a customer-oriented, proactive, and results-driven approach.
Communication: Excellent communication, presentation, and interpersonal skills, with the ability to engage and collaborate with stakeholders.
Language: English on a C2 Level.
Skills & Requirements
kubeAPI (high), Kustomize (high), Docker/containers (high), debug tools: openSSL (high), curl (high), Azure DevOps (Pipelines, Repositories, Deployments), ArgoCD, Certificate Management / SSL (LetsEncrypt), Linux shell, Keycloak.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which includes the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented DevOps-Engineers with focus on Elastic Stack (ELK) to join our dynamic DPS team. In this role, you will be responsible for refining and advising on the further development of an existing monitoring solution based on the Elastic Stack (ELK). You will independently handle tasks related to architecture, setup, technical migration, and documentation.
The current application landscape features multiple Java web services running on JEE application servers, primarily hosted on AWS, and integrated with various systems such as SAP, other services, and external partners. DPS is committed to delivering the best digital work experience for the customer's employees and customers alike.
Responsibilities:
Install, set up, and automate rollouts using Ansible/CloudFormation for all stages (Dev, QA, Prod) in the AWS Cloud for components such as Elasticsearch, Kibana, Metricbeat, APM Server, APM agents, and interface configuration.
Create and develop regular "Default Dashboards" for visualizing metrics from various sources like Apache Webserver, application servers and databases.
Improve and fix bugs in installation and automation routines.
Monitor CPU usage, security findings, and AWS alerts.
Develop and extend "Default Alerting" for issues like OOM errors, datasource issues, and LDAP errors.
Monitor storage space and create concepts for expanding the Elastic landscape in AWS Cloud and Elastic Cloud Enterprise (ECE).
Implement machine learning, uptime monitoring including SLA, JIRA integration, security analysis, anomaly detection, and other useful ELK Stack features.
Integrate data from AWS CloudWatch.
Document all relevant information and train involved personnel in the used technologies.
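As a rough sketch of the dashboarding and alerting groundwork described above (not a prescribed implementation), here is how recent error logs could be queried with the official `elasticsearch` Python client; the endpoint, credentials, index pattern, and field names are assumptions:

```python
# Hedged sketch: pull the last 15 minutes of error-level Apache log entries.
# Endpoint, credentials, index pattern, and field names are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://elastic.example.internal:9200",
    basic_auth=("elastic", "changeme"),
)

resp = es.search(
    index="filebeat-apache-*",
    query={
        "bool": {
            "must": [{"match": {"log.level": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    size=20,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"].get("message"))
```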
Requirements:
Experience with Elastic Stack (ELK) components and related technologies.
Proficiency in automation tools like Ansible and CloudFormation.
Strong knowledge of AWS Cloud services.
Experience in creating and managing dashboards and alerts.
Familiarity with IAM roles and rights management.
Ability to document processes and train team members.
Excellent problem-solving skills and attention to detail.
Skills & Requirements
Elastic Stack (ELK), Elasticsearch, Kibana, Logstash, Beats, APM, Ansible, CloudFormation, AWS Cloud, AWS CloudWatch, IAM roles, AWS security, Automation, Monitoring, Dashboard creation, Alerting, Anomaly detection, Machine learning integration, Uptime monitoring, JIRA integration, Apache Webserver, JEE application servers, SAP integration, Database monitoring, Troubleshooting, Performance optimization, Documentation, Training, Problem-solving, Security analysis.
Job Overview:
We are seeking a highly skilled AWS Cloud SRE and Operations Engineer to join our cloud infrastructure team. The ideal candidate will be responsible for ensuring the reliability, availability, and performance of our AWS cloud infrastructure while automating and optimizing operational processes. You will play a key role in maintaining robust cloud environments, monitoring systems, troubleshooting issues, and enhancing the overall scalability and security of cloud-based applications.
Responsibilities:
- AWS Infrastructure Management: Design, implement, and manage scalable, secure, and reliable AWS infrastructure to support cloud-based applications.
- Site Reliability Engineering (SRE): Ensure the high availability, performance, and scalability of cloud environments through effective monitoring, automation, and incident response.
- Operations & Monitoring: Implement and maintain comprehensive monitoring solutions (e.g., CloudWatch, Datadog, Prometheus) to ensure visibility into the health and performance of applications and infrastructure.
- Automation: Automate operational tasks and processes, including provisioning, configuration management, deployment, and scaling of cloud resources using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Incident Response & Troubleshooting: Manage and respond to incidents, troubleshoot performance bottlenecks, and conduct root cause analysis to ensure system reliability and uptime.
- CI/CD Pipeline Support: Collaborate with development teams to optimize Continuous Integration/Continuous Deployment (CI/CD) pipelines and ensure smooth, automated deployments and rollbacks.
- Security & Compliance: Implement best practices for cloud security, including identity and access management (IAM), network security, encryption, and compliance with industry regulations.
- Backup & Disaster Recovery: Implement and manage backup strategies, disaster recovery plans, and high-availability architectures to ensure data integrity and system continuity.
- Performance Optimization: Continuously analyze and improve the performance of cloud resources, applications, and network traffic to optimize cost, speed, and availability.
- Capacity Planning: Monitor and plan for future capacity needs, scaling resources based on demand and application performance requirements.
- Documentation: Create and maintain detailed technical documentation, including architecture diagrams, operational procedures, and incident reports.
- Collaboration: Work closely with cross-functional teams, including development, QA, and security, to ensure smooth operations, deployments, and troubleshooting processes.
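As a minimal, hedged example of the monitoring and automation work described above, creating a CloudWatch CPU alarm with boto3; the region, instance ID, threshold, and SNS topic ARN are placeholders:

```python
# Hedged sketch: a CloudWatch alarm on EC2 CPU utilisation via boto3.
# Region, instance ID, threshold, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="demo-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```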
Qualifications:
- Education: Bachelor’s degree in Computer Science, Information Technology, or related fields.
- Experience:
- 3-5 years of experience as a Cloud SRE, Operations Engineer, or DevOps Engineer with a strong focus on AWS services with Experience in managing large-scale, distributed cloud environments.
- Technical Skills:
- Deep understanding of AWS services (EC2, S3, RDS, Lambda, ECS, EKS, VPC, Route 53, IAM, etc.).
- Experience with Infrastructure as Code (IaC) tools such as Terraform, AWS CloudFormation, or Ansible.
- Strong knowledge of Linux/Unix system administration and shell scripting.
- Proficiency in automation using Python, Bash, or other scripting languages.
- Experience with monitoring and logging tools (e.g., AWS CloudWatch, Datadog, ELK stack, Prometheus, Grafana).
- Hands-on experience with CI/CD tools (Jenkins, GitLab CI, CircleCI, etc.).
- Solid understanding of networking concepts, including DNS, load balancing, VPN, and firewalls.
- Problem-Solving: Strong troubleshooting and problem-solving skills for addressing cloud infrastructure and application issues.
- Communication: Excellent verbal and written communication skills, with the ability to collaborate effectively with teams and stakeholders.
Preferred:
- AWS certification (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.).
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of security best practices and regulatory compliance (e.g., GDPR, HIPAA).
- Familiarity with GitOps, service mesh technologies, and serverless architectures.
- Understanding of Agile methodologies and working in a DevOps environment.

Desired Technical Skills -
- Min 7 years of experience working in scalable cloud based product companies/MNCs
- Must have a deep understanding of Django, React, AWS infrastructure, DevOps, and SQL and NoSQL databases
- Must have managed the architecture and solution design of product and have clear vision to handle the volume of data
- Min 7 years in Django Development with exposure to REST API, Third Party API integration, Social Media API Integrations, Oauth integration
- Must have knowledge of developing Microservices oriented architecture based services using REST APIs with Token based authentication (OAuth, JWT).
- Knowledge of Celery, Nginx, schedulers, and multi-threading is required (a minimal Celery sketch appears after this list)
- Must have ReactJS/Redux knowledge
- Min 2 yrs of frontend development experience is a must
- Must know Postman, Swagger and PostgreSQL with strong unit test and debugging skills.
- Good understanding of code versioning tools such as Git. Should be comfortable in working as an individual and in team.
- Application Deployment Experience is MUST
- In Depth knowledge of AWS Architecture (EC2, RDS, Lambda, Elastic Cache etc.)
- Any knowledge of Bootstrap or Cloud based services (AWS, GCP) is an advantage
- Must have management and communication skills to handle developers' and customers' technical queries
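A minimal Celery sketch of the asynchronous task pattern referenced in the list above; the broker URL, task name, and body are illustrative only:

```python
# Hedged sketch: a retried background task with Celery and a Redis broker.
# Broker URL, task name, and task body are illustrative only.
from celery import Celery

app = Celery("worker", broker="redis://localhost:6379/0")

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def sync_social_profile(self, user_id: int) -> str:
    try:
        # A third-party / social media API call would go here.
        return f"synced profile for user {user_id}"
    except Exception as exc:
        # Retry on transient failures instead of losing the job.
        raise self.retry(exc=exc)
```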
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
We are seeking a seasoned Engineering, Senior Manager (M3 Level) to join our dynamic team. As a first-line manager, you will lead a team of talented engineers, driving technical excellence, fostering a culture of ownership, and ensuring the successful delivery of high-impact projects. You will be responsible for guiding technical decisions, managing team performance, and aligning engineering efforts with business goals.
Responsibilities:
Technical Leadership:
• Provide technical leadership and direction for major projects, ensuring alignment with business goals and industry best practices.
• Be hands-on with code, maintaining high technical standards and actively participating in design and architecture decisions, code reviews, and helping engineers optimize their code.
• Ensure that high standards of performance, scalability, and reliability are maintained when architecting, designing, and developing complex software systems and applications.
• Ensure accountability for the team’s technical decisions and enforce engineering best practices (e.g., documentation, automation, code management, security principles, leverage CoPilot).
• Ensure the health and quality of services and incidents, proactively identifying and addressing issues. Utilize service health indicators and telemetry for action. Implement best practices for operational excellence.
• Play a pivotal role in the R.I.D.E. (Review, Inspect, Decide, Execute) framework.
• Understand CI/CD pipelines from build, test, to deploy phases.
Team Management:
• Lead and manage a team of software engineers, fostering a collaborative and high-performance environment. Conduct regular performance reviews, provide feedback, and support professional development.
• Foster a culture of service ownership and enhance team engagement.
• Drive succession planning and engineering efficiency, focusing on quality and developer experience through data-driven approaches.
• Promote a growth mindset, understanding and driving organizational change.
• Actively seek opportunities for team growth and cross-functional collaboration.
• Works and guides the team on how to operate in a DevOps Model. Taking ownership from working with product management on requirements to design, develop, test, deploy and maintain the software in production.
Minimum Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience.
• 11+ years of experience in software development, with 3+ years in a technical leadership role and 2+ years in a people management role.
• Proven track record of leading and delivering large-scale, complex software projects.
• Deep expertise in one or more programming languages such as Python, Java, JavaScript.
• Extensive experience with software architecture and design patterns.
• Strong understanding of cloud technologies and DevOps principles.
• Excellent problem-solving skills and attention to detail.
• Excellent communication and leadership skills, with a demonstrated ability to influence and drive change.
Mandatory Skills:
- AZ-104 (Azure Administrator) experience
- CI/CD migration expertise
- Proficiency in Windows deployment and support
- Infrastructure as Code (IaC) in Terraform
- Automation using PowerShell
- Understanding of SDLC for C# applications (build/ship/run strategy)
- Apache Kafka experience
- Azure web app
Good to Have Skills:
- AZ-400 (Azure DevOps Engineer Expert)
- AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
- Apache Pulsar
- Windows containers
- Active Directory and DNS
- SAST and DAST tool understanding
- MSSQL database
- Postgres database
- Azure security


Minimum 7 years of experience: Individual Contributor (Backend plus Cloud Infra)
Banking/Payments domain: worked on projects from scratch and scaled them
Experience with Payment Gateway Companies is a plus
Tech Stack
Java Spring Boot, AWS, GCP, REST
Worked on CI/CD (management and setup)
Various SQL and NoSQL databases: PostgreSQL, MongoDB
Responsibilities:
End-to-end coding: from software architecture to managing scaling of high-throughput (100,000 RPS), high-volume transactions.
Discuss business requirements and timelines with management and create a task plan for junior members.
Manage the day-to-day activities of all team members and report their work progress.
Mentoring the team on best coding practices and making sure modules go live on time.
Management of security vulnerabilities.
Be a full individual contributor which means can work in a team as well as alone.
Attitude:
Passion for tech innovation and solving problems
Go-getter attitude
Extremely humble and polite
Experience in Product companies and Managing small teams is a plus
Title: Azure Cloud Developer/Engineer
Exp: 5+ yrs
Location: T-Hub, Hyderabad
Work from office (5 days/week)
Interview rounds: 2-3
Excellent comm skills
Immediate Joiner
Job Description
Position Overview:
We are seeking a highly skilled Azure Cloud Developer/Engineer with experience in designing, developing, and managing cloud infrastructure solutions. The ideal candidate should have a strong background in Azure infrastructure deployment using Terraform, Kubernetes (AKS) with advanced networking, and Helm Charts for application management. Experience with AWS is a plus. This role requires hands-on expertise in deploying scalable, secure, and highly available cloud solutions with strong networking capabilities.
Key Responsibilities:
- Deploy and manage Azure infrastructure using Terraform through CI/CD pipelines.
- Design, deploy, and manage Azure Kubernetes Service (AKS) with advanced networking features, including on-premise connectivity.
- Create and manage Helm Charts, ensuring best practices for configuration, templating, and application lifecycle management.
- Collaborate with development, operations, and security teams to ensure optimal cloud infrastructure architecture.
- Implement high-level networking solutions including Azure Private Link, VNET Peering, ExpressRoute, Application Gateway, and Web Application Firewall (WAF).
- Monitor and optimize cloud environments for performance, cost, scalability, and security using tools like Azure Cost Management, Prometheus, Grafana, and Azure Monitor.
- Develop CI/CD pipelines for automated deployments using Azure DevOps, GitHub Actions, or Jenkins, integrating Terraform for infrastructure automation.
- Implement security best practices, including Azure Security Center, Azure Policy, and Zero Trust Architecture.
- Troubleshoot and resolve issues in the cloud environment using Azure Service Health, Log Analytics, and Azure Sentinel.
- Ensure compliance with industry standards (e.g., CIS, NIST, ISO 27001) and organizational security policies.
- Work with Azure Key Vault for secrets and certificate management.
- Explore multi-cloud strategies, integrating AWS services where necessary.
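As a small, hedged illustration of the Key Vault usage mentioned above (the role itself centres on Terraform and pipelines), reading a secret with the Azure SDK for Python; the vault URL and secret name are placeholders:

```python
# Hedged sketch: fetch a secret from Azure Key Vault with the Python SDK
# (pip install azure-identity azure-keyvault-secrets).
# Vault URL and secret name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://demo-vault.vault.azure.net/",
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("db-connection-string")
print(secret.name, "retrieved; value length:", len(secret.value))
```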
Key Skills Required:
- Azure Cloud Infrastructure Deployment: Expertise in provisioning and managing Azure resources using Terraform within CI/CD pipelines.
- Kubernetes (AKS) with Advanced Networking: Experience in designing AKS clusters with private networking, hybrid connectivity (ExpressRoute, VPN), and security best practices.
- Infrastructure as Code (Terraform, Azure Bicep): Deep understanding of defining and maintaining cloud infrastructure through code.
- Helm Charts: Strong expertise in creating, deploying, and managing Helm-based Kubernetes application deployments.
- Networking & Security: In-depth knowledge of VNET Peering, Private Link, ExpressRoute, Application Gateway, WAF, and hybrid networking.
- CI/CD Pipelines: Experience with building and managing Azure DevOps, GitHub Actions, or Jenkins pipelines for infrastructure and application deployment.
- Monitoring & Logging: Experience with Prometheus, Grafana, Azure Monitor, Log Analytics, and Azure Sentinel.
- Scripting & Automation: Proficiency in Bash, PowerShell, or Python.
- Cost Optimization (FinOps): Strong knowledge of Azure Cost Management and cloud financial governance.
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in cloud engineering, preferably with Azure-focused infrastructure deployment and Kubernetes networking.
- Strong understanding of containerization, orchestration, and microservices architecture.
Certifications (Preferred):
Required:
- Microsoft Certified: Azure Solutions Architect Expert
- Microsoft Certified: Azure DevOps Engineer Expert
Nice to Have (AWS Experience):
- AWS Certified Solutions Architect – Associate or Professional
- AWS Certified DevOps Engineer – Professional
Nice to Have Skills:
- Experience with multi-cloud environments (Azure & AWS).
- Familiarity with container security tools (Aqua Security, Prisma Cloud).
- Experience with GitOps methodologies using tools like ArgoCD or Flux.
- Understanding of serverless computing and event-driven architectures (Azure Functions, Event Grid, Logic Apps).
Benefits
Why Join Us?
- Competitive salary with performance-based incentives.
- Opportunities for professional certifications (e.g., AWS, Kubernetes, Terraform).
- Access to training programs, workshops, and learning resources.
- Comprehensive health insurance coverage for employees and their families.
- Wellness programs and mental health support.
- Hands-on experience with large-scale, innovative cloud solutions.
- Opportunities to work with modern tools and technologies.
- Inclusive, supportive, and team-oriented environment.
- Opportunities to collaborate with global clients and cross-functional teams.
- Regular performance reviews with rewards for outstanding contributions.
- Employee appreciation events and programs.

Golang developer
Experience : 9 to 12 Years
Job Description-
We are looking for a Staff Software Engineer with expertise in building and supporting core SaaS platform technology including identity and access management (IAM), message bus, notifications, and related platform components to join our team. The ideal candidate will have a strong track record of providing technical leadership and implementing SaaS platform technologies.
As a Staff Software Engineer on our SaaS Platform Services Team, you will be a key player in designing, developing, and maintaining critical components of our platform. This role requires expertise in GoLang and in developing, integrating with, and supporting shared platform services and technology such as Identity and Access Management (IAM), API Gateway, Kubernetes, and Kafka.
What you'll do:
- Help define and execute on the technical roadmap for our core SaaS platform technology
- Work closely with engineering teams to integrate these services with the rest of our platform.
- Help the engineering manager hire, train, and mentor engineers and maintain a high-performing engineering culture.
- Collaborate closely with both architecture and engineering teams to review project requirements, technical artifacts, and designs, and ensure that our platform meets the needs of our users.
- Design, develop, and maintain high-quality, scalable, and reliable software components using Golang.
- Collaborate with cross-functional teams to gather and refine requirements for platform services including IAM, API Gateway, notification services, and message bus.
- Architect, deploy, and manage containerized services leveraging Kubernetes.
- Implement best practices for code quality, security, and scalability.
You'll be expected to have:
- Bachelor's or higher degree in Computer Science, Software Engineering, or related field
- Minimum 7-10 years relevant experience in software development including extensive experience in GoLang programming language.
- Strong expertise in container technologies, with a focus on Kubernetes, experience with cloud platforms (e.g., AWS, GCP, Azure), and familiarity with CI/CD pipelines and DevOps practices.
- Proficiency in designing and implementing IAM solutions.
- Experience with developing and maintaining customer facing APIs.
- Deep understanding of Service Oriented Architecture and API standards such as REST and GraphQL.
- Solid understanding of microservices architecture and distributed systems.
- Strong problem-solving skills and ability to troubleshoot complex issues.
- Excellent written and verbal communication skills.
- Ability to work effectively both independently and in a collaborative team environment.
- Prior experience mentoring and providing technical guidance to junior engineers.
Job description
Location: Chennai, India
Experience: 5+ Years
Certification: Kafka Certified (Mandatory); Additional Certifications are a Plus
Job Overview:
We are seeking an experienced DevOps Engineer specializing in GCP Cloud Infrastructure Management and Kafka Administration. The ideal candidate should have 5+ years of experience in cloud technologies, Kubernetes, and Kafka, with a mandatory Kafka certification.
Key Responsibilities:
Cloud Infrastructure Management:
· Manage and update Kubernetes (K8s) on GKE.
· Monitor and optimize K8s resources, including pods, storage, memory, and costs.
· Oversee the general monitoring and maintenance of environments using:
o OpenSearch / Kibana
o KafkaUI
o BGP
o Grafana / Prometheus
Kafka Administration:
· Manage Kafka brokers and ACLs.
· Hands-on experience in Kafka administration (preferably Confluent Kafka).
· Independently debug, optimize, and implement Kafka solutions based on developer and business needs.
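As a hedged sketch of the routine Kafka administration described above, creating a topic with the confluent-kafka AdminClient; the broker address, topic name, and sizing are assumptions:

```python
# Hedged sketch: create a topic with the confluent-kafka AdminClient.
# Broker address, topic name, partitions, and replication factor are assumptions.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker-1:9092"})

futures = admin.create_topics(
    [NewTopic("orders.events", num_partitions=6, replication_factor=3)]
)
for topic, future in futures.items():
    try:
        future.result()  # raises if the broker rejected the request
        print(f"created {topic}")
    except Exception as exc:
        print(f"failed to create {topic}: {exc}")
```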
Other Responsibilities:
· Perform random investigations to troubleshoot and enhance infrastructure.
· Manage PostgreSQL databases efficiently.
· Administer Jenkins pipelines, supporting CI/CD implementation and maintenance.
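And, as one illustration of the day-to-day PostgreSQL housekeeping mentioned above, a small psycopg2 sketch that lists long-running queries; the connection DSN is a placeholder:

```python
# Hedged sketch: list queries active for more than five minutes with psycopg2.
# The connection DSN is a placeholder.
import psycopg2

with psycopg2.connect("dbname=app host=db.internal user=ops") as conn:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT pid, now() - query_start AS runtime, query
            FROM pg_stat_activity
            WHERE state = 'active'
              AND now() - query_start > interval '5 minutes'
            ORDER BY runtime DESC
            """
        )
        for pid, runtime, query in cur.fetchall():
            print(pid, runtime, query[:80])
```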
Required Skills & Qualifications:
· Kafka Certified Engineer (Mandatory).
· 5+ years of experience in GCP DevOps, Cloud Infrastructure, and Kafka Administration.
· Strong expertise in Kubernetes (K8s), Google Kubernetes Engine (GKE), and cloud environments.
· Hands-on experience with monitoring tools like Grafana, Prometheus, OpenSearch, and Kibana.
· Experience managing PostgreSQL databases.
· Proficiency in Jenkins pipeline administration.
· Ability to work independently and collaborate with developers and business stakeholders.
If you are passionate about DevOps, Cloud Infrastructure, and Kafka, and meet the above qualifications, we encourage you to apply!
About the role:
Devtron is seeking a Technical Content Writer to join the team in Gurugram. This is an internship on-site position that requires excellent communication skills in English. The Technical Content Writer will work with the Marketing and Product teams to write technical documentation, product guides, blogs, videos, and other technical marketing content.
What You’ll Do:
● Execute the content calendar by building and delivering on the roadmap set by the management
● Determine the needs of end users of technical documentation and work closely with the Technology, DevOps, and Marketing teams to make products easier to use. This includes feature guides, FAQs, and onboarding materials.
● Create long-form content in the form of blogs and case studies among others
● Work closely with the creative teams to represent the content in the most suitable visual format
● Research target audience and industry-related topics
● Apply SEO in our content marketing strategies
● Optimize existing content using keyword research and SEO guidelines
● Repurpose content for different channels (web, email, social media)
● Collaborate with internal and external stakeholders for content inputs and illustrations
● Audit and update the content regularly to ensure relevance with incremental product feature releases, customer use cases, support issues, and other internal developments
What You’ll Need:
● Bachelor's degree in Computer Science or related field.
● Looking for a fresher from the 2025 batch.
● Familiarity with DevOps domain.
● Excellent skills in grammar, minimalist documentation design, and effective information architecture.
● Great teaching skills that translate into amazing written work.
● Experience with a rapidly scaling start-up environment.
Job Title : Senior AWS Data Engineer
Experience : 5+ Years
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Senior AWS Data Engineer with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
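As a rough, non-authoritative illustration of the pipeline work described above, a minimal PySpark batch step; bucket paths and column names are invented for the example:

```python
# Hedged sketch: a daily revenue aggregate written back to S3 as Parquet.
# Bucket paths and column names are invented for the example.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

orders = spark.read.json("s3://demo-raw/orders/2025-01-01/")
daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)
daily.write.mode("overwrite").parquet("s3://demo-curated/orders_daily/")
```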
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.
Job Description:
The Sr. Software Engineer – DevOps will manage CI/CD pipelines, infrastructure, and containerized environments to ensure reliable deployments.
Experience: 5+ years in DevOps engineering.
Responsibilities:
• Design and implement CI/CD pipelines using Jenkins or GitLab.
• Manage Kubernetes and OpenShift clusters for container orchestration.
• Monitor system performance using Prometheus, Grafana, and ELK Stack.
• Automate infrastructure provisioning with Terraform or Ansible.
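As a hedged illustration of the monitoring stack named above, exposing a custom metric with `prometheus_client` so Prometheus can scrape it; the metric name, label values, and port are placeholders:

```python
# Hedged sketch: expose a deployment counter for Prometheus to scrape.
# Metric name, label values, and port are placeholders.
import random
import time

from prometheus_client import Counter, start_http_server

DEPLOYS = Counter("deployments_total", "Number of deployments", ["status"])

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at http://localhost:9100/metrics
    while True:
        DEPLOYS.labels(status=random.choice(["success", "failed"])).inc()
        time.sleep(5)
```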
Required Skills:
• Expertise in OCP, Kubernetes, Docker, and CI/CD tools.
• Proficiency in IT infrastructure and networking concepts.
Job Type: Full Time
Job Location: Gurgaon, Hyderabad
Job Mode: Work From Office.


Job Title : Software Engineer (.NET, Azure)
Location : Remote
Employment Type : [Full-time/Contract]
Experience Level : 3+ Years
Job Summary :
We are looking for a skilled Software Engineer (.NET, Azure) to develop high-quality, secure, and scalable software solutions. You will collaborate with product owners, security specialists, and engineers to deliver robust applications.
Responsibilities :
- Develop and maintain server-side software using .NET (C#), SQL, and Azure.
- Build and secure RESTful APIs.
- Deploy and manage applications on Azure.
- Ensure version control using Azure DevOps/Git.
- Write clean, maintainable, and scalable code.
- Debug and optimize application performance.
Qualifications :
- 3+ Years of server-side development experience.
- Strong proficiency in .NET, SQL, and Azure.
- Experience with RESTful APIs and DevOps/Git.
- Ability to work collaboratively and independently.
- Familiarity with Scrum methodologies.
DevOps Engineer
Bangalore / Full-Time
Job Description
We build enterprise-scale AI/ML-powered products for Manufacturing, Sustainability, and Supply Chain. We are looking for a DevOps Engineer to help us deploy product updates, identify production issues, and implement integrations that meet customer needs. By joining our team, you will take part in various projects, working with clients to successfully implement and integrate products, software, or systems into their existing infrastructure or cloud.
What You'll Do
• Collaborate with stakeholders to gather and analyze customer needs, ensuring that DevOps strategies align with business objectives.
• Deploy and manage various development, testing, and automation tools, alongside robust IT infrastructure to support our software lifecycle.
• Configure and maintain the necessary tools and infrastructure to support continuous integration, continuous deployment (CI/CD), and other DevOps processes.
• Establish and document processes for development, testing, release, updates, and support to streamline DevOps operations.
• Manage the deployment of software updates and bug fixes, ensuring minimal downtime and seamless integration.
• Develop and implement tools aimed at minimizing errors and enhancing the overall customer experience.
• Promote and develop automated solutions wherever possible to increase efficiency and reduce manual intervention.
• Evaluate, select, and deploy appropriate CI/CD tools that best fit the project requirements and organizational goals.
• Drive ongoing enhancements by building and maintaining robust CI/CD pipelines, ensuring seamless integration, development, and deployment cycles.
• Integrate requested features and services as per customer requirements to enhance product functionality.
• Conduct thorough root cause analyses for production issues, implementing effective solutions to prevent recurrence.
• Investigate and resolve technical problems promptly to maintain system stability and performance.
• Offer expert technical support, including GitOps for automated Kubernetes deployments, Jenkins pipeline automation, VPS setup, and more, ensuring smooth and reliable operations.
Requirements & Skills
• Bachelor’s degree in computer science, MCA or equivalent practical experience
• 4 to 6 years of hands-on experience as a DevOps Engineer
• Proven experience with cloud platforms such as AWS or Azure, including services like EC2, S3, RDS, and Elastic Kubernetes Service (EKS).
• Strong understanding of networking concepts, including VPNs, firewalls, and load balancers.
• Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, Bitbucket Pipeline, or similar
• Experience with configuration management tools such as Ansible, Chef, or Puppet.
• Skilled in using IaC tools like Terraform, AWS CloudFormation, or similar.
• Strong knowledge of Docker and Kubernetes for container management and orchestration.
• Expertise in using Git and managing repositories on platforms like GitHub, GitLab, or Bitbucket.
• Ability to build and maintain automated scripts and tools to enhance DevOps processes.
• Experience with monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and performance.
• Experience with GitOps practices for managing Kubernetes deployments using Flux2 or similar.
• Proficiency in scripting languages such as Python, YAML, Bash, or PowerShell.
• Strong analytical skills to perform root cause analysis and troubleshoot complex technical issues.
• Excellent teamwork and communication skills to work effectively with cross-functional teams and stakeholders.
• Ability to thrive in a fast-paced environment and adapt to changing priorities and technologies.
• Eagerness to stay updated with the latest DevOps trends, tools, and best practices.
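As a minimal, hedged example of the Kubernetes and Python scripting skills listed above, printing pod status with the official Kubernetes Python client; the namespace is a placeholder:

```python
# Hedged sketch: print pod names and phases using the official Kubernetes client.
# Namespace is a placeholder; load_kube_config() reads the local kubeconfig.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("default").items:
    print(pod.metadata.name, pod.status.phase)
```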
Nice to have:
• AWS Certified DevOps Engineer
• Azure DevOps Engineer Expert
• Certified Kubernetes Administrator (CKA)
• Understanding of security compliance standards (e.g., GDPR, HIPAA) and best practices in DevOps.
• Experience with cost management and optimization strategies in cloud environments.
• Knowledge of incident management and response tools and processes.
Job Title : Tech Lead - Data Engineering (AWS, 7+ Years)
Location : Gurugram
Employment Type : Full-Time
Job Summary :
Seeking a Tech Lead - Data Engineering with expertise in AWS to design, build, and optimize scalable data pipelines and data architectures. The ideal candidate will have experience in ETL/ELT, data warehousing, and big data technologies.
Key Responsibilities :
- Build and optimize data pipelines using AWS (Glue, EMR, Redshift, S3, etc.).
- Maintain data lakes & warehouses for analytics.
- Ensure data integrity through quality checks.
- Collaborate with data scientists & engineers to deliver solutions.
Qualifications :
- 7+ Years in Data Engineering.
- Expertise in AWS services, SQL, Python, Spark, Kafka.
- Experience with CI/CD, DevOps practices.
- Strong problem-solving skills.
Preferred Skills :
- Experience with Snowflake, Databricks.
- Knowledge of BI tools (Tableau, Power BI).
- Healthcare/Insurance domain experience is a plus.


About ProjectDiscovery
ProjectDiscovery is an open-source powered cyber security company with a mission to democratize security. With one of the largest open-source security communities in the world, we host contributions from security researchers and engineers to our 20+ open-source projects, including tools like Nuclei and httpx, which have earned us over 100k GitHub stars and millions of downloads.
We’re a passionate, globally distributed team of ~35, driven by the shared mission of revolutionizing the application security landscape. Backed by $25M in funding, we’re looking for talented individuals to join us at our Jaipur office.
Learn more at:
🌐 ProjectDiscovery.io
📂 ProjectDiscovery GitHub
About the Role
We are looking for a software engineer to join our core platform team. The Platform team is a small group of experienced engineers managing ProjectDiscovery infrastructure, improving developer velocity, and ensuring the security, reliability, and scalability of our software. We are highly engaged in shaping the engineering culture at ProjectDiscovery, and operate as “the execution arm” working closely with founders and our CTO.
- Own codebase health, iteration velocity, and developer experience for all of ProjectDiscovery engineering.
- Build and maintain scalable, high-performance web applications.
- Work across the stack, from crafting intuitive front-end interfaces to building robust backend systems.
- Design and implement secure, maintainable APIs for seamless integration of tools and platforms.
What We’re Looking For:
- Expert with JavaScript (Next.js, React) and backend technologies like Node.js and Go.
- Experience with cloud services (Vercel, AWS, GCP) and modern DevOps practices.
- Knowledge of RESTful APIs and database systems like PostgreSQL, Elastic or MongoDB.
- Self-motivated and passionate about solving complex problems at scale.
- Operating and optimizing CI/CD systems to improve developer velocity
Why Join Us?
- Competitive compensation package and stock options.
- Inclusive Healthcare Package.
- Learn and grow - we provide mentorship and send you to events that help you build your network and technical skills.
- Learn with intense innovation and software shipping cycles. We ship multiple times a week and push major releases a couple of times a month.
Our Interview Process
We value efficiency and technical excellence in our hiring process:
- Application Review: Your application is reviewed by a technical team member.
- Initial Screening: A short call to understand your background, goals, and fit.
- Technical Rounds: a coding assessment where you solve challenges using our tech stack.
- Create PR: Develop or enhance a feature related to one of our open-source tools.
- Final Round: Showcase your work, share your vision, and discuss how you can contribute to ProjectDiscovery at our office in Jaipur.