50+ Windows Azure Jobs in India



About the Company – Gruve
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.
As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.
Why Gruve
At Gruve, we foster a culture of:
- Innovation, collaboration, and continuous learning
- Diversity and inclusivity, where everyone is encouraged to thrive
- Impact-focused work — your ideas will shape the products we build
We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.
Position Summary
We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.
You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.
Key Responsibilities
- Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
- Build and maintain automation to track:
- Physical Assets: Servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
- Virtual Assets: Load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
- Cloud Assets: Public cloud services, process registry, database resources.
- Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
- Optimize performance and scalability to handle large-scale asset data in real-time.
- Document system architecture, implementation, and usage.
- Generate reports for compliance and auditing.
- Ensure integration with existing systems for streamlined asset management.
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related field
- 3–6 years of experience in software development
- Strong proficiency in Golang and Python
- Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
- Deep understanding of automation solutions and parallel computing principles
Preferred Qualifications
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills


Roles & Responsibilities:
We are looking for a Data Scientist to join the Data Science Client Services team and continue our success in identifying high-quality target audiences that generate profitable marketing returns for our clients. We are looking for experienced data science, machine learning, and MLOps practitioners to design, build, and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Manager Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference, and scoring
- Oversight of team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
Qualifications:
- Master’s degree in a relevant quantitative or applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 12+ years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


Full Stack Developer
Location: Hyderabad
Experience: 7+ Years
Type: BCS - Business Consulting Services
RESPONSIBILITIES:
* Strong programming skills in Node JS [Must], React JS, Android and Kotlin [Must]
* Hands-on experience in UI development with a good sense of UX.
* Hands-on experience in database design and management.
* Hands-on experience creating and maintaining backend frameworks for mobile applications.
* Hands-on development experience on cloud-based platforms like GCP/Azure/AWS.
* Ability to manage and provide technical guidance to the team.
* Strong experience in designing APIs using RAML, Swagger, etc.
* Service definition development.
* API standards, security, and policy definition and management.
REQUIRED EXPERIENCE:
* Bachelor’s and/or master's degree in computer science or equivalent work experience
* Excellent analytical, problem solving, and communication skills.
* 7+ years of software engineering experience in a multi-national company
* 6+ years of development experience in Kotlin, Node and React JS
* 3+ Year(s) experience creating solutions in native public cloud (GCP, AWS or Azure)
* Experience with Git or similar version control system, continuous integration
* Proficiency in automated unit test development practices and design methodologies
* Fluent English
Job Summary
We are seeking a skilled Infrastructure Engineer with 3 to 5 years of experience in Kubernetes to join our team. The ideal candidate will be responsible for managing, scaling, and securing our cloud infrastructure, ensuring high availability and performance. You will work closely with DevOps, SREs, and development teams to optimize our containerized environments and automate deployments.
Key Responsibilities:
- Deploy, manage, and optimize Kubernetes clusters in cloud and/or on-prem environments.
- Automate infrastructure provisioning and management using Terraform, Helm, and CI/CD pipelines.
- Monitor system performance and troubleshoot issues related to containers, networking, and storage.
- Ensure high availability, security, and scalability of Kubernetes workloads.
- Manage logging, monitoring, and alerting using tools like Prometheus, Grafana, and ELK stack.
- Optimize resource utilization and cost efficiency within Kubernetes clusters.
- Implement RBAC, network policies, and security best practices for Kubernetes environments.
- Work with CI/CD pipelines (Jenkins, ArgoCD, GitHub Actions, etc.) to streamline deployments.
- Collaborate with development teams to containerize applications and enhance performance.
- Maintain disaster recovery and backup strategies for Kubernetes workloads.
Required Skills & Qualifications:
- 3 to 5 years of experience in infrastructure and cloud technologies.
- Strong hands-on experience with Kubernetes (K8s), Helm, and container orchestration.
- Experience with cloud platforms (AWS, GCP, Azure) and managed Kubernetes services (EKS, GKE, AKS).
- Proficiency in Terraform, Ansible, or other Infrastructure as Code (IaC) tools.
- Knowledge of Linux system administration, networking, and security.
- Experience with Docker, container security, and runtime optimizations.
- Hands-on experience in monitoring, logging, and observability tools.
- Scripting skills in Bash, Python, or Go for automation.
- Good understanding of CI/CD pipelines and deployment automation.
- Strong troubleshooting skills and experience handling production incidents.
We’re looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows.
- Collaborate with teams to gather requirements and build scalable solutions.
- Ensure data governance, security, and optimal performance of systems.
- Mentor junior engineers and drive end-to-end project delivery.
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects.
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms.
- Expertise in big data tools (e.g., Apache Spark, Kafka).
- Excellent communication skills and leadership abilities.
Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.

Job Title: Full Stack Developer
Job Description:
We are looking for a skilled Full Stack Developer with hands-on experience in building scalable web applications using .NET Core and ReactJS. The ideal candidate will have a strong understanding of backend development, cloud services, and modern frontend technologies.
Key Skills:
- .NET Core, C#
- SQL Server
- React JS
- Azure (Functions, Services)
- Entity Framework
- Microservices Architecture
Responsibilities:
- Design, develop, and maintain full-stack applications
- Build scalable microservices using .NET Core
- Implement and consume Azure Functions and Services
- Develop efficient database queries with SQL Server
- Integrate front-end components using ReactJS
- Collaborate with cross-functional teams to deliver high-quality solutions


Job Profile : Python Developer
Job Location : Ahmedabad, Gujarat - On site
Job Type : Full time
Experience - 1-3 Years
Key Responsibilities:
Design, develop, and maintain Python-based applications and services.
Collaborate with cross-functional teams to define, design, and ship new features.
Write clean, maintainable, and efficient code following best practices.
Optimize applications for maximum speed and scalability.
Troubleshoot, debug, and upgrade existing systems.
Integrate user-facing elements with server-side logic.
Implement security and data protection measures.
Work with databases (SQL/NoSQL) and integrate data storage solutions.
Participate in code reviews to ensure code quality and share knowledge with the team.
Stay up-to-date with emerging technologies and industry trends.
Requirements:
1-3 years of professional experience in Python development.
Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
Experience with RESTful APIs and web services.
Proficiency in working with databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
Experience with version control systems (e.g., Git).
Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
Understanding of containerization tools like Docker and orchestration tools like Kubernetes is good to have.
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
Good to Have:
Experience with data analysis and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
Knowledge of asynchronous programming and event-driven architecture.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with microservices architecture.
Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
Hands-on experience with RAG and LLM model integration would be a plus.


About Us
We are seeking a talented .NET Developer to join our team and work with one of our key clients on the development of a cloud-based SaaS product.
What You’ll Do
- Collaborate closely with the client’s Product Team to brainstorm ideas, suggest product flows, and influence technical direction
- Develop and maintain robust, scalable, and maintainable code following SOLID principles, TDD/BDD, and clean architecture standards
- Work with Azure, C#/.NET, and React to build full-stack features and cloud-based services
- Design and implement microservices using CQRS, DDD, and other modern architectural patterns
- Manage and interact with SQL Server and other relational/non-relational databases
- Conduct code reviews and evaluate pull requests from team members
- Mentor junior developers and contribute to a strong engineering culture
- Take part in sprint reviews and agile ceremonies
- Analyze legacy systems for refactoring, modernization, and platform evolution
What We’re Looking For
- Strong hands-on experience with .NET (C#) and Azure Cloud Services
- Working knowledge of React for frontend development
- Deep understanding of Domain-Driven Design (DDD), Onion Architecture, CQRS, and Microservices Architecture
- Expertise in SOLID, OOP, Clean Code, KISS, and DRY principles
- Familiarity with both relational (SQL Server) and non-relational databases
- Experience with TDD/BDD testing approaches and scalable, testable code design
- Strong communication and collaboration skills
- Passion for mentoring and uplifting fellow engineers
Nice to Have
- Experience with event-driven architecture
- Exposure to containerization (Docker, Kubernetes)
- Familiarity with DevOps pipelines and CI/CD on Azure


About the Role:
- We are looking for a highly skilled and experienced Senior Python Developer to join our dynamic team based in Manyata Tech Park, Bangalore. The ideal candidate will have a strong background in Python development, object-oriented programming, and cloud-based application development. You will be responsible for designing, developing, and maintaining scalable backend systems using modern frameworks and tools.
- This role is hybrid, with a strong emphasis on working from the office to collaborate effectively with cross-functional teams.
Key Responsibilities:
- Design, develop, test, and maintain backend services using Python.
- Develop RESTful APIs and ensure their performance, responsiveness, and scalability.
- Work with popular Python frameworks such as Django or Flask for rapid development.
- Integrate and work with cloud platforms (AWS, Azure, GCP or similar).
- Collaborate with front-end developers and other team members to establish objectives and design cohesive code.
- Apply object-oriented programming principles to solve real-world problems efficiently.
- Implement and support event-driven architectures where applicable.
- Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
- Write clean, maintainable, and reusable code with proper documentation.
- Contribute to system architecture and code review processes.
Required Skills and Qualifications:
- Minimum of 5 years of hands-on experience in Python development.
- Strong understanding of Object-Oriented Programming (OOP) and Data Structures.
- Proficiency in building and consuming REST APIs.
- Experience working with at least one cloud platform such as AWS, Azure, or Google Cloud Platform.
- Hands-on experience with Python frameworks like Django, Flask, or similar.
- Familiarity with event-driven programming and asynchronous processing.
- Excellent problem-solving, debugging, and troubleshooting skills.
- Strong communication and collaboration abilities to work effectively in a team environment.

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a Lead SDET with 8-10 years of experience to play a pivotal role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is an exciting opportunity to work on cutting-edge performance testing strategies and drive impactful initiatives across the organization.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Review application architecture and suggest improvements to enhance scalability
- Leverage AI at appropriate layers to improve efficiency and drive positive business outcomes
- Drive performance testing initiatives across the organization and ensure seamless execution
- Automate the capturing of performance metrics and generate performance trend reports
- Research, evaluate, and conduct PoCs for new tools and solutions
- Collaborate with developers and architects to enhance frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Ensure high availability and reliability of applications and services
Requirements:
- 6-9 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc.
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimizing frontend performance and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
- Experience in increasing application/service availability from 99.9% (three 9s) to 99.99% or higher (four/five 9s)
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.
Overview
As an engineer in the Service Operations division, you will be responsible for the day-to-day management of the systems and services that power client products. Working with your team, you will ensure daily tasks and activities are successfully completed and where necessary, use standard operating procedures and knowledge to resolve any faults/errors encountered.
Job Description
Key Tasks and Responsibilities:
Ensure daily tasks and activities have completed successfully. Where this is not the case, undertake recovery and remediation steps.
Undertake patching and upgrade activities in support of ParentPay compliance programs: PCI DSS, ISO 27001, and Cyber Essentials+.
Action requests from the ServiceNow work queue that have been allocated to your relevant resolver group. These include incidents, problems, changes and service requests.
Investigate alerts and events detected from the monitoring systems that indicate a change in component health.
Create and maintain support documentation in the form of departmental wiki and ServiceNow knowledge articles that allow for continual improvement of fault detection and recovery times.
Work with colleagues to identify and champion the automation of all manual interventions undertaken within the team.
Attend and complete all mandatory training courses.
Engage and own the transition of new services into Service Operations.
Participate in the out of hours on call support rota.
Qualifications and Experience:
Experience working in an IT service delivery or support function OR
MBA or Degree in Information Technology or Information Security.
Experience working with Microsoft technologies.
Excellent communication skills developed working in a service centric organisation.
Ability to interpret fault descriptions provided by customers or internal escalations and translate these into resolutions.
Ability to manage and prioritise own workload.
Experience working within Education Technology would be an advantage.
Technical knowledge:
Advanced automation scripting using Terraform and PowerShell.
Knowledge of Bicep and Ansible advantageous.
Advanced Microsoft Active Directory configuration and support.
Microsoft Azure and AWS cloud hosting platform administration.
Advanced Microsoft SQL server experience.
Windows Server and desktop management and configuration.
Microsoft IIS web services administration and configuration.
Advanced management of data and SQL backup solutions.
Advanced scripting and automation capabilities.
Advanced knowledge of Azure analytics and KQL.
Skills & Requirements
IT Service Delivery, Information Technology, Information Security, Microsoft Technologies, Communication Skills, Fault Interpretation, Workload Prioritization, Automation Scripting, Terraform, PowerShell, Microsoft Active Directory, Microsoft Azure, AWS, Microsoft SQL Server, Windows Server, Windows Desktop Configuration, Microsoft IIS, Data Backup Management, SQL Backup Solutions, Scripting, Azure Analytics, KQL.

About the Role:
HighLevel Inc. is looking for a SDET III with 5-6 years of experience to play a crucial role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is a great opportunity to work on cutting-edge performance testing strategies and contribute to the success of our products.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Develop test strategies based on customer behavior to ensure high-performing applications
- Automate the capturing of performance metrics and generate performance trend reports
- Collaborate with developers and architects to optimize frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Research, evaluate, and conduct PoCs for new tools and solutions
- Ensure high availability and reliability of applications and services
Requirements:
- 4-7 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc.
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimizing frontend performance and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively


JD:
The Senior Software Engineer works closely with our development team, product manager, dev-ops and business analysts to build our SaaS platform to support efficient, end-to-end business processes across the industry using modern flexible technologies such as GraphQL, Kubernetes and React.
Technical Skills: C#, Angular, Azure (preferably with .NET)
Responsibilities
· Develops and maintains back-end and front-end applications and cloud services using C#, Angular, and Azure
· Accountable for delivering high quality results
· Mentors less experienced members of the team
· Thrives in a test-driven development organization with high quality standards
· Contributes to architecture discussions as needed
· Collaborates with Business Analyst to understand user stories and requirements to meet functional needs
· Supports product team’s efforts to produce product roadmap by providing estimates for enhancements
· Supports user acceptance testing and user story approval processes on development items
· Participates in sessions to resolve product issues
· Escalates high priority issues to appropriate internal stakeholders as necessary and appropriate
· Maintains a professional, friendly, open, approachable, positive attitude
Location : Bangalore
Ideal Work Experience and Skills
· 7–15 years’ experience working in a software development environment
· Prefer Bachelor’s degree in software development or related field
· Development experience with Angular and .NET is beneficial but not required
· Highly self-motivated and able to work effectively with virtual teams of diverse backgrounds
· Strong desire to learn and grow professionally
· A track record of following through on commitments; Excellent planning, organizational, and time management skills
Job Title: Application Developer – Cloud Fullstack
Bands: 7A / 7B
Experience Required: 5 – 9 Years
Relevant Experience: 5 – 9 Years
Location: Chennai Only
Employment Type: Full-time
About the Role:
We are seeking a skilled and experienced Application Developer – Cloud Fullstack to join our dynamic team in Chennai. The ideal candidate will have a strong foundation in Java, Microservices, REST API Development, SQL, and Spring Boot. You will play a key role in designing, developing, and maintaining scalable applications, with opportunities to contribute to cloud-based modernization initiatives.
Key Responsibilities:
- Design, develop, and maintain high-performance, scalable, and secure applications using Java and Spring Boot.
- Develop and consume RESTful APIs for seamless service integration.
- Implement microservices architecture and ensure efficient communication between services.
- Write optimized and efficient SQL queries for data operations.
- Participate in the full SDLC, including requirements gathering, design, coding, testing, deployment, and support.
- Collaborate with cross-functional teams to ensure high-quality deliverables.
- Stay updated with emerging technologies and propose integration where applicable.
Mandatory Skills:
- Java (5+ years of experience)
- Spring Boot
- Microservices Architecture
- REST API Development
- SQL (Strong understanding of relational databases)
Good to Have Skills:
- Exposure to Cloud platforms (e.g., AWS, Azure, GCP)
- Understanding of CI/CD pipelines, containerization (Docker/Kubernetes)
- Experience with DevOps practices and cloud-native development
Ideal Candidate Attributes:
- Strong analytical and problem-solving skills
- Ability to work independently and as part of a team
- Excellent communication and interpersonal skills
- Proactive attitude with a passion for learning and growth

We Help Our Customers Build Great Products.
Innominds is a trusted innovation acceleration partner focused on designing, developing and delivering technology solutions for specialized practices in Big Data & Analytics, Connected Devices, and Security, helping enterprises with their digital transformation initiatives. We built these practices on top of our foundational services of innovation, like UX/UI, application development and testing.
Over 1,000 people strong, we are a pioneer at the forefront of technology and engineering R&D, priding ourselves on being forward thinkers who anticipate market changes to help our clients stay relevant and competitive.
About the Role:
We are looking for a seasoned Data Engineering Lead to help shape and evolve our data platform. This role is both strategic and hands-on—requiring leadership of a team of data engineers while actively contributing to the design, development, and maintenance of robust data solutions.
Key Responsibilities:
- Lead and mentor a team of Data Engineers to deliver scalable and reliable data solutions
- Own the end-to-end architecture and development of data pipelines, data lakes, and warehouses
- Design and implement batch data processing frameworks to support large-scale analytics
- Define and enforce best practices in data modeling, data quality, and system performance
- Collaborate with cross-functional teams to understand data requirements and deliver insights
- Ensure smooth and secure data ingestion, transformation, and export processes
- Stay current with industry trends and apply them to drive improvements in the platform
Requirements
- Strong programming skills in Python
- Deep expertise in Apache Spark, Big Data ecosystems, and Airflow
- Hands-on experience with Azure cloud services and data engineering tools
- Strong understanding of data architecture, data modeling, and data governance practices
- Proven ability to design scalable data systems and enterprise-level solutions
- Strong analytical mindset and problem-solving skills
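The pipeline orchestration that tools like Airflow provide reduces, at its core, to executing tasks in dependency order. As a toy illustration only (this is not Airflow itself, and the task names are invented), the standard library's `graphlib` can express a small batch DAG:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: task name -> set of upstream dependencies.
pipeline = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "load_warehouse": {"aggregate"},
}

def run_pipeline(dag, tasks):
    """Execute tasks in topological (dependency) order; return results and the order used."""
    order = list(TopologicalSorter(dag).static_order())
    return [tasks[name]() for name in order], order

results, order = run_pipeline(pipeline, {
    "extract": lambda: "rows=100",
    "clean": lambda: "rows=98",
    "aggregate": lambda: "groups=5",
    "load_warehouse": lambda: "loaded",
})
```

A production orchestrator adds scheduling, retries, and backfills on top of exactly this ordering guarantee.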
For our company to deliver world-class products and services, our business depends on recruiting and hiring the best and brightest from around the globe. We are looking for engineers, designers, and creative problem solvers who stand out from the crowd yet are humble enough to keep learning and growing, are eager to tackle complex problems, and can keep up with the demanding pace of our business. We are looking for YOU!
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
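Evaluating a fine-tuned extraction model of the kind described above often starts with exact-match accuracy over a labeled set. A minimal sketch, where `stub_model` and the `permit_id` field are hypothetical placeholders for a real foundation model and document schema:

```python
def evaluate_extraction(model, labeled_examples):
    """Exact-match accuracy of a field-extraction model over labeled documents."""
    correct = sum(1 for doc, expected in labeled_examples if model(doc) == expected)
    return correct / len(labeled_examples)

# Hypothetical stub standing in for a fine-tuned vision/extraction model.
def stub_model(doc):
    return doc.get("permit_id")

examples = [
    ({"permit_id": "A-1"}, "A-1"),
    ({"permit_id": "B-2"}, "B-2"),
    ({"permit_id": None}, "C-3"),  # the stub misses this one
]
accuracy = evaluate_extraction(stub_model, examples)
```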
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP), containerization, and DevOps tooling (Docker, Kubernetes)
- Experience tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure—spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
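One common monitoring-and-maintenance pattern behind the SLA responsibility above is a promotion gate: a candidate model only replaces production if its metrics clear agreed thresholds. A minimal sketch, with hypothetical metric names (`accuracy`, `p95_latency_ms`) and thresholds:

```python
def promote_candidate(prod_metrics, cand_metrics,
                      min_gain=0.0, max_latency_regression=0.10):
    """Decide whether a candidate model may replace production.

    Requires accuracy not to drop below production + `min_gain`, and p95
    latency not to regress by more than `max_latency_regression` (a fraction).
    """
    if cand_metrics["accuracy"] < prod_metrics["accuracy"] + min_gain:
        return False
    if cand_metrics["p95_latency_ms"] > prod_metrics["p95_latency_ms"] * (1 + max_latency_regression):
        return False
    return True

ok = promote_candidate(
    {"accuracy": 0.91, "p95_latency_ms": 120},
    {"accuracy": 0.93, "p95_latency_ms": 125},
)
rejected = promote_candidate(
    {"accuracy": 0.91, "p95_latency_ms": 120},
    {"accuracy": 0.90, "p95_latency_ms": 110},
)
```

In an Azure DevOps pipeline, a check like this would run as a stage between model training and deployment.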
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning.
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to help our customers make more intelligent, data-driven decisions in support of their business strategies. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
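A canary release, one of the zero-downtime strategies listed above, ramps traffic to the new version in stages and routes each request deterministically. A minimal sketch (the percentages and hash-bucket scheme are illustrative, not any specific tool's behaviour):

```python
import zlib

def canary_steps(start=5, factor=2, cap=100):
    """Traffic percentages for a staged canary rollout, e.g. 5, 10, 20, 40, 80, 100."""
    steps, pct = [], start
    while pct < cap:
        steps.append(pct)
        pct *= factor
    steps.append(cap)
    return steps

def route(request_id, canary_pct):
    """Deterministically route a request to canary or stable via a stable hash bucket."""
    bucket = zlib.crc32(request_id.encode()) % 100
    return "canary" if bucket < canary_pct else "stable"
```

Because the bucket is derived from the request (or user) id, a given user stays on the same version for the duration of a rollout step.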
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (cloud cost optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to help our customers make more intelligent, data-driven decisions in support of their business strategies. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.
Key Roles & Responsibilities:
- Lead the design and development of scalable and secure software solutions using Java.
- Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
- Provide technical leadership, mentoring, and guidance to the development team.
- Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
- Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
- Design and implement security analytics, automation workflows and ITSM integrations.
- Drive continuous improvements in engineering processes, tools, and technologies.
- Troubleshoot complex technical issues and lead incident response for critical production systems.
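SIEM-style security analytics of the kind mentioned above often reduces to correlation rules over event streams, such as flagging repeated failed logins inside a sliding window. An illustrative sketch with invented event tuples (a real SIEM would apply this over ingested, normalized logs):

```python
from collections import defaultdict, deque

def detect_brute_force(events, threshold=5, window_s=60):
    """Flag users reaching `threshold` failed logins within a sliding window.

    `events`: iterable of (timestamp_s, user, outcome), sorted by time.
    Returns a list of (timestamp, user) alerts.
    """
    recent = defaultdict(deque)
    alerts = []
    for ts, user, outcome in events:
        if outcome != "failure":
            continue
        q = recent[user]
        q.append(ts)
        while q and ts - q[0] > window_s:  # drop failures outside the window
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ts, user))
    return alerts

events = [(i, "alice", "failure") for i in range(5)] + [(100, "bob", "failure")]
alerts = detect_brute_force(events, threshold=5, window_s=60)
```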
Basic Qualifications:
- A bachelor’s or master’s degree in computer science, electronics engineering or a related field
- 8-10 years of software development experience, with expertise in Java.
- Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
- In-depth understanding of microservices architecture, APIs, and distributed systems.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
- Strong problem-solving skills and ability to work in an agile, fast-paced environment.
- Excellent communication and leadership skills, with a track record of mentoring engineers.
Preferred Qualifications:
- Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
- Knowledge of zero-trust security models and secure API development.
- Hands-on experience with machine learning or AI-driven security analytics.

Immediate Joiners Preferred. Notice Period - Immediate to 30 Days
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
As an experienced and skilled Full-stack PHP Developer on our dynamic, international, cross-functional team, you will be responsible for the design, development, and deployment of modern, high-quality software solutions and applications.
Responsibilities:
Design, develop, and maintain the application.
Write clean, efficient, and reusable code.
Implement new features and functionality based on business requirements.
Participate in system and application architecture discussions.
Create technical designs and specifications for new features or enhancements.
Write and execute unit tests to ensure code quality.
Debug and resolve technical issues and software defects.
Conduct code reviews to ensure adherence to best practices.
Identify and fix vulnerabilities to ensure application integrity.
Work with other developers to ensure seamless integration of backend and frontend elements.
Collaborate with DevOps teams for deployment and scaling.
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer with a focus on PHP and web technologies.
Strong experience with PHP 8 and 7, Symfony Framework, AWS / Azure or GCP, GitLab and Angular and / or React. Additional technologies like Java, Python, Go, Kotlin, Rust or similar are welcome.
Experience with test-driven development and QA tools such as PHPStan and Deptrac.
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Full-stack development, PHP 8, PHP 7, Symfony Framework, AWS, Azure, GCP, GitLab, Angular, React, Java, Python, Go, Kotlin, Rust, Test-driven development, QA tools, PHPStan, Deptrac, Problem-solving, Debugging, Communication, Collaboration, System architecture, Technical design, Unit testing, Code review, Deployment, Scalability.


Job Description: Machine Learning Engineer – LLM and Agentic AI
Location: Ahmedabad
Experience: 4+ years
Employment Type: Full-Time
________________________________________
About Us
Join a forward-thinking team at Tecblic, where innovation meets cutting-edge technology. We specialize in delivering AI-driven solutions that empower businesses to thrive in the digital age. If you're passionate about LLMs, machine learning, and pushing the boundaries of Agentic AI, we’d love to have you on board.
________________________________________
Key Responsibilities
• Research and Development: Research, design, and fine-tune machine learning models, with a focus on Large Language Models (LLMs) and Agentic AI systems.
• Model Optimization: Fine-tune and optimize pre-trained LLMs for domain-specific use cases, ensuring scalability and performance.
• Integration: Collaborate with software engineers and product teams to integrate AI models into customer-facing applications and platforms.
• Data Engineering: Perform data preprocessing, pipeline creation, feature engineering, and exploratory data analysis (EDA) to prepare datasets for training and evaluation.
• Production Deployment: Design and implement robust model deployment pipelines, including monitoring and managing model performance in production.
• Experimentation: Prototype innovative solutions leveraging cutting-edge techniques like reinforcement learning, few-shot learning, and generative AI.
• Technical Mentorship: Mentor junior team members on best practices in machine learning and software engineering.
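The few-shot prompting mentioned under Experimentation amounts to assembling demonstration pairs ahead of the query. A minimal, model-agnostic sketch (the instruction and examples are invented; the resulting string would be sent to whatever LLM is in use):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt from (input, output) demonstration pairs."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts.append(f"Input: {query}")
    parts.append("Output:")  # the model completes from here
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("I loved it", "positive"), ("Terrible service", "negative")],
    "The product works well",
)
```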
________________________________________
Requirements
Core Technical Skills:
• Proficiency in Python for machine learning and data science tasks.
• Expertise in ML frameworks and libraries like PyTorch, TensorFlow, Hugging Face, Scikit-learn, or similar.
• Solid understanding of Large Language Models (LLMs) such as GPT, T5, BERT, or Bloom, including fine-tuning techniques.
• Experience working on NLP tasks such as text classification, entity recognition, summarization, or question answering.
• Knowledge of deep learning architectures, such as transformers, RNNs, and CNNs.
• Strong skills in data manipulation using tools like Pandas, NumPy, and SQL.
• Familiarity with cloud services like AWS, GCP, or Azure, and experience deploying ML models using tools like Docker, Kubernetes, or serverless functions.
Additional Skills (Good to Have):
• Exposure to Agentic AI (e.g., autonomous agents, decision-making systems) and practical implementation.
• Understanding of MLOps tools (e.g., MLflow, Kubeflow) to streamline workflows and ensure production reliability.
• Experience with generative AI models (GANs, VAEs) and reinforcement learning techniques.
• Hands-on experience in prompt engineering and few-shot/fine-tuned approaches for LLMs.
• Familiarity with vector databases like Pinecone, Weaviate, or FAISS for efficient model retrieval.
• Version control (Git) and familiarity with collaborative development practices.
General Skills:
• Strong analytical and mathematical background, including proficiency in linear algebra, statistics, and probability.
• Solid understanding of algorithms and data structures to solve complex ML problems.
• Ability to handle and process large datasets using distributed frameworks like Apache Spark or Dask (optional but useful).
________________________________________
Soft Skills:
• Excellent problem-solving and critical-thinking abilities.
• Strong communication and collaboration skills to work with cross-functional teams.
• Self-motivated, with a continuous learning mindset to keep up with emerging technologies.


Job Responsibilities:
- Design, develop, test, and maintain high-performance web applications and backend services using Python.
- Build scalable, secure, and reliable backend systems and APIs.
- Optimize and debug existing codebases to enhance performance and maintainability.
- Collaborate closely with cross-functional teams to gather requirements and deliver high-quality solutions.
- Mentor junior developers, conduct code reviews, and uphold best coding practices.
- Write clear, comprehensive technical documentation for internal and external use.
- Stay current with emerging technologies, tools, and industry trends to continually improve development processes.
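Optimizing an existing backend often starts with caching. A minimal TTL-based memoization decorator in standard-library Python, as an in-process stand-in for what a store like Redis would do across processes (names and TTLs are illustrative):

```python
import time
from functools import wraps

def ttl_cache(ttl_s=60, clock=time.monotonic):
    """Memoize a function's results per argument tuple for `ttl_s` seconds."""
    def decorator(fn):
        store = {}
        @wraps(fn)
        def wrapper(*args):
            now = clock()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_s:
                return hit[1]  # fresh cached value
            value = fn(*args)
            store[args] = (now, value)
            return value
        return wrapper
    return decorator

calls = {"n": 0}

@ttl_cache(ttl_s=60)
def expensive_lookup(key):
    calls["n"] += 1  # count real (non-cached) executions
    return key.upper()
```

In a Django or Flask view, the same idea is usually applied via the framework's cache layer rather than hand-rolled.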
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience in Python development.
- Strong expertise with Python web frameworks like Django and/or Flask.
- In-depth understanding of software design principles, architecture, and design patterns.
- Proven experience working with both SQL and NoSQL databases.
- Solid debugging and problem-solving capabilities.
- Effective communication and collaboration skills, with a team-first mindset.
Technical Skills:
- Programming: Python (Advanced)
- Web Frameworks: Django, Flask
- Databases: PostgreSQL, MySQL, MongoDB, Redis
- Version Control: Git
- API Development: RESTful APIs
- Containerization & Orchestration: Docker, Kubernetes
- Cloud Platforms: AWS or Azure (hands-on experience preferred)
- DevOps: CI/CD pipelines (e.g., Jenkins, GitHub Actions)
About SAP Fioneer
Innovation is at the core of SAP Fioneer. We were spun out of SAP to drive agility, innovation, and delivery in financial services. With a foundation in cutting-edge technology and deep industry expertise, we elevate financial services through digital business innovation and cloud technology.
A rapidly growing global company with a lean and innovative team, SAP Fioneer offers an environment where you can accelerate your future.
Product Technology Stack
- Languages & Tooling: PowerShell, Microsoft Graph PowerShell (MgGraph), Git
- Storage & Databases: Azure Storage, Azure Databases
Role Overview
As a Senior Cloud Solutions Architect / DevOps Engineer, you will be part of our cross-functional IT team in Bangalore, designing, implementing, and managing sophisticated cloud solutions on Microsoft Azure.
Key Responsibilities
Architecture & Design
- Design and document architecture blueprints and solution patterns for Azure-based applications.
- Implement hierarchical organizational governance using Azure Management Groups.
- Architect modern authentication frameworks using Azure AD/EntraID, SAML, OpenID Connect, and Azure AD B2C.
Development & Implementation
- Build closed-loop, data-driven DevOps architectures using Azure Insights.
- Apply code-driven administration practices with PowerShell, MgGraph, and Git.
- Deliver solutions using Infrastructure as Code (IaC), CI/CD pipelines, GitHub Actions, and Azure DevOps.
- Develop IAM standards with RBAC and EntraID.
Leadership & Collaboration
- Provide technical guidance and mentorship to a cross-functional Scrum team operating in sprints with a managed backlog.
- Support the delivery of SaaS solutions on Azure.
Required Qualifications & Skills
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in cloud solutions architecture and DevOps engineering.
- Extensive expertise in Azure services, core web technologies, and security best practices.
- Hands-on experience with IaC, CI/CD, Git, and pipeline automation tools.
- Strong understanding of IAM, security best practices, and governance models in Azure.
- Experience working in Scrum-based environments with backlog management.
- Bonus: Experience with Jenkins, Terraform, Docker, or Kubernetes.
Benefits
- Work with some of the brightest minds in the industry on innovative projects shaping the financial sector.
- Flexible work environment encouraging creativity and innovation.
- Pension plans, private medical insurance, wellness cover, and additional perks like celebration rewards and a meal program.
Diversity & Inclusion
At SAP Fioneer, we believe in the power of innovation that every employee brings and are committed to fostering diversity in the workplace.


Key Responsibilities:
- Design, build, and maintain scalable, real-time data pipelines using Apache Flink (or Apache Spark).
- Work with Apache Kafka (mandatory) for real-time messaging and event-driven data flows.
- Build data infrastructure on Lakehouse architecture, integrating data lakes and data warehouses for efficient storage and processing.
- Implement data versioning and cataloging using Apache Nessie, and optimize datasets for analytics with Apache Iceberg.
- Apply advanced data modeling techniques and performance tuning using Apache Doris or similar OLAP systems.
- Orchestrate complex data workflows using DAG-based tools like Prefect, Airflow, or Mage.
- Collaborate with data scientists, analysts, and engineering teams to develop and deliver scalable data solutions.
- Ensure data quality, consistency, performance, and security across all pipelines and systems.
- Continuously research, evaluate, and adopt new tools and technologies to improve our data platform.
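Tumbling-window aggregation, the core abstraction behind Flink and Spark streaming jobs like those above, can be illustrated in plain Python (the event tuples are invented; a real job would consume them from Kafka and emit results continuously):

```python
from collections import Counter

def tumbling_window_counts(events, window_s):
    """Count events per key in fixed (tumbling) windows.

    `events`: iterable of (timestamp_s, key).
    Returns {window_start_s: Counter({key: count})}.
    """
    windows = {}
    for ts, key in events:
        start = (ts // window_s) * window_s  # align to window boundary
        windows.setdefault(start, Counter())[key] += 1
    return windows

events = [(1, "click"), (2, "view"), (3, "click"), (61, "click")]
out = tumbling_window_counts(events, window_s=60)
```

What the streaming engines add on top is incremental state, watermarks for late data, and fault-tolerant checkpointing.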
Skills & Qualifications:
- 3–6 years of experience in data engineering, building scalable data pipelines and systems.
- Strong programming skills in Python, Go, or Java.
- Hands-on experience with stream processing frameworks – Apache Flink (preferred) or Apache Spark.
- Mandatory experience with Apache Kafka for stream data ingestion and message brokering.
- Proficiency with at least one DAG-based orchestration tool like Airflow, Prefect, or Mage.
- Solid understanding and hands-on experience with SQL and NoSQL databases.
- Deep understanding of data lakehouse architectures, including internal workings of data lakes and data warehouses, not just usage.
- Experience working with at least one cloud platform, preferably AWS (GCP or Azure also acceptable).
- Strong knowledge of distributed systems, data modeling, and performance optimization.
Nice to Have:
- Experience with Apache Doris or other MPP/OLAP databases.
- Familiarity with CI/CD pipelines, DevOps practices, and infrastructure-as-code in data workflows.
- Exposure to modern data version control and cataloging tools like Apache Nessie.
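Orchestrators such as Airflow, Prefect, and Mage all reduce to the same core idea: run tasks in an order that respects a dependency graph. A minimal sketch of that DAG-scheduling idea, using only the standard library (the task names are illustrative):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_dag(tasks, deps):
    """Execute callables in dependency order, the core of what
    DAG-based orchestrators automate (plus retries, scheduling,
    and observability on top)."""
    order = list(TopologicalSorter(deps).static_order())
    results = {}
    for name in order:
        results[name] = tasks[name]()
    return order, results

tasks = {
    "extract":   lambda: "raw",
    "transform": lambda: "clean",
    "load":      lambda: "done",
}
# deps maps each task to the set of tasks that must finish first
deps = {"transform": {"extract"}, "load": {"transform"}}
order, results = run_dag(tasks, deps)
# "extract" runs before "transform", which runs before "load"
```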
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the associated data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool).
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
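Data Vault 2.0 modeling, mentioned in the requirements above, keys hubs on a deterministic hash of the business key. A minimal sketch of that hash-key computation (the delimiter and normalization rules here are one common convention; teams vary):

```python
import hashlib

def hash_key(*business_keys, delimiter="||"):
    """Build a Data Vault-style hash key: normalize each business
    key, join with a delimiter, and hash it (MD5 is common;
    SHA-256 is also used)."""
    normalized = delimiter.join(str(k).strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# The same business key always yields the same hub key,
# independent of whitespace or casing noise in the source system.
a = hash_key("customer-42 ")
b = hash_key("CUSTOMER-42")
```

In a dbt project the same logic usually lives in a macro so every hub and link computes keys identically.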
Backend - Software Development Engineer II
Experience - 4+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IOT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprises and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams, and MongoDB Solutions Architects.
Location - Bangalore
Basic qualifications:
- Good problem solving skills
- Deep understanding of software development life cycle
- Excellent verbal and written communication skills
- Strong focus on quality of work delivered
- Relevant experience of 4+ years building high-performance backend applications, with at least 2 or more projects implemented using the required technologies
Required Technical Skills:
- Extensive hands-on experience building high-performance web back-ends using Node.js, with a minimum of 3 years of hands-on experience in Node.js and JavaScript/TypeScript
- Hands-on project experience with Nest.Js
- Strong experience with Express.Js framework
- Hands-on experience in data modeling and schema design in MongoDB
- Experience integrating with 3rd-party services such as cloud SDKs, payments, push notifications, authentication, etc.
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest, or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Experience with microservice architecture
- Experience working with other Relational and NoSQL Databases
- Experience with technologies such as Kafka and Redis
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies
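The MongoDB data-modeling skill above largely comes down to the embedded-vs-referenced decision, which can be sketched without a live database; plain dicts stand in for documents, and the helper mimics an aggregation `$lookup` (collection and field names are illustrative):

```python
# Embedded: the order carries its line items; one read, no join.
order_embedded = {
    "_id": 1,
    "customer": "acme",
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}],
}

# Referenced: items live in their own collection, linked by order_id;
# resolving them requires a $lookup-style join at read time.
orders = [{"_id": 1, "customer": "acme"}]
order_items = [
    {"order_id": 1, "sku": "A1", "qty": 2},
    {"order_id": 1, "sku": "B2", "qty": 1},
]

def resolve(order):
    """Mimic $lookup: attach the referenced items to the order."""
    return {**order,
            "items": [i for i in order_items if i["order_id"] == order["_id"]]}

resolved = resolve(orders[0])
```

Embedding favors read performance and atomic updates of the whole document; referencing favors unbounded or independently updated child data.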
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job description
As part of our dynamic international cross-functional team you will be responsible for the design, development and deployment of modern high quality software solutions and applications as an experienced and skilled Full-stack developer.
Responsibilities:
Design, develop, and maintain the application
Write clean, efficient, and reusable code
Implement new features and functionality based on business requirements
Participate in system and application architecture discussions
Create technical designs and specifications for new features or enhancements
Write and execute unit tests to ensure code quality
Debug and resolve technical issues and software defects
Conduct code reviews to ensure adherence to best practices
Identify and fix vulnerabilities to ensure application integrity
Working with other developers to ensure seamless integration of backend and frontend elements
Collaborating with DevOps teams for deployment and scaling
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer. Experience in Utilities / Energy domain is appreciated.
Strong experience with Java (Spring Boot), AWS/Azure or GCP, GitLab, and Angular and/or React. Additional technologies like Python, Go, Kotlin, Rust, or similar are welcome
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Java, Spring Boot, AWS, Azure, GCP, GitLab, Angular, React, Python, Go, Kotlin, Rust, Full-stack development, Software architecture, Unit testing, Debugging, Code reviews, DevOps collaboration, Microservices, Cloud computing, RESTful APIs, Frontend-backend integration, Problem-solving, Communication, Team collaboration, Software deployment, Application security, Technical documentation.
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are looking for an experienced Technical Team Lead to guide a local IT Services Management Team while also acting as a software developer. In this role, you will be responsible for the application management of a B2C application to meet the agreed Service Level Agreements (SLAs) and fulfil customer expectations.
Your team will act as an on-call duty team between 6 pm and 8 am, 365 days a year. You will work together with the responsible Senior Project Manager in Germany.
We are seeking a hands-on leader who thrives in both team management and operational development. Whether you have experience in DevOps and Backend or Frontend, your expertise in both leadership and technical skills will be key to success in this position.
Responsibilities:
Problem Management & Incident Management activities: Identifying and resolving technical issues and errors that arise during application usage.
Release and Update Coordination: Planning and executing software updates, new versions, or system upgrades to keep applications up to date.
Change Management: Responsible for implementing and coordinating changes to the application, considering the impact on ongoing operations.
Requirements:
Education and Experience: A Bachelor’s or Master’s degree in a relevant field, with a minimum of 5 years of professional experience or equivalent work experience.
Skills & Expertise:
Proficient in ITIL service management frameworks.
Strong analytical and problem-solving abilities.
Experienced in project management methodologies (Agile, Kanban).
Leadership: Very good leadership skills with a customer-oriented, proactive, and results-driven approach.
Communication: Excellent communication, presentation, and interpersonal skills, with the ability to engage and collaborate with stakeholders.
Language: English on a C2 Level.
Skills & Requirements
Kubernetes API (high), Kustomize (high), Docker/containers (high); debug tools: OpenSSL (high), curl (high); Azure DevOps (pipelines, repositories, deployments), ArgoCD; certificates: certificate management / SSL, Let's Encrypt; Linux shell; Keycloak.
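Much of the certificate-management work listed above boils down to tracking expiry. The Python standard library can parse the OpenSSL-style `notAfter` timestamp that `ssl.SSLSocket.getpeercert()` returns (the date below is a made-up example):

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> float:
    """Convert an OpenSSL-style 'notAfter' string (the format
    returned in getpeercert() results) into days remaining."""
    expiry_ts = ssl.cert_time_to_seconds(not_after)
    expiry = datetime.fromtimestamp(expiry_ts, tz=timezone.utc)
    return (expiry - now).total_seconds() / 86400

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
days = days_until_expiry("Jan 31 00:00:00 2026 GMT", now)
# 30 days between Jan 1 and Jan 31
```

An on-call team would typically wire a check like this into monitoring and alert well before Let's Encrypt's 90-day certificates lapse.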

Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job description
The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which includes the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented Java Application Developers to join our dynamic DPS team. In this role, you will design and implement change requests for existing applications or develop new projects using Jakarta EE (Java Enterprise Technologies) and Angular for the frontend. Your responsibilities will include end-to-end process mapping within the HR application landscape, analyzing developed functionalities, and addressing potential issues.
As part of our dynamic international cross-functional team you will be responsible for the design, development and deployment of modern high quality software solutions and applications as an experienced and skilled Full-stack developer.
Responsibilities:
Design, develop, and maintain the application.
Write clean, efficient, and reusable code.
Implement new features and functionality based on business requirements.
Participate in system and application architecture discussions.
Create technical designs and specifications for new features or enhancements.
Write and execute unit tests to ensure code quality.
Debug and resolve technical issues and software defects.
Conduct code reviews to ensure adherence to best practices.
Identify and fix vulnerabilities to ensure application integrity.
Working with other developers to ensure seamless integration of backend and frontend elements.
Collaborating with DevOps teams for deployment and scaling.
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer. Experience in Utilities / Energy domain is appreciated.
Strong experience with Java (Spring Boot), AWS/Azure or GCP, GitLab, and Angular and/or React. Additional technologies like Python, Go, Kotlin, Rust, or similar are welcome
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Java, Spring Boot, Jakarta EE, Angular, React, AWS, Azure, GCP, GitLab, Python, Go, Kotlin, Rust, Full-stack Development, Unit Testing, Debugging, Code Review, DevOps, Software Architecture, Microservices, HR Applications, Cloud Computing, Frontend Development, Backend Development, System Integration, Technical Design, Deployment, Problem-Solving, Communication, Collaboration.
IoT Validation Engineer
Job Type: Full-Time
Job Location: Noida
Experience: 5-8 years
Notice Period: 0-30 days
About the Role:
We are looking for an experienced Silicon Validation Engineer with expertise in Ethernet Validation, Embedded C (Protocol Understanding, Device Driver Development, and Hardware-Level Debugging). The role focuses on post-silicon validation for different IP blocks on leading-edge process technology at IP/SOC level.
Work Mode:
On-site in Noida.
Candidates should be willing to attend in-person discussions as required.
Key Responsibilities:
Post-Silicon Validation: Perform validation for different IP blocks at IP/SOC level on advanced process technologies.
Lab Equipment Utilization: Skilled use of standard oscilloscopes, logic analyzers, and other measurement instruments for debugging and validation.
Technical Leadership: Work independently and provide technical guidance to junior team members.
Cross-Team Collaboration: Interface with Design, Verification, Applications, Product and Test Engineering, Marketing, and other design teams to ensure smooth validation processes.
Validation Planning: Develop validation requirements and test plans for peripherals and full-chip SOC operation based on reference manuals and design requirements.
Debugging & Automation: Validate and debug silicon issues to support product release while automating validation setups and scripts to reduce validation cycle time.
Validation Setup: Bring up validation test boards and setups, collect data, and debug issues independently.
Mandatory Skills:
Ethernet Validation – Strong hands-on experience in Ethernet protocol validation.
Embedded C – Expertise in protocol understanding, device driver development, and hardware debugging.
Post-Silicon Validation – Extensive experience with validation and automation.
Lab Equipment Proficiency – Hands-on experience with oscilloscopes, logic analyzers, and other measurement instruments.
Preferred Qualifications:
Strong analytical skills with the ability to design test plans based on product requirements.
Hands-on experience in post-silicon validation, debugging, and automation of validation processes.
Familiarity with SOC-level validation and automation frameworks.
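One concrete piece of the Ethernet validation work above: the frame check sequence (FCS) is a CRC-32, and it uses the same reflected polynomial as `zlib.crc32`, so a validation script can recompute it directly (the payload below is the conventional CRC check string, not a real frame):

```python
import zlib

def ethernet_fcs(frame_bytes: bytes) -> int:
    """Compute the CRC-32 used for the Ethernet FCS (same
    reflected polynomial, init, and final XOR as zlib.crc32)."""
    return zlib.crc32(frame_bytes) & 0xFFFFFFFF

# "123456789" is the standard CRC check input; CRC-32 yields 0xCBF43926.
check = ethernet_fcs(b"123456789")
```

On the wire the FCS is transmitted least-significant byte first, so byte order must be handled when comparing against captured frames.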
Job Title: Senior Automation Engineer (API & Cloud Testing)
Job Type: Full-Time
Job Location: Bangalore, Pune
Work Mode: Hybrid
Experience: 8+ years (Minimum 5 years in Automation)
Notice Period: 0-30 days
About the Role:
We are looking for an experienced Senior Automation Engineer to join our team. The ideal candidate should have extensive expertise in API testing, Node.js, Cypress, Postman/Newman, and cloud-based platforms (AWS/Azure/GCP). The role involves automating workflows in ArrowSphere, optimizing test automation pipelines, and ensuring software quality in an Agile environment. The selected candidate will work closely with teams in France, requiring strong communication skills.
Key Responsibilities:
Automate ArrowSphere Workflows: Develop and implement automation strategies for ArrowSphere Public API workflows to enhance efficiency.
Support QA Team: Guide and assist QA engineers in improving automation strategies.
Optimize Test Automation Pipeline: Design and maintain a high-performance test automation framework.
Minimize Test Flakiness: Identify root causes of flaky tests and implement solutions to improve software reliability.
Ensure Software Quality: Actively contribute to maintaining the software’s high standards and cloud service innovation.
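One common stopgap for the test-flakiness work described above is quarantining with bounded retries; a framework-agnostic Python sketch (a real Cypress or Jest suite would use its runner's retry feature instead):

```python
import functools

def retry_flaky(times=3, exceptions=(AssertionError,)):
    """Re-run a test up to `times` attempts. A stopgap while the
    root cause of flakiness is being fixed, not a cure."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            last = None
            for _attempt in range(times):
                try:
                    return test_fn(*args, **kwargs)
                except exceptions as exc:
                    last = exc
            raise last
        return wrapper
    return decorator

calls = {"n": 0}

@retry_flaky(times=3)
def sometimes_fails():
    calls["n"] += 1
    assert calls["n"] >= 2  # fails on the first attempt only
    return "passed"

outcome = sometimes_fails()
```

The important discipline is treating every retried pass as a defect to root-cause, otherwise retries hide real reliability problems.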
Mandatory Skills:
API Testing: Strong knowledge of API testing methodologies.
Node.js: Experience in automation with Cypress, Postman, and Newman.
Cloud Platforms: Working knowledge of AWS, Azure, or GCP (certification is a plus).
Agile Methodologies: Hands-on experience working in an Agile environment.
Technical Communication: Ability to interact with international teams effectively.
Technical Skills:
Cypress: Expertise in front-end automation with Cypress, ensuring scalable and reliable test scripts.
Postman & Newman: Experience in API testing and test automation integration within CI/CD pipelines.
Jenkins: Ability to set up and maintain CI/CD pipelines for automation.
Programming: Proficiency in Node.js (PHP knowledge is a plus).
AWS Architecture: Understanding of AWS services for development and testing.
Git Version Control: Experience with Git workflows (branching, merging, pull requests).
Scripting & Automation: Knowledge of Bash/Python for scripting and automating tasks.
Problem-Solving: Strong debugging skills across front-end, back-end, and database.
Preferred Qualifications:
Cloud Certification (AWS, Azure, or GCP) is an added advantage.
Experience working with international teams, particularly in Europe.
Job Title: ServiceNow ITOM Developer
Location: Hyderabad, India
Experience: 6 - 8 years
Job Summary:
We are seeking an experienced ServiceNow ITOM Developer with 6+ years of hands-on experience in ITOM development. The ideal candidate should have strong expertise in ServiceNow ITOM Suite, CMDB, and exposure to cloud infrastructure such as AWS, Azure, Google Cloud, or Oracle.
Key Responsibilities:
- Design, develop, and implement ServiceNow ITOM solutions, including Discovery, Service Mapping, Event Management, Cloud Management, and Orchestration.
- Configure and maintain CMDB (Configuration Management Database), ensuring data integrity and compliance with ITIL best practices.
- Develop and implement custom workflows, automation, and integrations with external systems.
- Work closely with stakeholders to gather requirements, design solutions, and troubleshoot issues related to ITOM functionalities.
- Optimize ServiceNow ITOM modules to enhance performance and efficiency.
- Implement and maintain discovery mechanisms for on-prem and cloud infrastructure (AWS, Azure, Google Cloud, Oracle).
- Ensure best practices, security, and compliance standards are followed while developing solutions.
- Provide technical expertise and guidance to junior developers and cross-functional teams.
Required Skills & Experience:
- 6+ years of hands-on experience in ServiceNow Development, specifically in ITOM.
- Strong expertise in ServiceNow ITOM Suite, including Discovery, Service Mapping, Orchestration, and Event Management.
- In-depth knowledge of CMDB architecture, CI relationships, data modeling, and reconciliation processes.
- Experience integrating ServiceNow ITOM with cloud platforms (AWS, Azure, Google Cloud, Oracle).
- Proficiency in JavaScript, REST/SOAP APIs, and scripting within the ServiceNow platform.
- Strong understanding of ITIL processes and best practices related to ITOM.
- Experience working with MID Servers, probes, sensors, and identification rules.
- Ability to troubleshoot and resolve complex issues related to ServiceNow ITOM configurations.
Nice-to-Have Skills:
- ServiceNow Certified Implementation Specialist – ITOM certification.
- Experience in CI/CD pipeline integration with ServiceNow.
- Knowledge of ServiceNow Performance Analytics and Reporting.
- Hands-on experience with Terraform or other Infrastructure as Code (IaC) tools.
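The external integrations above typically go through the ServiceNow REST Table API; a small sketch of building such a request URL with the standard library (the instance name and encoded query are placeholders):

```python
from urllib.parse import urlencode

def table_api_url(instance: str, table: str, query: str, limit: int = 10) -> str:
    """Build a ServiceNow Table API GET URL, e.g. for pulling CMDB CIs.
    sysparm_query and sysparm_limit are standard Table API parameters."""
    params = urlencode({"sysparm_query": query, "sysparm_limit": limit})
    return f"https://{instance}.service-now.com/api/now/table/{table}?{params}"

url = table_api_url("dev12345", "cmdb_ci_server", "operational_status=1")
```

A real integration would add authentication and pagination on top; this only shows the request shape.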
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.


Job Purpose
We are looking for an energetic and self-starter software developer to join our product development practice as a Senior Software Engineer (SSE).
You will get to work with some of the best and most knowledgeable tech talent in the financial world, and you will build next-generation digital services and platforms that will lead the transformation goals for our customers. You will work closely with the engineering, UX, product, and test automation communities, as part of the agile team, to lead product design and development and to help the Digital Service Product Owner deliver and maximize value.
You will drive engineering and architecture best practices, writing secure code and encouraging others to do the same, and following DevOps processes, while getting opportunities to learn new business domains and topics, work with industry SMEs, and pick up new technology and behavioral skills.
Key Responsibilities
As a Full-stack Developer
- 8+ years' professional experience in enterprise software design and development in an N-tier architecture environment
- Understanding of 12-factor app framework is highly desirable
- Must have experience building web applications using .NET Core 3.x (.NET 5.0 preferred), Web API, HTML5, and React or other JS-based frameworks like Angular
- Must have experience with tools such as Jira, GitHub, Confluence (or another wiki), SonarQube (or similar), OWASP ZAP (or similar), and Snyk (or similar)
- Experience with data visualization libraries/frameworks like D3.js, Plotly, HighCharts, etc. will be an advantage
- Must have experience with SOA and Web Service standards (REST & JSON/SOAP & WSDL/WS-I Basic Profile), and IIS
- Understand the business requirements from the product owner(s)
- Design and implement the system from scratch and build enhancements and feature requests using modern application frameworks: C# and React with .NET Core, Web API, AWS services, etc.
- Participate in both development & maintenance tasks
- Independently troubleshoot difficult and complex issues on production and other environments
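The 12-factor framework mentioned above keeps configuration in the environment rather than in code; a minimal illustration of that principle (the variable names and defaults are made up for the example):

```python
import os

def load_config(env=os.environ):
    """Read settings from the environment with explicit defaults,
    per 12-factor factor III (store config in the environment)."""
    return {
        "db_url": env.get("APP_DB_URL", "sqlite:///local.db"),
        "debug": env.get("APP_DEBUG", "false").lower() == "true",
    }

cfg = load_config({"APP_DEBUG": "TRUE"})
# missing APP_DB_URL falls back to the default; APP_DEBUG parses to True
```

The same idea applies identically in .NET via `IConfiguration` environment providers; the payoff is one build artifact promoted unchanged across environments.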
As a Technical Lead in the pod
- Must have experience of working in an automated CI/CD environment and with fast moving teams using Scrum/Agile; Experience with AWS and other cloud providers is highly desirable
- Must have extensive experience with object-oriented design principles. Ability to articulate the pros and cons of design/implementation options
- Participate in design review and peer code review
- Work collaboratively in a global setting, should be eager to learn new technologies
- Responsible for extending and maintaining existing codebase with focus on quality, re-usability, maintainability and consistency
- Coach teams on best practices and architecture design
As a member of the Engineering community
- Good understanding and knowledge of areas such as requirement gathering, design, development, testing, maintenance, and quality control
- Stay up-to-date on latest developments in technology
- Learn and share learnings with the community
Behavioral Competencies
- A self-starter, excellent planner and executor, and above all a good team player
- Excellent communication and interpersonal skills are a must
- Strong organizational skills, including the ability to multi-task, set priorities, and meet deadlines
- Ability to build collaborative relationships and effectively leverage networks to mobilize resources
- Interest in and initiative to learn the business domain is highly desirable
- Enjoys a dynamic environment with constantly evolving requirements
Dynatrace Infrastructure Engineer
Job Summary:
We are seeking an experienced Dynatrace Engineer to install, configure, and maintain Dynatrace environments for monitoring application performance and user experiences. The ideal candidate should have hands-on expertise in Application Performance Monitoring (APM) tools and the ability to troubleshoot performance issues effectively.
Experience Requirements:
- Total Experience: 3 to 7 years
- Relevant Experience: 3 to 5 years in Dynatrace administration, setup, configuration, and maintenance
Key Responsibilities:
- Install, configure, and manage Dynatrace for application performance monitoring.
- Deploy and optimize Dynatrace environments for efficient monitoring and alerting.
- Analyze application performance, troubleshoot performance issues, and ensure seamless user experience.
- Work with Application Performance Monitoring (APM) tools to identify and resolve performance bottlenecks.
- Collaborate with development and infrastructure teams to integrate Dynatrace into existing systems.
- Ensure high availability, scalability, and compliance with organizational monitoring standards.
Key Requirements:
- Hands-on experience in Dynatrace setup, configuration, and maintenance.
- Strong understanding of application performance monitoring and optimization.
- Expertise in troubleshooting application performance issues.
- Familiarity with cloud environments and virtualization platforms.
- Ability to work in a dynamic environment and support cross-functional teams.
Preferred Qualifications:
- Experience with other APM tools (e.g., New Relic, AppDynamics, Splunk, etc.).
- Knowledge of cloud platforms (AWS, Azure, GCP) and containerized environments.
- Strong scripting skills (PowerShell, Python, Bash) for automation and monitoring enhancement.
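The scripting requirement above can be illustrated with a small sketch. Assuming the public Dynatrace Metrics API v2 (`/api/v2/metrics/query` with `Api-Token` authentication), this Python helper only builds the query request; the environment ID, token, and metric selector below are placeholders, not real credentials:

```python
from urllib.parse import urlencode

# Hypothetical SaaS environment URL pattern; managed deployments differ.
DYNATRACE_BASE = "https://{env}.live.dynatrace.com"

def build_metrics_query(env_id: str, api_token: str,
                        metric_selector: str, window: str = "now-2h"):
    """Build URL and headers for a Dynatrace Metrics API v2 query."""
    url = (DYNATRACE_BASE.format(env=env_id) + "/api/v2/metrics/query?" +
           urlencode({"metricSelector": metric_selector, "from": window}))
    headers = {"Authorization": f"Api-Token {api_token}"}
    return url, headers

# Placeholder environment ID and token for illustration only.
url, headers = build_metrics_query(
    "abc12345", "dt0c01.example", "builtin:host.cpu.usage:avg")
print(url)
```

A real automation script would send this request with `requests` or `urllib` and feed the time-series response into alerting or capacity dashboards.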
Storage Engineer (EMC/Dell EMC/HP)
Job Description:
We are looking for skilled professionals with expertise in EMC Symmetrix PowerMax, Unity, XtremIO, VNX, and PowerStore storage technologies, and in Dell EMC storage solutions, including SAN, NAS, HP EVA, and IB Flash storage. The ideal candidate should have hands-on experience managing and optimizing enterprise storage infrastructures, should be available for a face-to-face interview at the IBM location, and should be ready for immediate reporting after the offer and background verification.
Experience Requirements:
We are hiring for multiple experience levels, as detailed below:
4 years of total experience with 3 years of relevant experience, or
8+ years of total experience with 7 years of relevant experience, in Storage Engineering.
Key Responsibilities:
Design, implement, and manage enterprise storage solutions.
Troubleshoot and resolve issues related to EMC Symmetrix PowerMax, Unity, XtremIO, VNX, and PowerStore storage technologies.
Optimize performance and reliability of SAN, NAS, HP EVA, and IB Flash storage.
Monitor and manage storage infrastructure using advanced analytics and monitoring tools.
Ensure compliance with security and data management policies.
Collaborate with IT teams to ensure seamless deployment and integration.
Provide technical support and documentation for storage-related incidents and requests.
Required Skills:
Expertise in EMC Symmetrix PowerMax, Unity, XtremIO, VNX, and PowerStore storage technologies.
Hands-on experience with SAN, NAS, HP EVA, and IB Flash storage.
Strong troubleshooting and problem-solving skills in enterprise storage environments.
Experience in storage performance tuning and capacity planning.
Knowledge of data security best practices in storage infrastructures.
Ability to work in a fast-paced environment and manage multiple priorities.
Preferred Qualifications:
Relevant industry certifications (e.g., EMC Storage Specialist, Dell EMC Proven Professional).
Experience in enterprise-level storage deployments.
Exposure to cloud-based storage solutions (AWS, Azure, or hybrid cloud storage environments).
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities.
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face, and techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
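As a minimal illustration of the retrieval step behind the RAG-style applications listed above, the sketch below ranks toy document embeddings by cosine similarity to a query vector. A real system would use a vector database and model-generated embeddings; the vectors and document names here are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, top_k=2):
    """Return the top_k documents most similar to the query vector."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc for doc, _ in scored[:top_k]]

# Toy 3-dimensional "embeddings" for illustration only.
index = [
    ("pricing FAQ",   [0.9, 0.1, 0.0]),
    ("refund policy", [0.7, 0.6, 0.1]),
    ("release notes", [0.0, 0.2, 0.9]),
]
print(retrieve([1.0, 0.2, 0.0], index, top_k=1))  # ['pricing FAQ']
```

The retrieved documents would then be injected into the LLM prompt as grounding context, which is the core of the RAG pattern.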
Mandatory Skills:
- AZ-104 (Azure Administrator) experience
- CI/CD migration expertise
- Proficiency in Windows deployment and support
- Infrastructure as Code (IaC) in Terraform
- Automation using PowerShell
- Understanding of SDLC for C# applications (build/ship/run strategy)
- Apache Kafka experience
- Azure web app
Good to Have Skills:
- AZ-400 (Azure DevOps Engineer Expert)
- AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
- Apache Pulsar
- Windows containers
- Active Directory and DNS
- SAST and DAST tool understanding
- MSSQL database
- Postgres database
- Azure security
Work Mode: Hybrid (2 days WFO)
We are looking for a Data Engineer who is a self-starter to work in a diverse and fast-paced environment within our Enterprise Data team. This is an individual contributor role responsible for designing and developing data solutions that are strategic for the business and built on the latest technologies and patterns, at a regional and global level, by utilizing in-depth knowledge of data, infrastructure, and technologies along with data engineering experience.
Responsibilities
· Design, Architect, and Develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements
· Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources and build structure for previously unstructured data
· Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development
· Develop a good understanding of how data flows and is stored across an organization's applications, such as CRM, Broker & Sales tools, Finance, HR, etc.
· Unify, enrich, and analyze a variety of data to derive insights and opportunities
· Design & develop data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities
· Develop POCs to influence platform architects, product managers, and software engineers, validate solution proposals, and drive migration
· Develop data lake solution to store structured and unstructured data from internal and external sources and provide technical guidance to help migrate colleagues to modern technology platform
· Contribute and adhere to CI/CD processes, development best practices and strengthen the discipline in Data Engineering Org
· Mentor other members of the team and organization and contribute to the organization's growth.
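As a toy illustration of the ingest-and-cleanse step described in the responsibilities above, a normalization pass over raw records might look like the sketch below. The field names and cleansing rules are invented for the example; real pipelines would typically run on Spark or a similar engine:

```python
def cleanse(records):
    """Drop records missing an id and strip whitespace from string fields."""
    out = []
    for r in records:
        if not r.get("id"):
            continue  # discard rows without a usable key
        out.append({k: v.strip() if isinstance(v, str) else v
                    for k, v in r.items()})
    return out

raw = [
    {"id": 1, "name": "  Alice "},
    {"id": None, "name": "ghost"},   # no id: dropped
    {"id": 2, "name": "Bob"},
]
print(cleanse(raw))
```

Downstream steps would then enrich and persist the cleansed records into the lake or warehouse layer.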
Soft Skills
· Independent and able to manage, prioritize & lead workloads
· Team player, Reliable, self-motivated, and self-disciplined individual capable of executing on multiple projects simultaneously within a fast-paced environment working with cross functional teams
· Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Technical Skills
· 8+ years’ work experience and bachelor’s degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science.
· Hands-on engineer who is curious about technology, able to quickly adapt to change, and who understands the technologies supporting areas such as Cloud Computing (AWS, Azure (preferred), etc.), Microservices, Streaming Technologies, Network, Security, etc.
· 5 or more years of active development experience as a data developer using PySpark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, Azure Data Factory and Azure Synapse Analytics, Git integration with Azure DevOps, etc.
· Experience with designing & developing data management and data persistence solutions for application use cases leveraging relational, non-relational databases and enhancing our data processing capabilities
· Experience with building, testing and enhancing data curation pipelines and integrating data from a wide variety of sources like DBMS, File systems, APIs and streaming systems for various KPIs and metrics development with high data quality and integrity
· Experience with maintaining the health and monitoring of assigned data engineering capabilities that span analytic functions by triaging maintenance issues; ensuring high availability of the platform; monitoring workload demands; working with Infrastructure Engineering teams to maintain the data platform; serve as an SME of one or more application
· 3+ years of experience working with source code control systems and Continuous Integration/Continuous Deployment tools
We are seeking a highly skilled and motivated Senior Full stack Developer to join our dynamic team. As a Senior Full stack Developer, you will be responsible for designing, developing, and maintaining software solutions that drive our business forward. If you are passionate about technology, possess a strong background in .NET Core, JavaScript, WebAPI, Angular(2+) versions, Azure SQL, and other relevant software tools, and are excited about the opportunity to work on challenging projects, we want to hear from you.
Responsibilities:
Collaborate with cross-functional teams to understand project requirements and deliver high-quality software solutions.
Design and develop robust and scalable software applications using .NET Core, JavaScript, WebAPI, and Angular(2+) versions.
Write clean, maintainable, and efficient code while adhering to best practices and coding standards.
Perform code reviews and provide constructive feedback to team members.
Troubleshoot and debug software issues, identify bottlenecks, and implement effective solutions.
Optimize application performance and ensure scalability.
Collaborate with front-end and back-end developers to integrate user-facing elements with server-side logic.
Stay up-to-date with emerging technologies and trends in the software development industry.
Mentor junior team members and provide technical guidance.
Qualifications:
Bachelor's degree in Computer Science, Software Engineering, or a related field (or equivalent work experience).
5+ years' experience as a Full stack Developer with expertise in .NET Core, JavaScript, WebAPI, Angular (2+) versions, and Azure SQL.
Strong proficiency in database design, including Azure SQL or similar relational database technologies.
Solid understanding of software architecture and design patterns.
Proficiency in front-end technologies such as HTML5, CSS3, and JavaScript.
Experience with RESTful APIs and web services.
Familiarity with source code management tools (e.g., Git).
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Ability to work independently and as part of a team.
Continuous learner with a passion for staying current with industry trends and technologies.
Preferred Qualifications:
Master's degree in Computer Science or a related field.
Certification in relevant technologies or frameworks.
Experience with cloud platforms (e.g., Azure, Azure Data Factory, Azure B2C, Azure Storage).
Knowledge of DevOps practices and tools.
Previous experience in Agile/Scrum development methodologies.
What You Will be Doing:
● Develop and maintain software that is scalable, secure, and efficient
● Collaborate with Technical Architects & Business Analysts
● Architect and design software solutions that meet project requirements
● Mentor and train junior developers to improve their skills and knowledge
● Conduct code reviews ensuring the code is maintainable, readable, and efficient
● Research and evaluate new technologies to improve the processes
● Effective communication skills, particularly in documenting and explaining code and technical concepts.
Skills We Are Looking For:
● 5+ years extensive hands-on experience with NodeJS and Typescript
● Strong understanding of RESTful API design and implementation.
● Comfortable with debugging, performance tuning, and optimizing Node.js applications.
● Strong problem-solving abilities and attention to detail.
● Experience with authentication and authorization protocols, such as OAuth, JWT and session management.
● Understanding of security best practices in backend development, including data encryption and vulnerability mitigation.
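To make the JWT requirement above concrete, here is a minimal HS256 sign/verify sketch. It is written in Python for brevity (the role itself is Node.js/TypeScript) and uses only the standard library; production code should use a maintained JWT library that also validates claims such as expiry:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce an HS256 JWT: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (_b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "." +
                     _b64url(json.dumps(payload, separators=(",", ":")).encode()))
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

def verify_jwt(token: str, secret: bytes) -> bool:
    """Check the HMAC signature in constant time."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(_b64url(expected), sig)

token = sign_jwt({"sub": "user-42"}, b"demo-secret")
print(verify_jwt(token, b"demo-secret"))   # True
print(verify_jwt(token, b"wrong-secret"))  # False
```

The same structure maps directly to Node.js via the `crypto` module or the widely used `jsonwebtoken` package.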
Bonus Skills
● Experience with server-side frameworks such as Express.js or NestJS.
● Familiarity with cloud platforms (e.g., AWS, Azure, (preferred) Google Cloud) and their services for backend deployment.
● Familiarity with NoSQL databases (Mongo preferred), and the ability to design and optimize database queries.
Why You’ll Love It Here
● Innovative Culture - We believe in pushing boundaries
● Impactful Work - You won’t just write code, you will help build the future
● Collaborative Environment - We believe that everyone has a voice that matters
● Work Life Balance - Our flexible work environment encourages you to have space to recharge
Interview & Work Location: Kochi, Coimbatore
Experience- 3-5 years
Notice Period - immediate to 30 Days
Who we are;
Orion delivers game-changing business transformation and digital product development with agility at scale. With a maturity and scale of 25+ years in the industry, Orion has 6,200+ associates working across 12 major delivery centers across the globe!
Key Accountability;
• Performs implementation of high-level designs and specifications.
• Develops low-level designs, class diagrams, and flow charts.
• Performs unit testing of individually developed components and fixes errors, if any.
• Assists in the preparation of design, technical reference, and operational documentation, and packaging documents, in collaboration with the Lead.
• Strives for technology innovation by exploring new and efficient ways of writing code.
• Is responsible for writing simple, reusable code by following coding standards.
• Ensures proper documentation of implementations and reviews.
• Ensures timely follow-ups and escalations with leads with respect to any process gaps, risks, or impediments.
Key Competencies;
• Strong knowledge of .NET Core, SQL Server, Angular 6+
• Basic understanding of Azure Cloud and Azure functions
• Fair understanding of object-oriented programming (OOP)
• Fair understanding of WCF, Web Services, jQuery, CSS, Ajax, HTML5, and JavaScript.
• Fair understanding of Entity Framework.
• Knowledge of design patterns will be an added advantage.
• Clear communication skills are of high importance
• Ability to quickly learn new concepts and software is necessary
• Candidate should be a self-motivated, independent, detail-oriented, responsible team player and exhibit exceptional relationship management skills
• Passionate about building high-quality systems with software implementation best practices
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent practical experience).
Benefits:
• Competitive salary and performance-based incentives.
• Flexible work environment (Remote/Hybrid options available).
• Health insurance and other perks.
• Learning and development opportunities.
• Collaborative and innovative work culture.
Job Summary:
We are seeking a highly skilled and experienced Azure Active Directory Specialist with a strong background in ADFS (Active Directory Federation Services), DFS (Distributed File System), and Group Policy Objects (GPO). This position will be responsible for managing, configuring, and troubleshooting enterprise-level Active Directory environments, integrating with Azure AD for cloud-based solutions, and ensuring seamless access and security across on-premises and cloud environments.
Share cv to
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three
Job Title : AI Engineer / Gen AI Engineer (LLM/NLP)
Location : Remote
Duration : Long-term Contract
Openings : 2
Job Description :
We are looking for two AI Engineers with expertise in Generative AI, LLMs, and NLP. One of them must have a strong Azure background and be well-versed in Azure AI services.
Key Requirements :
- Experience in Generative AI, LLMs, and NLP
- Azure AI Expertise (For one of the roles):
- Azure AI Search
- Azure AI Document Intelligence
- Azure AI Foundry
- Development & deployment of RAG-based chatbots and AI agents using Azure AI
- Programming : Python (Intermediate to Expert Level)

Position: .NET C# Developer
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Pune
Experience: 5-7 years
Notice period: 0-30 days
Shift timing: General Shift
Work Mode: 5 Days work from EIC office
Education Required: Bachelor's / Master's / PhD: BE/B.Tech
Must have skills: .NET Core, C#, microservices
Good to have skills: RDBMS, cloud platforms like AWS and Azure
Mandatory Skills
5-7 years of experience in software development using C# and .NET Core
Hands-on experience in building Microservices with a focus on scalability and reliability.
Expertise in Docker for containerization and Kubernetes for orchestration and management of containerized applications.
Strong working knowledge of Cosmos DB (or similar NoSQL databases) and experience in designing distributed databases.
Familiarity with CI/CD pipelines, version control systems (like Git), and Agile development methodologies.
Proficiency in RESTful API design and development.
Experience with cloud platforms like Azure, AWS, or Google Cloud is a plus.
Excellent problem-solving skills and the ability to work independently and in a collaborative environment.
Strong communication skills, both verbal and written.
Key Responsibilities
Design, develop, and maintain applications using C#, .NET Core, and Microservices architecture.
Build, deploy, and manage containerized applications using Docker and Kubernetes.
Work with Cosmos DB for efficient database design, management, and querying in a cloud-native environment.
Collaborate with cross-functional teams to define application requirements and ensure timely delivery of features.
Write clean, scalable, and efficient code following best practices and coding standards.
Implement and integrate APIs and services with microservice architectures.
Troubleshoot, debug, and optimize applications for performance and scalability.
Participate in code reviews and contribute to improving coding standards and practices.
Stay up-to-date with the latest industry trends, technologies, and development practices.
Optional Skills
Experience with Azure Kubernetes Service (AKS) or Amazon Elastic Kubernetes Service (EKS)
Familiarity with Event-driven architecture, RabbitMQ, Kafka, or similar messaging systems.
Knowledge of DevOps practices and tools for continuous integration and deployment.
- Experience with front-end technologies like Angular or React is a plus.

position: Data Scientist
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Pune
Experience: 3 - 5 years
Notice period: 0-30 days
Must have skills: Python, Linux-Ubuntu based OS, cloud-based platforms
Education Required: Bachelor's / Master's / PhD:
Bachelor's or Master's in Computer Science, Statistics, Mathematics, Data Science, or Engineering
Bachelor's with 5 years or Master's with 3 years
Mandatory Skills
- Bachelor’s or master’s in computer science, Statistics, Mathematics, Data Science, Engineering, or related field
- 3-5 years of experience as a data scientist, with a strong foundation in machine learning fundamentals (e.g., supervised and unsupervised learning, neural networks)
- Experience with Python programming language (including libraries such as NumPy, pandas, scikit-learn) is essential
- Deep hands-on experience building computer vision and anomaly detection systems involving algorithm development in fields such as image-segmentation
- Some experience with open-source OCR models
- Proficiency in working with large datasets and experience with feature engineering techniques is a plus
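As a small, self-contained example of the anomaly-detection fundamentals listed above, the sketch below flags outliers by z-score using only the standard library. The threshold and readings are illustrative; real systems (e.g., for images) would use far more robust methods:

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Return values whose z-score magnitude exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    # If stdev is zero every point equals the mean, so nothing is anomalous.
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0]  # one obvious outlier
print(zscore_anomalies(readings, threshold=2.0))  # [42.0]
```

In production the same idea generalizes to per-pixel or per-feature statistics in the computer-vision pipelines the role describes.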
Key Responsibilities
- Work closely with the AI team to help build complex algorithms that provide unique insights into our data using images.
- Use agile software development processes to make iterative improvements to our back-end systems.
- Stay up to date with the latest developments in machine learning and data science, exploring new techniques and tools to apply within Customer’s business context.
Optional Skills
- Experience working with cloud-based platforms (e.g., Azure, AWS, GCP)
- Knowledge of computer vision techniques and experience with libraries like OpenCV
- Excellent Communication skills, especially for explaining technical concepts to nontechnical business leaders.
- Ability to work on a dynamic, research-oriented team that has concurrent projects.
- Working knowledge of Git/version control.
- Expertise in PyTorch, TensorFlow, Keras.
- Excellent coding skills, especially in Python.
- Experience with Linux-Ubuntu based OS

Golang developer
Experience : 9 to 12 Years
Job Description-
We are looking for a Staff Software Engineer with expertise in building and supporting core SaaS platform technology including identity and access management (IAM), message bus, notifications, and related platform components to join our team. The ideal candidate will have a strong track record of providing technical leadership and implementing SaaS platform technologies.
As a Staff Software Engineer on our SaaS Platform Services Team, you will be a key player in designing, developing, and maintaining critical components of our platform. This role requires expertise in GoLang and in developing, integrating with, and supporting shared platform services and technology such as Identity and Access Management (IAM), API Gateway, Kubernetes, and Kafka.
What you'll do:
- Help define and execute on the technical roadmap for our core SaaS platform technology
- Work closely with engineering teams to integrate these services with the rest of our platform.
- Help the engineering manager hire, train, and mentor engineers and maintain a high-performing engineering culture.
- Collaborate closely with both architecture and engineering teams to review project requirements, technical artifacts, and designs, and ensure that our platform meets the needs of our users.
- Design, develop, and maintain high-quality, scalable, and reliable software components using Golang.
- Collaborate with cross-functional teams to gather and refine requirements for platform services including IAM, API Gateway, notification services, and message bus.
- Architect, deploy, and manage containerized services leveraging Kubernetes.
- Implement best practices for code quality, security, and scalability.
You'll be expected to have:
- Bachelor's or higher degree in Computer Science, Software Engineering, or related field
- Minimum of 7-10 years of relevant experience in software development, including extensive experience with the GoLang programming language.
- Strong expertise in container technologies, with a focus on Kubernetes, experience with cloud platforms (e.g., AWS, GCP, Azure), and familiarity with CI/CD pipelines and DevOps practices.
- Proficiency in designing and implementing IAM solutions.
- Experience with developing and maintaining customer facing APIs.
- Deep understanding of Service Oriented Architecture and API standards such as REST and GraphQL.
- Solid understanding of microservices architecture and distributed systems.
- Strong problem-solving skills and ability to troubleshoot complex issues.
- Excellent written and verbal communication skills.
- Ability to work effectively both independently and in a collaborative team environment.
- Prior experience mentoring and providing technical guidance to junior engineers.
Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions.
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
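As a small illustration of the scripting and reliability work described in the qualifications above, this Python sketch retries a health probe with exponential backoff. The probe here is an in-memory stand-in; a real check would hit an HTTP endpoint or service port:

```python
import time

def check_with_retries(probe, attempts=3, backoff=0.01):
    """Call a health probe up to `attempts` times with exponential backoff."""
    delay = backoff
    for i in range(attempts):
        if probe():
            return True
        if i < attempts - 1:
            time.sleep(delay)  # wait before the next attempt
            delay *= 2
    return False

calls = {"n": 0}
def flaky_probe():
    """Stand-in probe that succeeds on the third attempt."""
    calls["n"] += 1
    return calls["n"] >= 3

print(check_with_retries(flaky_probe, attempts=3))  # True
```

The same retry-with-backoff pattern underlies most monitoring agents and readiness checks in CI/CD and Kubernetes environments.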
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.