50+ DevOps Jobs in Chennai | DevOps Job openings in Chennai
Apply to 50+ DevOps Jobs in Chennai on CutShort.io. Explore the latest DevOps Job opportunities across top companies like Google, Amazon & Adobe.

Roles and Responsibilities:
- AWS Cloud Management: Design, deploy, and manage AWS cloud infrastructure. Optimize and maintain cloud resources for performance and cost efficiency. Monitor and ensure the security of cloud-based systems.
- Automated Provisioning: Develop and implement automated provisioning processes for infrastructure deployment. Utilize tools like Terraform and Packer to automate and streamline the provisioning of resources.
- Infrastructure as Code (IaC): Champion the use of Infrastructure as Code principles. Collaborate with development and operations teams to define and maintain IaC scripts for infrastructure deployment and configuration.
- Collaboration and Communication: Work closely with cross-functional teams to understand project requirements and provide DevOps expertise. Communicate effectively with team members and stakeholders regarding infrastructure changes, updates, and improvements.
- Continuous Integration/Continuous Deployment (CI/CD): Implement and maintain CI/CD pipelines to automate software delivery processes. Ensure reliable and efficient deployment of applications through the development lifecycle.
- Performance Monitoring and Optimization: Implement monitoring solutions to track system performance, troubleshoot issues, and optimize resource utilization. Proactively identify opportunities for system and process improvements.
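Since the responsibilities above center on Terraform-driven provisioning, a small illustration may help: Terraform also accepts a JSON variant of its configuration language (`*.tf.json`), which can be generated programmatically. The resource name, AMI ID, and tag below are placeholders, not details from this posting — this is a sketch of the idea, not a production setup.

```python
import json

def make_instance_config(name: str, ami: str, instance_type: str = "t3.micro") -> str:
    """Render a minimal aws_instance resource in Terraform's JSON syntax (*.tf.json)."""
    config = {
        "resource": {
            "aws_instance": {
                name: {
                    "ami": ami,                      # placeholder AMI ID
                    "instance_type": instance_type,
                    "tags": {"ManagedBy": "terraform"},
                }
            }
        }
    }
    return json.dumps(config, indent=2)

# Writing this string to main.tf.json would let `terraform plan` pick it up.
print(make_instance_config("web", "ami-12345678"))
```

Generating configuration this way is one common bridge between scripting skills (Python/Bash) and Infrastructure as Code pipelines.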
Mandatory Skills:
- Proven experience as a DevOps Engineer or similar role, with a focus on AWS.
- Strong proficiency in automated provisioning and cloud management.
- Experience with Infrastructure as Code tools, particularly Terraform and Packer.
- Solid understanding of CI/CD pipelines and version control systems.
- Strong scripting skills (e.g., Python, Bash) for automation tasks.
- Excellent problem-solving and troubleshooting skills.
- Good interpersonal and communication skills for effective collaboration.
Secondary Skills:
- AWS certifications (e.g., AWS Certified DevOps Engineer, AWS Certified Solutions Architect).
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Knowledge of microservices architecture and serverless computing.
- Familiarity with monitoring and logging tools (e.g., CloudWatch, ELK stack).
Job Description:
We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.
Key Responsibilities:
- Lead and mentor backend development teams.
- Design and develop scalable backend applications using Java and Spring Boot.
- Ensure high standards of code quality through best practices such as SOLID principles and clean code.
- Participate in pair programming, code reviews, and continuous integration processes.
- Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
- Collaborate with cross-functional teams and clients for successful delivery.
Required Skills & Experience:
- 9–12+ years of experience in backend development (Up to 17 years may be considered).
- Strong programming skills in Java and backend frameworks such as Spring Boot.
- Experience in designing and building large-scale, custom-built, scalable applications.
- Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
- Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
- Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
- Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
- Experience working in a product engineering environment is a plus.
- Startup experience or working in fast-paced, high-impact teams is highly desirable.
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑‍💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
Job Summary
We are seeking a skilled Snowflake Developer to design, develop, migrate, and optimize Snowflake-based data solutions. The ideal candidate will have hands-on experience with Snowflake, SQL, and data integration tools to build scalable and high-performance data pipelines that support business analytics and decision-making.
Key Responsibilities:
Develop and implement Snowflake data warehouse solutions based on business and technical requirements.
Design, develop, and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and processing.
Write and optimize complex SQL queries for data retrieval, performance enhancement, and storage optimization.
Collaborate with data architects and analysts to create and refine efficient data models.
Monitor and fine-tune Snowflake query performance and storage optimization strategies for large-scale data workloads.
Ensure data security, governance, and access control policies are implemented following best practices.
Integrate Snowflake with various cloud platforms (AWS, Azure, GCP) and third-party tools.
Troubleshoot and resolve performance issues within the Snowflake environment to ensure high availability and scalability.
Stay updated on Snowflake best practices, emerging technologies, and industry trends to drive continuous improvement.
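The ELT pattern referenced in the responsibilities — load raw data into a staging table, then transform inside the warehouse with SQL — can be sketched in miniature. The snippet below uses stdlib `sqlite3` purely as a stand-in for Snowflake, and the table and column names are invented for illustration.

```python
import sqlite3

# ELT in miniature: load raw rows into a staging table, then transform
# inside the database rather than in application code.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO stg_orders VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, 80.0, "refunded"), (3, 200.0, "paid")],
)

# Transform step: materialize a cleaned, analytics-ready table with SQL.
conn.execute(
    """CREATE TABLE fct_revenue AS
       SELECT status, SUM(amount) AS total
       FROM stg_orders
       GROUP BY status"""
)
rows = dict(conn.execute("SELECT status, total FROM fct_revenue").fetchall())
print(rows)  # {'paid': 320.0, 'refunded': 80.0}
```

In Snowflake the same shape typically appears as staging tables fed by `COPY INTO` plus dbt or stored-procedure transforms, with query profiles used for the performance-tuning duties listed above.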
Qualifications:
Education: Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
Experience:
6+ years of experience in data engineering, ETL development, or similar roles.
3+ years of hands-on experience in Snowflake development.
Technical Skills:
Strong proficiency in SQL, Snowflake Schema Design, and Performance Optimization.
Experience with ETL/ELT tools like dbt, Talend, Matillion, or Informatica.
Proficiency in Python, Java, or Scala for data processing.
Familiarity with cloud platforms (AWS, Azure, GCP) and integration with Snowflake.
Experience with data governance, security, and compliance best practices.
Strong analytical, troubleshooting, and problem-solving skills.
Communication: Excellent communication and teamwork abilities, with a focus on collaboration across teams.
Preferred Skills:
Snowflake Certification (e.g., SnowPro Core or Advanced).
Experience with real-time data streaming using tools like Kafka or Apache Spark.
Hands-on experience with CI/CD pipelines and DevOps practices in data environments.
Familiarity with BI tools like Tableau, Power BI, or Looker for data visualization and reporting.
Snowflake Architect
Job Summary: We are looking for a Snowflake Architect to lead the design, architecture, and migration of customer data into the Snowflake DB. This role will focus on creating a consolidated platform for analytics, driving data modeling and migration efforts while ensuring high-performance and scalable data solutions. The ideal candidate should have extensive experience in Snowflake, cloud data warehousing, data engineering, and best practices for optimizing large-scale architectures.
Key Responsibilities:
Architect and implement Snowflake data warehouse solutions based on technical and business requirements.
Define and enforce best practices for performance, security, scalability, and cost optimization in Snowflake.
Design and build ETL/ELT pipelines for data ingestion and transformation.
Collaborate with stakeholders to understand data requirements and create efficient data models.
Optimize query performance and storage strategies for large-scale data workloads.
Work with data engineers, analysts, and business teams to ensure seamless data access.
Implement data governance, access controls, and security best practices.
Troubleshoot and resolve performance bottlenecks in Snowflake.
Stay updated on Snowflake features and industry trends to drive innovation.
Qualifications:
Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
10+ years of experience in data engineering or architecture.
5+ years of hands-on experience with Snowflake architecture, administration, and development.
Expertise in SQL, Snowflake Schema Design, and Performance Optimization.
Experience with ETL/ELT tools such as dbt, Talend, Matillion, or Informatica.
Proficiency in Python, Java, or Scala for data processing.
Knowledge of cloud platforms (AWS, Azure, GCP) and Snowflake integration.
Experience with data governance, security, and compliance best practices.
Strong problem-solving skills and the ability to work in a fast-paced environment.
Excellent communication and stakeholder management skills.
Preferred Skills:
Experience in the customer engagement or contact center industry.
Familiarity with DevOps practices, containerization (Docker, Kubernetes), and infrastructure-as-code.
Knowledge of distributed systems, performance tuning, and scalability.
Familiarity with security best practices and secure coding standards.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
Job description
Location: Chennai, India
Experience: 5+ Years
Certification: Kafka Certified (Mandatory); Additional Certifications are a Plus
Job Overview:
We are seeking an experienced DevOps Engineer specializing in GCP Cloud Infrastructure Management and Kafka Administration. The ideal candidate should have 5+ years of experience in cloud technologies, Kubernetes, and Kafka, with a mandatory Kafka certification.
Key Responsibilities:
Cloud Infrastructure Management:
· Manage and update Kubernetes (K8s) on GKE.
· Monitor and optimize K8s resources, including pods, storage, memory, and costs.
· Oversee the general monitoring and maintenance of environments using:
o OpenSearch / Kibana
o KafkaUI
o BGP
o Grafana / Prometheus
Kafka Administration:
· Manage Kafka brokers and ACLs.
· Hands-on experience in Kafka administration (preferably Confluent Kafka).
· Independently debug, optimize, and implement Kafka solutions based on developer and business needs.
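The broker and ACL duties above are commonly carried out with Kafka's `kafka-acls` command-line tool (named `kafka-acls.sh` in some distributions). As a rough sketch, a helper that assembles such an invocation might look like this — the bootstrap address, principal, and topic are placeholders:

```python
def build_acl_add_command(bootstrap: str, principal: str, topic: str,
                          operation: str = "Read") -> list:
    """Assemble a kafka-acls invocation granting `operation` on `topic`
    to `principal`. Returns an argv list suitable for subprocess.run."""
    return [
        "kafka-acls", "--bootstrap-server", bootstrap,
        "--add",
        "--allow-principal", f"User:{principal}",
        "--operation", operation,
        "--topic", topic,
    ]

cmd = build_acl_add_command("broker:9092", "orders-app", "orders")
print(" ".join(cmd))
```

Wrapping such commands in scripts is one way admins keep ACL grants reviewable and repeatable rather than applied ad hoc.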
Other Responsibilities:
· Perform ad-hoc investigations to troubleshoot and enhance infrastructure.
· Manage PostgreSQL databases efficiently.
· Administer Jenkins pipelines, supporting CI/CD implementation and maintenance.
Required Skills & Qualifications:
· Kafka Certified Engineer (Mandatory).
· 5+ years of experience in GCP DevOps, Cloud Infrastructure, and Kafka Administration.
· Strong expertise in Kubernetes (K8s), Google Kubernetes Engine (GKE), and cloud environments.
· Hands-on experience with monitoring tools like Grafana, Prometheus, OpenSearch, and Kibana.
· Experience managing PostgreSQL databases.
· Proficiency in Jenkins pipeline administration.
· Ability to work independently and collaborate with developers and business stakeholders.
If you are passionate about DevOps, Cloud Infrastructure, and Kafka, and meet the above qualifications, we encourage you to apply!
6+ years of experience deploying and managing Kubernetes clusters in production environments as a DevOps engineer.
• Expertise in Kubernetes fundamentals such as nodes, pods, services, and deployments, and their interactions with the underlying infrastructure.
• Hands-on experience with containerization technologies such as Docker or rkt to package applications for use in a distributed system managed by Kubernetes.
• Knowledge of the software development cycle, including coding best practices such as CI/CD pipelines and version control systems for managing code changes within a team environment.
• Deep understanding of cloud computing concepts and the operational processes required when setting up workloads on these platforms.
• Experience with Agile software development and knowledge of best practices for agile Scrum teams.
• Proficient with Git version control.
• Experience working with Linux and cloud compute platforms.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in distributed systems.
• Excellent communication and interpersonal skills, logical thinking, and a strong commitment to professional and client service excellence.
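The Kubernetes fundamentals listed above (pods, services, deployments) are declared as manifests; `kubectl apply` accepts JSON as well as YAML, so a minimal Pod spec can be generated from plain data structures. The name and image below are illustrative placeholders.

```python
import json

def pod_manifest(name: str, image: str) -> dict:
    """Build a minimal Kubernetes Pod manifest as a dict
    (kubectl apply accepts JSON as well as YAML)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {"containers": [{"name": name, "image": image}]},
    }

print(json.dumps(pod_manifest("web", "nginx:1.25"), indent=2))
```

In practice such specs are usually managed declaratively in version control, which is where the CI/CD and Git skills above come into play.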
Dear Candidate,
We are urgently Hiring AWS Cloud Engineer for Bangalore Location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: Aws Cloud
Salary: Best in Industry (20-25% Hike on the current ctc)
Note:
Only immediate to 15-day joiners will be preferred.
Only candidates from Tier 1 companies will be shortlisted.
Candidates with a notice period longer than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24 x 7 (work in shifts on a rotational basis)
Total Experience: 8+ years, with at least 5 years of relevant experience.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets incl. Splunk, Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies incl. Kafka, Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor an AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN


We are currently seeking skilled and motivated Senior Java Developers to join our dynamic and innovative development team. As a Senior Java Developer, you will be responsible for designing, developing, and maintaining high-performance, scalable Java applications.
Join DataCaliper and step into the vanguard of technological advancement, where your proficiency will shape the landscape of data management and drive businesses toward unparalleled success.
Please find our job description below; if interested, apply or reply with your profile to connect and discuss.
Company: DataCaliper
Work location: Coimbatore
Experience: 3+ years
Joining time: Immediate – 4 weeks
Required skills:
-Good experience in Java/J2EE programming frameworks like Spring (Spring MVC, Spring Security, Spring JPA, Spring Boot, Spring Batch, Spring AOP).
-Deep knowledge in developing enterprise web applications using Java Spring
-Good experience in REST webservices.
-Understanding of DevOps processes like CI/CD
-Exposure to Maven, Jenkins, Git, data formats JSON/XML, Quartz, log4j, logback
-Good experience with database technologies (SQL/PLSQL) or equivalent database experience
-The candidate should have excellent communication skills with an ability to interact with non-technical stakeholders as well.
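The REST web services requirement above boils down to mapping HTTP method/path pairs to handlers that return serialized data. The language-agnostic idea can be sketched in a few lines of Python (this shop works in Java/Spring, where `@GetMapping` plays the role of the decorator below; the route and payload are invented):

```python
import json

# Toy REST-style dispatcher: maps (method, path) to a handler returning JSON.
ROUTES = {}

def route(method, path):
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/users")
def list_users():
    return {"users": ["alice", "bob"]}

def handle(method, path):
    """Dispatch a request; return (status_code, json_body)."""
    fn = ROUTES.get((method, path))
    if fn is None:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(fn())

status, body = handle("GET", "/users")
print(status, body)
```

Frameworks like Spring MVC layer content negotiation, validation, and serialization on top of exactly this dispatch pattern.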
Thank you


Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune / Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have experience building forms in Django; knowledge of Crispy Forms is a plus.
- Must have leadership experience.
- Should have a good understanding of function-based and class-based views.
- Should have a good understanding of authentication (JWT and token authentication).
- Django – at least one senior with deep Django experience; the other one or two can be mid-to-senior Python or Django developers.
- FrontEnd – must have React/Angular and CSS experience.
- Database – ideally SQL, but the most senior candidate should have solid DB experience.
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
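Since the JD asks for a good understanding of JWT authentication, a from-scratch HS256 sketch shows what a library like PyJWT or djangorestframework-simplejwt does under the hood: base64url-encode a header and payload, then HMAC-sign them. This is for illustration only — real Django projects should use a vetted library, and the secret here is a placeholder.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, per the JWT spec."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce an HS256-signed JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(),
                               hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign_jwt({"sub": "user1"}, b"secret")
print(verify_jwt(token, b"secret"))  # True
```

Token authentication in DRF follows the same verify-then-trust-claims flow, just with the token stored server-side instead of signed.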
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server Side Rendered/jQuery is older tech but is what we are ok with for internal tools. This is a good combination of a late-adopter agile stack integrated within an enterprise. Potentially we can push them to React for some discrete projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven deployment/setup on AWS and GCP.
- Load balancing, more advanced services, task queues, etc.

We are looking for Senior Software Engineers (.NET) with at least 6 to 10 years of experience for a 6-month consulting role, with a potential offer of a permanent position at the end of the 6 months. The client is a SaaS product company working in the US loan mortgage servicing and origination space. The position is located in Chennai, India.
The team is building the next-generation loan originations and servicing systems. The systems will be used by lenders and servicers in the consumer and mortgage lending markets in the United States. As an agile developer, you will be working on a delivery team using modern technologies, tools, and frameworks to develop advanced, enterprise business components that can run on cloud platforms. You will be provided with the best tools, resources, and compensation to get the job done, and enjoy every minute of it. And if it couldn't get any better, this role is HYBRID.
Role:
- Serve as a dedicated member of our development team; troubleshoot and resolve software bugs and deployment issues
- Must be an excellent verbal and written communicator
- Must have a positive attitude and be a self-starter who is willing to work independently and learn
- Creative problem-solving ability, allowing the individual to identify solid solutions to challenging business issues
- A passion for success and willingness to go above and beyond to accomplish goals
- Collaborate with managers, other developers, Quality Assurance, Business Analysts, Client Service Representatives, and clients to understand requirements and demonstrate progress
- Work on an agile team with both onshore and offshore team members to help plan, implement and support enterprise web applications
- Design, develop and test new system capabilities, including: web UI, REST API, Microservices, serverless components, cloud native
- Document and communicate technical designs, standards and processes as needed
- Perform unit and integration testing to ensure application quality
- Continuously enhance skills by learning and applying relevant technologies and patterns
Requirements
- Professional degree in Computer Science, Information Technology, or a related domain.
- Bachelor's degree in Computer Science or equivalent work experience.
- Minimum of 6-10 years’ experience in fast-paced software industry with an in-depth understanding of web application development.
- Agile development methodology
- Experience with the ASP.NET MVC stack.
- Experience with Azure DevOps (ADO) and Visual Studio and GIT
- Experience with JavaScript, jQuery and AJAX
- Experience with Angular and ReactJS
- Experience with HTML5, CSS3 and responsive UI design.
- Extensive experience with full-lifecycle development (i.e. design, coding, testing, debugging, etc.).
- Experience with distributed systems, C#, ASP.NET, REST and SQL programming.
- Experience developing or working with web services (REST and Web API preferred)
- Exposure to Cloud Native Programming and components
What do you get:
- Working on the latest and greatest technology
- Part of new product development team right from the beginning
- Excellent compensation
- Remote/Hybrid workplace options, Group Medical Coverage, Group Personal Accidental, Group Term Life Insurance Benefits, Flexible Time Off, Food@Work, Career Pathing, Summer Fridays and much, much more!
Greetings from BTree Systems!!
Hope you are doing well! We have an exciting opportunity for you if you do freelance IT training.
We are currently hiring freelance technical (IT & software) trainers; it could be a great way to pick up side work.
Kindly check our website (https://www.btreesystems.com/) to see the training we currently provide; if you already deliver any of this training, feel free to join us.
What we expect from you:
▪ Technical trainers should have more than 5 years of experience in the respective field.
▪ Ability to guide students through individual hands-on projects in the respective skill.


We are seeking a highly skilled and experienced Senior Java Developer/Product Person to join our dynamic team. As a key member of our organization, you will be responsible for developing and delivering successful products in the healthcare industry. You will work on full end-to-end enterprise applications and leverage your expertise in Java development and AWS to drive innovation and create solutions that address our customers' needs.
Responsibilities:
- Collaborate with cross-functional teams to gather requirements, analyze business needs, and translate them into technical specifications.
- Design, develop, test, and deploy high-quality, scalable, and reliable Java applications.
- Take ownership of the product development lifecycle, from ideation to deployment and maintenance.
- Implement best practices for software development, code reviews, and documentation.
- Identify opportunities for improvement and propose innovative solutions to enhance product performance, functionality, and user experience.
- Stay up-to-date with industry trends, emerging technologies, and advancements in healthcare IT.
- Mentor and provide technical guidance to junior developers.
Requirements:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Proven track record of delivering successful products in the healthcare industry.
- Strong proficiency in Java programming and hands-on experience with Java frameworks and libraries.
- Extensive knowledge and experience with full end-to-end enterprise application development.
- Solid understanding of software development principles, design patterns, and best practices.
- Proficiency in cloud technologies, particularly AWS (Amazon Web Services).
- Familiarity with modern software development methodologies, such as Agile or DevOps.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Self-motivated and able to work independently as well as in a team environment.
Preferred Qualifications:
- Experience with healthcare data standards and regulations, such as HL7, FHIR, HIPAA, or GDPR.
- Knowledge of other programming languages and technologies, such as Python, JavaScript, or containerization (Docker, Kubernetes).
Join our team and contribute to the development of innovative healthcare solutions that make a real impact on patient care and outcomes. We offer a competitive salary, a stimulating work environment, and opportunities for professional growth.
If you meet the above requirements and are passionate about leveraging your Java development expertise in the healthcare industry, we would love to hear from you. Please submit your resume and portfolio demonstrating your previous product development work.

Type, Location
Chennai, Tamil Nadu, India
Desired Experience
1+ years
Job Description
● Understanding of how to build performant, decoupled, testable, and maintainable code
● Sharing knowledge with teammates, and working collaboratively when you need help
● Advocate for improvements to product quality, security, and performance
● Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale web environment. Maintain and advocate for these standards through code review
Qualification:
● 1+ years of experience preferably in a tech startup
● Strong foundation in server-side programming, along with experience in Angular, AWS, and DevOps
● Experience with containerization (Docker etc.) and cloud technologies
● Experience with automation and building CI/CD pipelines
● Demonstrated capacity to clearly and concisely communicate complex technical, architectural, and/or organizational problems and propose thorough solutions
● Experience with performance and optimization problems and a demonstrated ability to both diagnose and prevent these problems
● Comfort working in a highly agile, iterative software development process

About Apexon:
Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. For over 17 years, Apexon has been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving our clients’ toughest technology problems, and a commitment to continuous improvement. We focus on three broad areas of digital services: User Experience (UI/UX, Commerce); Engineering (QE/Automation, Cloud, Product/Platform); and Data (Foundation, Analytics, and AI/ML), and have deep expertise in BFSI, healthcare, and life sciences.
Apexon is backed by Goldman Sachs Asset Management and Everstone Capital.
To know more about us please visit: https://www.apexon.com/
Responsibilities:
- We are looking for a C# automation engineer with 4-6 years of experience to join our engineering team and help develop and maintain various software/utility products.
- Good object-oriented programming concepts and practical knowledge.
- Strong programming skills in C# are required.
- Good knowledge of C# Automation is preferred.
- Good to have experience with the Robot framework.
- Must have knowledge of API (REST APIs), and database (SQL) with the ability to write efficient queries.
- Good to have knowledge of Azure cloud.
- Take end-to-end ownership of test automation development, execution and delivery.
Good to have:
- Experience in tools like SharePoint, Azure DevOps.
Other skills:
- Strong analytical & logical thinking skills. Ability to think and act rationally when faced with challenges.
Job Purpose :
Working with the Tech Data Sales Team, the Presales Consultant is responsible for providing presales technical support to the Sales team and presenting tailored demonstrations or qualification discussions to customers and/or prospects. The Presales Consultant also assists the Sales team with qualifying opportunities in or out and helps expand existing opportunities through solid questioning. The Presales Consultant is responsible for conducting technical proofs of concept, demonstrations, and presentations of the supported products and solutions.
Responsibilities :
- Subject Matter Expert (SME) in the development of Microsoft Cloud Solutions (Compute, Storage, Containers, Automation, DevOps, Web applications, Power Apps etc.)
- Collaborate and align with business leads to understand their business requirement and growth initiatives to propose the required solutions for Cloud and Hybrid Cloud
- Work with other technology vendors, ISVs to build solutions use cases in the Center of Excellence based on sales demand (opportunities, emerging trends)
- Manage the APJ COE environment and Click-to-Run Solutions
- Provide solution proposal and pre-sales technical support for sales opportunities by identifying the requirements and design Hybrid Cloud solutions
- Create Solutions Play and blueprint to effectively explain and articulate solution use cases to internal TD Sales, Pre-sales and partners community
- Support in-country (APJ countries) Presales Team for any technical related enquiries
- Support Country's Product / Channel Sales Team in prospecting new opportunities in Cloud & Hybrid Cloud
- Provide technical and sales trainings to TD sales, pre-sales and partners.
- Lead & Conduct solution presentations and demonstrations
- Deliver presentations at Tech Data, Partner or Vendor led solutions events.
- Achieve relevant product certifications
- Conduct customer workshops that help accelerate sales opportunities
Knowledge, Skills and Experience :
- Bachelor's degree in Information Technology/Computer Science or equivalent experience; certifications preferred
- Minimum of 7 years' relevant working experience, ideally in a multinational IT environment
- A track record with the assigned line cards is an added advantage
- IT distributor and/or SI experience would also be an added advantage
- Good communication and problem-solving skills
- Proven ability to work independently and effectively in an off-site environment and under high pressure
What's In It For You?
- Elective Benefits: Our programs are tailored to your country to best accommodate your lifestyle.
- Grow Your Career: Accelerate your path to success (and keep up with the future) with formal programs on leadership and professional development, and many more on-demand courses.
- Elevate Your Personal Well-Being: Boost your financial, physical, and mental well-being through seminars, events, and our global Life Empowerment Assistance Program.
- Diversity, Equity & Inclusion: It's not just a phrase to us; valuing every voice is how we succeed. Join us in celebrating our global diversity through inclusive education, meaningful peer-to-peer conversations, and equitable growth and development opportunities.
- Make the Most of our Global Organization: Network with other new co-workers within your first 30 days through our onboarding program.
- Connect with Your Community: Participate in internal, peer-led inclusive communities and activities, including business resource groups, local volunteering events, and more environmental and social initiatives.
Don't meet every single requirement? Apply anyway.
At Tech Data, a TD SYNNEX Company, we're proud to be recognized as a great place to work and a leader in the promotion and practice of diversity, equity and inclusion. If you're excited about working for our company and believe you're a good fit for this role, we encourage you to apply. You may be exactly the person we're looking for!

at Altimetrik


Senior .NET Cloud (Azure) Practitioner
Job Description Experience: 5-12 years (approx.)
Education: B-Tech/MCA
Mandatory Skills
- Strong Restful API, Micro-services development experience using ASP.NET CORE Web APIs (C#);
- Must have exceptionally good software design and programming skills in .Net Core (.NET 3.X, .NET 6) Platform, C#, ASP.net MVC, ASP.net Web API (RESTful), Entity Framework & LINQ
- Good working knowledge on Azure Functions, Docker, and containers
- Expertise in Microsoft Azure Platform - Azure Functions, Application Gateway, API Management, Redis Cache, App Services, Azure Kubernetes, CosmosDB, Azure Search, Azure Service Bus, Function Apps, Azure Storage Accounts, Azure KeyVault, Azure Log Analytics, Azure Active Directory, Application Insights, Azure SQL Database, Azure IoT, Azure Event Hubs, Azure Data Factory, Virtual Networks and networking.
- Strong SQL Server expertise and familiarity with Azure Cosmos DB, Azure (Blob, Table, queue) storage, Azure SQL etc
- Experienced in Test-Driven Development, unit testing libraries, testing frameworks.
- Good knowledge of Object Oriented programming, including Design Patterns
- Cloud Architecture - Technical knowledge and implementation experience using common cloud architecture, enabling components, and deployment platforms.
- Excellent written and oral communication skills, along with the proven ability to work as a team with other disciplines outside of engineering are a must
- Solid analytical, problem-solving and troubleshooting skills
Desirable Skills:
- Certified Azure Solution Architect Expert
- Microsoft Certified: Azure Fundamentals (Exam AZ-900)
- Microsoft Certified: Azure Administrator Associate (Exam AZ-104)
- Microsoft Certified: Azure Developer Associate (Exam AZ-204)
- Microsoft Certified: DevOps Engineer Expert (AZ-400)
- Microsoft Certified: Azure Solutions Architect Expert (AZ-305)
- Good understanding of software architecture, scalability, resilience, performance;
- Working knowledge of automation tools such as Azure DevOps, Azure Pipeline or Jenkins or similar
Roles & Responsibilities
- Defining best practices & standards for usage of libraries, frameworks and other tools being used;
- Architecture, design, and implementation of software from development, delivery, and releases.
- Breakdown complex requirements into independent architectural components, modules, tasks and strategies and collaborate with peer leadership through the full software development lifecycle to deliver top quality, on time and within budget.
- Demonstrate excellent communications with stakeholders regarding delivery goals, objectives, deliverables, plans and status throughout the software development lifecycle.
- Should be able to work with various stakeholders (Architects/Product Owners/Leadership) as well as team - Lead/ Principal/ Individual Contributor for Web UI/ Front End Development;
- Should be able to work in an agile, dynamic team environment;


- Bachelor's degree, preferably in Engineering, or equivalent professional or military experience, with 10-15 years of experience.
- 5+ years of large-scale software development or application engineering with recent coding experience in two or more modern programming languages such as: Java, JavaScript, C/C++, C#, Swift, Node.js, Python, Go, or Ruby
- Experience with Continuous Integration and Continuous Delivery (CI/CD)
- Helping customers architect scalable, highly available application solutions that leverage at least 2 cloud environments out of AWS, GCP, Azure.
- Architecting and developing customer applications to be cloud developed or re-engineered or optimized
- Working as a technical leader alongside customer business and development teams, with support to the infrastructure team
- Providing deep software development knowledge with respect to cloud architecture, design patterns and programming
- Advising and implementing Cloud (AWS/GCP/Azure) best practices
- Working as both an application architect as well as development specialist in Cloud native Apps architecture, development to deployment phases.
- Implementing DevOps practices such as infrastructure as code, continuous integration and automated deployment
As an MLOps Engineer in QuantumBlack you will:
- Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams
- Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production
- Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines)
- Shape and support next-generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.JS, React, TypeScript, amongst others in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
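The pytest and testing-library expectations above apply most naturally to pure functions in ML/data pipelines; a minimal sketch (function names are illustrative, not from any posting) of the kind of test pytest would collect automatically:

```python
# A pure transformation is easy to cover with pytest-style tests.
# Both `normalize` and `test_normalize` are hypothetical example names.

def normalize(values: list) -> list:
    """Scale values to the [0, 1] range; constant input maps to zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def test_normalize():
    # pytest discovers functions named test_* and runs their asserts.
    assert normalize([2.0, 4.0, 6.0]) == [0.0, 0.5, 1.0]
    assert normalize([3.0, 3.0]) == [0.0, 0.0]
```

Keeping transformations pure like this is what makes "secure CI/CD pipelines" able to gate merges on fast, deterministic test runs.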
Skills – JBoss, DevOps, ServiceNow, Windows Server.
JD - Application Maintenance
Must have:
- Installation and configuration of custom/standard software, e.g., FileZilla, JDK, OpenJDK
- Installation and configuration of JBoss/Tomcat Server; configuration of HTTPS certificates in JBoss/Tomcat
- Windows Event Viewer / IIS logs / Windows Security / Active Directory; setting environment variables, registry values, etc.
Nice to have:
- Basics of monitoring; knowledge of PowerShell, MS Azure DevOps
- Deploying and configuring applications; checking the last installed version of any software/patch
- ServiceNow, ITIL, Incident Management, Change Management
- Hands-on experience in provisioning AWS services like EC2, S3, EBS, AMI, VPC, ELB, RDS, Auto Scaling groups, CloudFormation.
- Good experience in build and release processes; extensively involved in CI/CD using Jenkins
- Experienced on configuration management tools like Ansible.
- Designing, implementing and supporting fully automated Jenkins CI/CD
- Extensively worked on Jenkins for continuous Integration and for end to end Automation for all Builds and Deployments.
- Proficient with Docker-based container deployments to create self-service environments for dev teams and containerization of environment delivery for releases.
- Experience working on Docker hub, creating Docker images and handling multiple images primarily for middleware installations and domain configuration.
- Good knowledge in version control system in Git and GitHub.
- Good experience in build tools
- Implemented CI/CD pipelines using Jenkins, Ansible, Docker, Kubernetes, YAML and manifests
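The pipeline work described above revolves around generating and applying deployment artifacts. As a sketch (the function, app, and image names are hypothetical), a Kubernetes Deployment manifest can be built in Python and emitted as JSON, which `kubectl apply -f` accepts alongside YAML:

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes Deployment manifest as a plain dict.

    kubectl accepts JSON as well as YAML, so the dict can be written out
    with json.dump and applied directly in a pipeline step.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Example: render a manifest for a hypothetical image tag.
manifest = deployment_manifest("web", "registry.example.com/web:1.0")
print(json.dumps(manifest, indent=2))
```

Generating manifests in code rather than hand-editing YAML is one common way a CI job parameterizes the image tag per build.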
We are looking for a technically driven MLOps Engineer for one of our premium clients.
COMPANY DESCRIPTION:
Key Skills
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or higher preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)

- 3 to 4 years of professional experience as a DevOps / System Engineer
- Command line experience with Linux including writing bash scripts
- Programming in Python, Java or similar
- Fluent in Python and Python testing best practices
- Extensive experience working within AWS and with its managed products (EC2, ECS, ECR, R53, SES, ElastiCache, RDS, VPCs, etc.)
- Strong experience with containers (Docker, Compose, ECS)
- Version control system experience (e.g. Git)
- Networking fundamentals
- Ability to learn and apply new technologies through self-learning
Responsibilities
- As part of a team implement DevOps infrastructure projects
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions
- Manage our continuous integration and delivery pipeline to maximize efficiency
- Implement industry best practices for system hardening and configuration management
- Secure, scale, and manage Linux virtual environments
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring
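The backup and disaster-recovery responsibility above usually starts with a retention policy. A minimal sketch (the function and naming scheme are hypothetical) that decides which backups fall outside the retention window, keeping the decision logic pure so it can be unit-tested apart from any filesystem or S3 access:

```python
def backups_to_prune(backups: list, keep: int = 7) -> list:
    """Given (name, mtime) pairs, return the names of backups beyond the
    `keep` most recent ones - the core of a simple rotation policy.

    The caller is responsible for actually deleting the returned names
    (e.g. local files or S3 objects); this function only decides.
    """
    ordered = sorted(backups, key=lambda b: b[1], reverse=True)
    return [name for name, _ in ordered[keep:]]
```

A nightly cron or CI job would list backup artifacts with their timestamps, call this, and remove whatever it returns.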


YNOS (https://www.ynos.in/) is a next-generation funded startup founded by IIT Madras faculty and incubated at the IIT Madras Incubation Cell. It is a digital platform for entrepreneurs, investors, innovators and ecosystem enablers, providing actionable insights on the startup and investment landscape in India. We are passionate about solving tough problems using technology and data, and making a difference.
The Opening
We are presently seeking our next enthusiastic, talented, and driven Python Backend Engineer to start right away. We'd want you to:
- Be excited about building a next-generation intelligence platform
- Possess a can-do attitude and be open to new challenges
- Value working with a great team - self-assured, creative, and insightful individuals who work together to achieve amazing things
- Be willing to explore, learn, and contribute new ideas to the platform, thereby improving it
- Be high on self-belief and enthusiasm to work in a startup culture - small team, fast-paced work environment
If this is you, we'd love to hear from you!
As the Python Backend Engineer at YNOS (https://www.ynos.in/), you will
- Create reusable optimised code and libraries
- Deploy task management systems and automate routine tasks
- Build performant apps that adhere to best practices, reducing latency and improving performance and scalability
- Improve the existing codebase while reducing technical debt
- Take charge of all elements of the application, including architecture, quality, and efficiency
Requirements
- Proficient understanding of Python language
- Expertise in developing web apps and APIs using Python frameworks like Flask with an overall grasp of client-server interactions
- Familiarity with task management systems and process automation
- Comfortable with using Command Line and Linux systems
- Experience and understanding of version control systems like git, svn, etc
- Knowledge of NoSQL databases such as MongoDB
Good to have
- Expertise in other backend frameworks viz. Django, NodeJS, Go, Rust etc
- Knowledge of data-modelling, data-wrangling & data-mining techniques
- Experience with data visualisation tools & libraries such as Plotly, Seaborn etc
- Exposure to Statistical and Machine Learning (ML) techniques, particularly in the field of Natural Language Processing (NLP)
- Familiarity with front-end tools & frameworks such as HTML, CSS, JS, React, Vue, and others
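The "automate routine tasks" responsibility above is commonly met with small building blocks like a retry wrapper. A sketch under no particular framework (the decorator and its parameters are hypothetical, not from the posting):

```python
import time
from functools import wraps

def retry(attempts: int = 3, delay: float = 0.0):
    """Retry a flaky routine task a fixed number of times.

    A common building block when automating scheduled jobs that hit
    unreliable external services (APIs, databases, file shares).
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:  # retry on any failure
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc  # all attempts exhausted
        return wrapper
    return decorator
```

Wrapping a task function with `@retry(attempts=3)` lets a scheduler treat transient failures as non-events while still surfacing persistent ones.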
Work location, Job type & Salary
- Our office is located at the IIT Madras Research Park (https://respark.iitm.ac.in/), Chennai, Tamil Nadu, surrounded by the beautiful IITM campus!
- This is a full-time position and we’d like you to relocate to Chennai
- Expected salary range ₹6L - ₹8L per annum


2+ years of .NET Development experience
- Exceptional skills and experience with .NET technologies (WPF, Web Forms, .NET Core); also skilled in MS SQL, PostgreSQL, Bootstrap and DevOps.
- Experience with DevExpress, Elasticsearch.Net and Visual Studio is preferred.
- Great business process understanding and written and verbal communication skills
- GitHub and Git experience.
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
- Role Overview
- Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively configure and build cloud architectures hands-on and guide others.
- Key Knowledge
- 3-5+ years of experience in AWS/GCP or Azure technologies
- Is likely certified on one or more of the major cloud platforms
- Strong experience from hands-on work with technologies such as Terraform, K8S, Docker and orchestration of containers.
- Ability to guide and lead internal agile teams on cloud technology
- Background from the financial services industry or similar critical operational experience
Experience: 3-6 years
Location: Chennai
Job summary
AEM, Java, Jenkins, DevOps
"Experience on Agile methodology.
- Hands on experience on AEM, JCR, Apache Sling, Apache Felix
- Develop AEM templates, components, fragments and experiences
- Work extensively in Core Java 1.8, JEE, AEM 6.2, AEM 6.1, AEM 6.0, CQ5.5, Apache CXF
- Translates business requirements into AEM specific implementation, specifications, designs, Authors content, etc.
- Make changes to AEM site content, assets, pages, workflow, etc.
- Design and build components, templates, dialogs, and workflows using the AEM architecture (Sling, CRX, OSGI, JCR)
- Diagnoses and solve technical problems related to content management such as search result accuracy, dynamic content linking, formatting, image scaling, internationalization, and personalization
- Need to work in J2EE Web Technologies using Servlets, JSP, spring, Hibernate, Java Beans, Collections, JDBC, JavaScript, XML, HTML, DHTML, and CSS.
- To work on agile methodology, rapid development and prototyping environment"
Techincal Skills
Good to have JDBC, ANSI SQL, Jenkins, Core Java, DevOps, Adobe AEM

Good Python Developers / Data Engineers / DevOps Engineers
Exp: 1-8years
Work location: Chennai / Remote support
Job description
The ideal candidate is a self-motivated multi-tasker and a demonstrated team player. You will be a lead developer responsible for the development of new software security policies and enhancements to security on existing products. You should excel in working with large-scale applications and frameworks and have outstanding communication and leadership skills.
Responsibilities
- Consulting with management on the operational requirements of software solutions.
- Contributing expertise on information system options, risk, and operational impact.
- Mentoring junior software developers in gaining experience and assuming DevOps responsibilities.
- Managing the installation and configuration of solutions.
- Collaborating with developers on software requirements, as well as interpreting test stage data.
- Developing interface simulators and designing automated module deployments.
- Completing code and script updates, as well as resolving product implementation errors.
- Overseeing routine maintenance procedures and performing diagnostic tests.
- Documenting processes and monitoring performance metrics.
- Conforming to best practices in network administration and cybersecurity.
Qualifications
- Minimum of 2 years of hands-on experience in software development and DevOps, specifically managing AWS Infrastructure such as EC2s, RDS, Elastic cache, S3, IAM, cloud trail and other services provided by AWS.
- Experience building a multi-region, highly available auto-scaling infrastructure that optimises performance and cost; plan for future infrastructure as well as maintain and optimise existing infrastructure.
- Conceptualise, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
- Conceptualise, architect and build a containerised infrastructure using Docker, Mesosphere or similar SaaS platforms.
- Conceptualise, architect and build a secured network utilising VPCs with inputs from the security team.
- Work with developers & QA to institute a policy of Continuous Integration with automated testing. Architect, build and manage dashboards to provide visibility into delivery and production application functional and performance status.
- Work with developers to institute systems, policies and workflows which allow for rollback of deployments. Triage release of applications to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Assist the developers and on calls for other teams with post mortem, follow up and review of issues affecting production availability.
- Minimum 2 years’ experience in Ansible.
- Must have written playbook to automate provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have had prior experience automating deployments to production and lower environments.
- Experience with APM tools like New Relic and log management tools.
- Our entire platform is hosted on AWS, comprising web applications, web services, RDS, Redis and Elasticsearch clusters, and several other AWS resources like EC2, S3, CloudFront, Route53 and SNS.
- Essential functions: system architecture, process design and implementation.
- Minimum of 2 years scripting experience in Ruby/Python (preferred) and Shell; web application deployment systems; Continuous Integration tools (Ansible); establishing and enforcing network security policy (AWS VPC, Security Groups) & ACLs.
- Establishing and enforcing systems monitoring tools and standards
- Establishing and enforcing Risk Assessment policies and standards
- Establishing and enforcing Escalation policies and standards
Notice Period: Immediate to 60 Days
Below is the JD
Expert: Microservices, Core Java and Spring framework (MVC, Boot).
Practitioner: IT experience in Agile, Test Driven Development approach and software delivery best practice. Strong design and technical skills. Continuous integration/deployment pipelines.
Desirable: ReactJS, Experience with Databases. Preferably Oracle. Knowledge of working in cloud environment. Preferably PCF and AWS.
Joining location will be either Gurgaon, Chennai & Bangalore only.
Mandatory Skills
Java Microservices: Java 8+, Spring Boot, Microservices, DevOps, CI/CD

technology based supply chain management
Bachelor's degree in information security, computer science, or related.
Strong DevOps experience of at least 4 years
Strong experience in Unix/Linux/Python scripting
Strong networking knowledge; vSphere networking stack knowledge desired.
Experience on Docker and Kubernetes
Experience with cloud technologies (AWS/Azure)
Exposure to Continuous Development Tools such as Jenkins or Spinnaker
Exposure to configuration management systems such as Ansible
Knowledge of resource monitoring systems
Ability to scope and estimate
Strong verbal and communication skills
Advanced knowledge of Docker and Kubernetes.
Exposure to Blockchain as a Service (BaaS) such as Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, VMware, etc.
Capable of provisioning and maintaining local enterprise blockchain platforms for Development and QA (Hyperledger Fabric/BaaS/Corda/ETH).
About Navis

• At least 4 years of hands-on experience with cloud infrastructure on GCP
• Hands-on-Experience on Kubernetes is a mandate
• Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
• Knowledge and hands-on experience in DevOps tools (e.g. Jenkins, Groovy, and Gradle)
• Knowledge and hands-on experience with various platforms (e.g. GitLab, CircleCI and Spinnaker)
• Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus)
• Proven ability to work independently or as an integral member of a team
Preferable Skills:
• Familiarity with standard IT security practices such as encryption, credentials and key management.
• Proven experience in various coding languages (Java, Python) to support DevOps operations and cloud transformation
• Familiarity and knowledge of the web standards (e.g. REST APIs, web security mechanisms)
• Hands on experience with GCP
• Experience in performance tuning, services outage management and troubleshooting.
Attributes:
• Good verbal and written communication skills
• Exceptional leadership, time management, and organizational skills. Ability to operate independently and make decisions with little direct supervision.
Designation – Deputy Manager - TS
Job Description
- Total of 8-9 years of development experience in Data Engineering (B1/BII role)
- Minimum of 4-5 years in AWS data integrations; should be very good at data modelling.
- Should be very proficient in end-to-end AWS data solution design, which includes not only strong data ingestion and integration skills (both data at rest and data in motion) but also complete DevOps knowledge.
- Should have experience in delivering at least 4 data warehouse or data lake solutions on AWS.
- Should have very strong experience with Glue, Lambda, Data Pipeline, Step Functions, RDS, CloudFormation, etc.
- Strong Python skills.
- Should be an expert in cloud design principles, performance tuning and cost modelling. AWS certifications will be an added advantage.
- Should be a team player with excellent communication, able to manage their work independently with minimal or no supervision.
- Life Science & Healthcare domain background will be a plus
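The Glue/Lambda experience asked for above lends itself to local unit testing, since a Lambda handler is just a Python function taking `(event, context)`. A sketch with a hypothetical event shape (the `records`/`amount` fields are illustrative, not an AWS-defined schema):

```python
# A Lambda handler can be unit-tested by calling it directly with a
# sample event; no AWS runtime is needed for the business logic.

def handler(event, context=None):
    """Sum the `amount` field across incoming records - the kind of small
    transformation step a Glue/Lambda ingestion pipeline might run."""
    records = event.get("records", [])
    total = sum(r.get("amount", 0) for r in records)
    return {"count": len(records), "total": total}
```

Deployed behind an event source, the same function runs unchanged; locally, a sample event dict exercises it in a test suite.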
Qualifications
BE/BTech/ME/MTech
Requirements
You will make an ideal candidate if you have:
- Experience of building a range of services in a cloud service provider
- Expert understanding of DevOps principles and Infrastructure as Code concepts and techniques
- Strong understanding of CI/CD tools (Jenkins, Ansible, GitHub)
- Managed an infrastructure that involved 50+ hosts/network
- 3+ years of Kubernetes experience & 5+ years of experience in native services such as Compute (virtual machines), Containers (AKS), Databases, DevOps, Identity, Storage & Security
- Experience in engineering solutions on a cloud foundation platform using Infrastructure as Code methods (e.g. Terraform)
- Security and compliance, e.g. IAM and cloud compliance/auditing/monitoring tools
- Customer/stakeholder focus. Ability to build strong relationships with application teams, cross-functional IT and global/local IT teams
- Good leadership and teamwork skills - works collaboratively in an agile environment
- Operational effectiveness - delivers solutions that align to approved design patterns and security standards
- Excellent skills in at least one of the following: Python, Ruby, Java, JavaScript, Go, Node.js
- Experience in full automation and configuration management
- A track record of constantly looking for ways to do things better and an excellent understanding of the mechanisms necessary to successfully implement change
- Set and achieved challenging short, medium and long term goals which exceeded the standards in their field
- Excellent written and spoken communication skills; an ability to communicate with impact, ensuring complex information is articulated in a meaningful way to wide and varied audiences
- Built effective networks across business areas, developing relationships based on mutual trust and encouraging others to do the same
- A successful track record of delivering complex projects and/or programmes, utilizing appropriate techniques and tools to ensure and measure success
- A comprehensive understanding of risk management and proven experience of ensuring own/others' compliance with relevant regulatory processes
Essential Skills:
- Demonstrable cloud service provider experience - infrastructure build and configuration of a variety of services including compute, DevOps, databases, storage & security
- Demonstrable experience of Linux administration and scripting, preferably Red Hat
- Experience of working with Continuous Integration (CI), Continuous Delivery (CD) and continuous testing tools
- Experience working within an Agile environment
- Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Node.js
- Server administration (either Linux or Windows)
- Automation scripting (using tools such as Terraform, Ansible, etc.)
- Ability to quickly acquire new skills and tools
Required Skills:
- Linux & Windows Server Certification
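The CI/CD experience asked for above boils down to ordered, fail-fast stages; a toy pipeline runner in Python makes the idea concrete (the stage names are invented for illustration):

```python
def run_pipeline(stages):
    """Run named stages in order; stop at the first failure (fail fast)."""
    completed = []
    for name, step in stages:
        if not step():
            return completed, name  # name of the stage that failed
        completed.append(name)
    return completed, None

if __name__ == "__main__":
    pipeline = [
        ("build", lambda: True),
        ("test", lambda: True),
        ("deploy", lambda: True),
    ]
    print(run_pipeline(pipeline))  # (['build', 'test', 'deploy'], None)
```

Real CI servers like Jenkins add persistence, parallelism and notifications around the same core idea.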

Striim (pronounced “stream” with two i’s for integration and intelligence) was founded in 2012 with a simple goal of helping companies make data useful the instant it’s born.
Striim’s enterprise-grade, streaming integration with intelligence platform makes it easy to build continuous, streaming data pipelines – including change data capture (CDC) – to power real-time cloud integration, log correlation, edge processing, and streaming analytics.
• 2-5 years of experience in programming in any language (polyglot preferred) & system operations
• Awareness of DevOps & Agile methodologies
• Proficient in leveraging CI and CD tools to automate testing and deployment
• Experience working in an agile and fast-paced environment
• Hands-on knowledge of at least one cloud platform (AWS / GCP / Azure)
• Cloud networking knowledge: should understand VPCs, NATs, and routers
• Contributions to open source are a plus
• Good written communication skills are a must. Contributions to technical blogs / whitepapers will be an added advantage
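Change data capture (CDC), which the Striim platform is built around, can be illustrated with a toy snapshot-diff in plain Python. This is a sketch only: real CDC reads the database's transaction log rather than comparing snapshots, and the table rows here are invented:

```python
def diff_snapshots(before: dict, after: dict) -> list:
    """Emit insert/update/delete events between two keyed row snapshots."""
    events = []
    for key, row in after.items():
        if key not in before:
            events.append(("insert", key, row))
        elif before[key] != row:
            events.append(("update", key, row))
    for key in before:
        if key not in after:
            events.append(("delete", key, None))
    return events

if __name__ == "__main__":
    before = {1: {"name": "alice"}, 2: {"name": "bob"}}
    after = {1: {"name": "alicia"}, 3: {"name": "carol"}}
    for event in diff_snapshots(before, after):
        print(event)
```

A streaming pipeline would forward each emitted event downstream instead of printing it.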
Responsibilities for Data Engineer
- Create and maintain optimal data pipeline architecture,
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications for Data Engineer
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift
- Experience with stream-processing systems: Storm, Spark-Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
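The extract-transform-load responsibilities described above can be sketched as composable stages. This is a deliberately simplified pure-Python illustration; in practice each stage would be a Spark job or an Airflow task, and the field names are invented:

```python
def extract(rows):
    """Pull raw records from a source (here, an in-memory list)."""
    return list(rows)

def transform(rows):
    """Clean and reshape: drop incomplete rows, normalise values."""
    return [
        {"user": r["user"].strip().lower(), "amount": float(r["amount"])}
        for r in rows
        if r.get("user") and r.get("amount") is not None
    ]

def load(rows, sink):
    """Append transformed rows to a destination store; return row count."""
    sink.extend(rows)
    return len(rows)

if __name__ == "__main__":
    warehouse = []
    raw = [{"user": " Alice ", "amount": "9.5"}, {"user": "", "amount": "1"}]
    loaded = load(transform(extract(raw)), warehouse)
    print(loaded, warehouse)  # 1 [{'user': 'alice', 'amount': 9.5}]
```

Keeping the stages as separate functions mirrors how workflow managers like Airflow wire independent tasks into a pipeline.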
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, database
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
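The REST web-services experience listed under Must Have can be boiled down to dispatching (method, path) pairs to handlers; a minimal stdlib-only sketch (the route and payload are invented for illustration):

```python
class Router:
    """Minimal REST-style dispatcher mapping (method, path) to handlers."""

    def __init__(self):
        self.routes = {}

    def route(self, method, path):
        def decorator(handler):
            self.routes[(method.upper(), path)] = handler
            return handler
        return decorator

    def dispatch(self, method, path, **kwargs):
        handler = self.routes.get((method.upper(), path))
        if handler is None:
            return 404, {"error": "not found"}
        return 200, handler(**kwargs)

api = Router()

@api.route("GET", "/users")
def list_users():
    return [{"id": 1, "name": "demo"}]

if __name__ == "__main__":
    print(api.dispatch("GET", "/users"))   # (200, [{'id': 1, 'name': 'demo'}])
    print(api.dispatch("DELETE", "/users"))
```

Frameworks such as Flask or FastAPI provide the same method-plus-path routing with HTTP parsing, serialization, and middleware on top.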
Hi ,
Greetings from ToppersEdge.com India Pvt Ltd
We have job openings for our Client. Kindly find the details below:
Work Location: Bengaluru (remote basis presently); later, candidates should relocate to Bangalore.
Shift Timings – general shift
Job Type – Permanent Position
Experience – 3-7 years
Candidates should be from a product-based company only
Job Description
We are looking to expand our DevOps team. This team is responsible for writing scripts to set up infrastructure that supports 24x7 availability of the Netradyne services. The team also sets up monitoring and alerting to troubleshoot any issues reported in multiple environments, and is responsible for triaging production issues and providing appropriate and timely responses to customers.
Requirements
- B Tech/M Tech/MS in Computer Science or a related field from a reputed university.
- Total industry experience of around 3-7 years.
- Programming experience in Python, Ruby, Perl or equivalent is a must.
- Good knowledge and experience of configuration management tools (like Ansible, etc.)
- Good knowledge and experience of provisioning tools (like Terraform, etc.)
- Good knowledge and experience with AWS.
- Experience with setting up CI/CD pipelines.
- Experience, in individual capacity, managing multiple live SaaS applications with high volume, high load, low latency and high availability (24x7).
- Experience setting up web servers like Apache, application servers like Tomcat/WebSphere, and databases (RDBMS and NoSQL).
- Good knowledge of UNIX (Linux) administration tools.
- Good knowledge of security best practices and relevant tools (firewalls, VPNs, etc.).
- Good knowledge of networking concepts.
- Ability to troubleshoot issues quickly is required.
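Troubleshooting highly available services usually involves health probes with retry and exponential backoff; a minimal stdlib-only sketch (the probe function and timings are placeholders, not from any real monitoring stack):

```python
import time

def check_with_backoff(probe, attempts=4, base_delay=0.1, sleep=time.sleep):
    """Call `probe` until it returns True, doubling the delay after each failure."""
    delay = base_delay
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            sleep(delay)
            delay *= 2  # exponential backoff: 0.1s, 0.2s, 0.4s, ...
    return False

if __name__ == "__main__":
    # Simulate a service that recovers on the third probe.
    results = iter([False, False, True])
    print(check_with_backoff(lambda: next(results), sleep=lambda _: None))
```

Injecting `sleep` as a parameter keeps the function testable without real delays; in production the probe would be an HTTP or TCP check.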
Requirements:-
- Bachelor’s Degree or Master’s in Computer Science, Engineering, Software Engineering or a relevant field.
- Strong experience with Windows/Linux-based infrastructures and Linux/Unix administration.
- Knowledge of Jira, Bitbucket, Jenkins, Xray, Ansible, Windows and .NET as core skills.
- Strong experience with databases such as MS SQL Server, MySQL, and NoSQL databases.
- Knowledge of scripting languages such as Shell, Python, PHP, Groovy and Bash.
- Experience with project management and workflow tools such as Jira / Workfront in an Agile environment.
- Experience with open-source technologies and cloud services.
- Experience in working with Puppet or Chef for automation and configuration.
- Strong communication skills and ability to explain protocols and processes to team and management.
- Experience in a DevOps Engineer role (or similar role).
- Experience in software development and infrastructure development is a plus.
Job Specifications:-
- Building and maintaining tools, solutions and micro services associated with deployment and our operations platform, ensuring that all meet our customer service standards and reduce errors.
- Actively troubleshoot any issues that arise during testing and production, catching and solving issues before launch.
- Test our system integrity, implemented designs, application developments and other processes related to infrastructure, making improvements as needed.
- Deploy product updates as required while implementing integrations when they arise.
- Automate our operational processes as needed, with accuracy and in compliance with our security requirements.
- Specifying, documenting and developing new product features, and writing automation scripts. Manage code deployments, fixes, updates and related processes.
- Work with open-source technologies as needed.
- Work with CI and CD tools, and source control such as GIT and SVN.
- Lead the team through development and operations.
- Offer technical support where needed, developing software for our back-end systems.
Job Description: (8-12 years)
○ Develop best practices for the team; responsible for architecture, solutions and documentation operations in order to meet the engineering department's quality and standards
○ Participate in production outages, handle complex issues and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Deep understanding of Kernel, Networking and OS fundamentals
○ Strong experience in writing helm charts.
○ Deep understanding of Kubernetes (K8s).
○ Good knowledge of service meshes.
○ Good database understanding
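Helm charts, mentioned above, ultimately render Kubernetes manifests; the shape of a minimal Deployment can be sketched in Python for illustration (the app name, image, and replica count are placeholders):

```python
def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Return a minimal Kubernetes Deployment as a plain dict (YAML-ready)."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

if __name__ == "__main__":
    manifest = deployment_manifest("web", "nginx:1.25", replicas=3)
    print(manifest["kind"], manifest["spec"]["replicas"])
```

Helm templates produce the same structure, with values like the image tag substituted from `values.yaml`.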
Notice Period: 30 days max
Job Description:
○ Develop best practices for the team; responsible for architecture, solutions and documentation operations in order to meet the engineering department's quality and standards
○ Participate in production outages, handle complex issues and work towards resolution
○ Develop custom tools and integrations with existing tools to increase engineering productivity
Required Experience and Expertise
○ Good knowledge of Terraform; should have worked on large TF code bases.
○ Deep understanding of Terraform, with best practices & writing TF modules.
○ Hands-on experience with GCP and AWS, and knowledge of AWS services like VPC and VPC-related services (route tables, VPC endpoints, PrivateLink), EKS, S3 and IAM. Cost-aware mindset towards cloud services.
○ Deep understanding of Kernel, Networking and OS fundamentals
NOTICE PERIOD - Max - 30 days
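Large Terraform code bases like those mentioned above are often generated or validated programmatically; since Terraform also accepts a JSON variant of its configuration syntax, a resource fragment can be emitted from Python (the bucket name and tags are illustrative, not from the posting):

```python
import json

def s3_bucket_tf(name: str, tags: dict) -> dict:
    """Build a Terraform-JSON fragment for an aws_s3_bucket resource."""
    return {
        "resource": {
            "aws_s3_bucket": {
                name: {
                    "bucket": name,
                    "tags": tags,
                }
            }
        }
    }

if __name__ == "__main__":
    fragment = s3_bucket_tf("example-logs", {"team": "devops"})
    # Written to a file named e.g. main.tf.json, this is valid Terraform input.
    print(json.dumps(fragment, indent=2))
```

Generating fragments this way is one approach to keeping hundreds of similar resources consistent; tools like Terragrunt or CDKTF formalize the same idea.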
We are looking for a coding Engineering Manager based in Chennai. This is a new role within the growing Engineering function, where you will be responsible for leading a team of talented professionals, assisting them to deliver complex technical solutions across a range of teams. The successful candidate will be comfortable directing, planning and coordinating the team, as well as developing overall concepts for new and existing products and processes. Reporting into the Head of Engineering for our Chennai team, this role is integral to the successful growth of the team as well as wider company performance.
What You’ll Be Doing:
Play the role of a servant leader in helping the team to excel in the areas of ownership and help to continuously raise the bar on performance.
Perform your responsibilities through delegation, empowerment and influence aimed at delivering long term outcomes for the team and business.
Build, manage and grow one or more engineering team(s), foster Agile practices and continuous improvement, enable engineers to build, test, deliver and operate high-level software products across the company
A hands-on technical approach to management, leveraging your software engineering experience to assist product needs and build
Work with our Product, DevOps, and Data Science functions on articulating roadmaps & ensure a high quality and timely implementation to support our mission and help eliminate identity fraud
Join a mission-orientated company, solving real-world fraud and identity based challenges
Facilitate career growth and success for all team members.
What You’ll Bring:
6+ years of demonstrated deep technical design and programming skills (in individual capacity) designing and delivering enterprise-grade distributed systems in the following tooling: web applications (Java/PHP/.NET/Python) or mobile applications (SDK, Android/iOS)
2+ years prior experience managing one or more Agile engineering team(s); contributing to a track record of building complex and technical products, ideally within a start-up or fast scaling environment
Track record of having managed performance cycles of engineers of varying experience levels directly reporting to you.
Experience with building software with Agility and in conformity to security standards.
Willingness to “roll-up your sleeves and get your hands dirty” when required
Experience delivering and operating complex software solutions - understanding the end-to-end needs of what is involved within this process.

• Should have 5+ years of work experience in designing, developing, coding and unit testing web and desktop-based applications written in .NET Framework 4.x and above.
• Strong analytical skills to understand a given requirement and provide work estimates.
• Strong Object-Oriented Programming knowledge.
• Strong experience with Static Code Analyzers like Fortify.
• Should have a good understanding of web servers such as IIS and front-end technologies such as HTML and Razor-based engines.
• Strong debugging skills across the .NET front end and back end.
• Strong coding experience and thorough understanding of programming languages such as C#, VB.NET, ASP.NET, ADO.NET, jQuery, JavaScript, traditional web services, WCF, Web API and other scripting languages such as Python.
• Strong working knowledge on various design patterns such as MVC, MVVM, DDD, Repository Pattern and any custom/hybrid framework as designed by the Architects.
• Should have a strong working knowledge of Azure DevOps.
• Strong knowledge and understanding of data sharing medium using JSON, XML and other media types.
• Strong knowledge on Entity Framework (6 and above) and other ORM such as Dapper.
• Strong knowledge and programming skills in databases such as SQL Server, Oracle, MySQL and SQL Express. Additionally, nice-to-have knowledge of MS Access.
• Strong knowledge and coding experience in REST-based web services and service-oriented design patterns using WCF and other APIs.
• Should have used IDE such as Visual Studio and Visual Studio Code for Front-end development.
• 1+ years of building SPA web solutions using Angular 6/7/8, Backbone, Bootstrap
• 5+ years building HTML5-compliant pages
• 3+ years of experience using TypeScript
• 3+ years of writing automated tests using Jasmine or others
Day to Day job Duties: (what this person will do on a daily/weekly basis)
• Coordinate with and mentor other junior developers on a day-to-day basis.
• Understand the use cases/User Story, code and develop on a designed platform/pattern.
• Strict adherence to coding standards.
• Participate in self-code reviews/peer reviews and correct errors where applicable before checking the final code into the branch/code repo.
• Create code documentation wherever applicable, per guidelines set by the team.
• Create and perform unit tests wherever applicable, per guidelines set by the team.
• Provide feedback and assist in estimation planning.
• Merge code branches as and when required.
• Create and publish release documentations and application deployments as and when requested.
• Report out statuses to the leads onshore daily during the Stand-up calls.
• Additionally, update efforts on a given work item on a daily basis.
• Provide true estimates on assigned work prior to development. Also ask questions/provide comments on User Stories/work items assigned.
• Be a team player and flexible towards availability in case of any urgent issues that need immediate attention.
• Plan out vacations in advance (min. 2 weeks of advance notice).
Nice to have (not a must) experience and skills:
• Good understanding of Service Workers.
• Prior coding experience using FORTRAN.
• Experience on 3rd party tools like Spire.Pdf, PDF.Js.
• Knowledge of Rapid application development framework like DevExpress, Code on Time, HighCharts.
• Knowledge of code clean up tools like CodeMaid.
• Knowledge of Power BI and O365 Suites of applications.
• Knowledge of SQL data tools like SSIS and SSRS.
Minimum 4 years exp
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, Github, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem solving skills, Good verbal and written communication skills
- Good knowledge of Linux environment: RedHat etc.
- Good in shell scripting
- Good to have Cloud Technology : AWS, GCP and Azure

- Extensive experience in Javascript / NodeJS in the back end
- Front end frameworks such as Bootstrap, Pug, Jquery
- Experience in web frameworks like ExpressJS, Webpack
- Experience in Nginx, Redis, Apache Kafka and MQTT
- Experience with MongoDB
- Experience with Version Control Systems like Git / Mercurial
- Sound knowledge in Software engineering best practices
- Sound knowledge in RESTful API design
- Working knowledge of Automated testing tools
- Experience in maintaining production servers (Optional)
- Experience with Azure DevOps (Optional)
- Experience in digital payments or financial services industry is a plus.
- Participation in the processes of strategic project-planning meetings.
- Be involved and participate in the overall application lifecycle.
- Collaborate with External Development Teams.
- Define and communicate technical and design requirements, understanding workflows and write code as per requirements.
- Develop functional and sustainable web applications with clean codes.
- Focus on coding and debugging.
We are seeking a Security Program Manager to effectively drive Privacy & Security Programs in collaboration with cross functional teams. You will partner with engineering leadership, product management and development teams to deliver more secure products.
Roles & Responsibilities:
- Work with multiple stakeholders across various departments such as IT, Engineering, Business, Legal, Finance etc to implement controls defined in policies and processes.
- Manage projects with security and audit requirements with internal and external teams and serve as a liaison among all stakeholders.
- Managing penetration tests and security reviews for core applications and APIs.
- Identify, create and guide on privacy and security requirements considering applicable Data Protection Laws and implement them across software modules developed at Netmeds.
- Brainstorm with engineering teams to figure out how privacy and security controls can be applied to Netmeds tech stack.
- Coordination with Infra Teams and Dev Teams on DB and application hardening, standardization of server images / containerization.
- Assess vendors' security posture before onboarding them and after they qualify, review their security posture at a set frequency.
- Manage auditors and ensure compliance for ISO 27001 and other data privacy audits.
- Answer questions or resolve issues reported by the external security researchers & bug bounty hunters.
- Investigate privacy breaches.
- Educate employees on data privacy & security.
- Prioritize security requirements based on their severity of impact and product roadmap.
- Maintain a balance of security and business values across the organisation.
Required Skills:
- Web Application Security, Mobile Application Security, Web Application Firewall, DAST, SAST, Cloud Security (AWS), Docker Security, Manual Penetration Testing.
- Good hands-on experience in handling tools such as vulnerability scanners, Burp suite, patch management, web filtering & WAF.
- Familiar with cloud hosting technologies (ex. AWS, Azure). Understanding of IAM, RBAC, NACLs, and KMS.
- Experience in Log Management, Security Event Correlation, SIEM.
- Must have strong interpersonal skills and should be able to communicate complex ideas seamlessly in written and verbal communication.
Good to Have Skills:
- Online Fraud Prevention.
- Bug Bounty experience.
- Security Operations Center (SOC) management.
- Experience with Amazon AWS services (EC2, S3, VPC, RDS, Cloud watch).
- Experience / Knowledge on tools like Fortify and Nessus.
- Experience in handling logging tools on docker container images (ex. Fluentd).

We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouse and Data Lakes for an Organization. This role would closely collaborate with the Data Science team and assist the team build and deploy machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional / non-functional business requirements and fostering data-driven decision making across the organization.
- Responsible to design and develop distributed, high volume, high velocity multi-threaded event processing systems.
- Implement processes and systems to validate data, monitor data quality, ensuring production data is always accurate and available for key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Provide high operational excellence guaranteeing high availability and platform stability.
- Closely collaborate with the Data Science team and assist the team build and deploy machine learning and deep learning models on big data analytics platforms.
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/No-SQL, schema design and dimensional data modeling.
- Strong understanding of Hadoop Architecture, the HDFS ecosystem, and experience with a Big Data technology stack such as HBase, Hadoop, Hive, MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Should have strong skills in PySpark (Python & Spark). Ability to create, manage and manipulate Spark DataFrames. Expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Should have knowledge on Shell Scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on on-premise and cloud-based infrastructure.
- Having a good understanding of machine learning landscape and concepts.
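The "complex SQL for data preparation and wrangling" requirement above can be illustrated with Python's built-in sqlite3 module; the table and column names are invented, and a warehouse engine like Redshift would run the same query shape at scale:

```python
import sqlite3

# In-memory database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", 10.0), ("a", 5.0), ("b", 2.0), ("b", 1.0), ("c", 0.5)],
)

# Aggregate spend per user, keeping only users above a threshold.
rows = conn.execute(
    """
    SELECT user, SUM(amount) AS total
    FROM events
    GROUP BY user
    HAVING total > 1.0
    ORDER BY total DESC
    """
).fetchall()
print(rows)  # [('a', 15.0), ('b', 3.0)]
```

`GROUP BY` with `HAVING` filters on aggregates rather than raw rows, which is the distinction interviewers typically probe with questions like this.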
Qualifications and Experience:
Engineering and post graduate candidates, preferably in Computer Science, from premier institutions with proven work experience as a Big Data Engineer or a similar role for 3-5 years.
Certifications:
Good to have at least one of the Certifications listed here:
AZ 900 - Azure Fundamentals
DP 200, DP 201, DP 203, AZ 204 - Data Engineering
AZ 400 - Devops Certification