DevOps Jobs in Bangalore (Bengaluru)

50+ DevOps Jobs in Bangalore (Bengaluru) | DevOps Job openings in Bangalore (Bengaluru)

Apply to 50+ DevOps Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest DevOps Job opportunities across top companies like Google, Amazon & Adobe.

NeoGenCode Technologies Pvt Ltd
Shivank Bhardwaj
Posted by Shivank Bhardwaj
Remote, Bengaluru (Bangalore), Chennai, Kolkata, Pune
9 - 12 yrs
₹10L - ₹42L / yr
Java
Amazon Web Services (AWS)
CI/CD
AWS CloudFormation
+3 more

Job Description:

We are looking for a Lead Java Developer – Backend with a strong foundation in software engineering and hands-on experience in designing and building scalable, high-performance backend systems. You’ll be working within our Digital Engineering Studios on impactful and transformative projects in a fast-paced environment.


Key Responsibilities:

  • Lead and mentor backend development teams.
  • Design and develop scalable backend applications using Java and Spring Boot.
  • Ensure high standards of code quality through best practices such as SOLID principles and clean code.
  • Participate in pair programming, code reviews, and continuous integration processes.
  • Drive Agile, Lean, and Continuous Delivery practices like TDD, BDD, and CI/CD.
  • Collaborate with cross-functional teams and clients for successful delivery.


Required Skills & Experience:

  • 9–12+ years of experience in backend development (Up to 17 years may be considered).
  • Strong programming skills in Java and backend frameworks such as Spring Boot.
  • Experience in designing and building large-scale, custom-built, scalable applications.
  • Sound understanding of Object-Oriented Design (OOD) and SOLID principles.
  • Hands-on experience with Agile methodologies, TDD/BDD, CI/CD pipelines.
  • Familiarity with DevOps practices, Docker, Kubernetes, and Infrastructure as Code.
  • Good understanding of cloud technologies – especially AWS, and exposure to GCP or Azure.
  • Experience working in a product engineering environment is a plus.
  • Startup experience or working in fast-paced, high-impact teams is highly desirable.


NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Kolkata
8 - 15 yrs
₹25L - ₹45L / yr
Java
Spring Boot
Microservices
Leadership
Team leadership
+11 more

Job Title : Lead Java Developer (Backend)

Experience Required : 8 to 15 Years

Open Positions : 5

Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)

Work Mode : Open to Remote / Hybrid / Onsite

Notice Period : Immediate Joiner/30 Days or Less


About the Role :

  • We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
  • This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
  • This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.


Key Responsibilities :

  • Design, develop, and implement scalable backend systems using Java and Spring Boot.
  • Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
  • Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
  • Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
  • Guide and mentor team members, fostering technical excellence and ownership.
  • Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.

What We’re Looking For :

  • Proven experience in Java backend development (Spring Boot, Microservices).
  • 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
  • Familiarity with cloud platforms such as AWS, Azure, or GCP.
  • Good understanding of containerization and orchestration tools like Docker and Kubernetes.
  • Exposure to DevOps and Infrastructure as Code practices.
  • Strong problem-solving skills and the ability to design solutions from first principles.
  • Prior experience in product-based or startup environments is a big plus.

Ideal Candidate Profile :

  • A tech enthusiast with a passion for clean code and scalable architecture.
  • Someone who thrives in collaborative, transparent, and feedback-driven environments.
  • A leader who takes ownership beyond individual deliverables to drive overall team and project success.

Interview Process

  1. Initial Technical Screening (via platform partner)
  2. Technical Interview with Engineering Team
  3. Client-facing Final Round

Additional Info :

  • Targeting profiles from product/startup backgrounds.
  • Strong preference for candidates with under 1 month of notice period.
  • Interviews will be fast-tracked for qualified profiles.
Apprication pvt ltd

Adam patel
Posted by Adam patel
Mumbai, Bengaluru (Bangalore), Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Hyderabad
8 - 11 yrs
₹9L - ₹13L / yr
DevOps
End-user training
Backend

Key Responsibilities:

  • Lead the development and delivery of high-quality, scalable web/mobile applications.
  • Design, build, manage, and mentor a team of developers across backend, frontend, and DevOps.
  • Collaborate with cross-functional teams including Product, Design, and QA to ship fast and effectively.
  • Integrate third-party APIs and financial systems (e.g., payment gateways, fraud detection, etc.).
  • Troubleshoot production issues, optimize performance, and implement robust logging & monitoring.
  • Define and enforce best practices in planning, coding, architecture, and agile development.
  • Identify and implement the right tools, frameworks, and technologies.
  • Own the system architecture and make key decisions on performance, security, and scalability.
  • Continuously monitor tech performance and drive improvements.

Requirements:

  • 8+ years of software development experience, with at least 2+ years in a leadership role.
  • Proven track record in managing Development teams and delivering consumer-facing products.
  • Strong knowledge of backend technologies (Node.js, Java, Python, etc.) and frontend frameworks (React, Angular, etc.).
  • Experience in cloud infrastructure (AWS/GCP), CI/CD pipelines, and containerization (Docker, Kubernetes).
  • Deep understanding of system design, REST APIs, microservices architecture, and database management.
  • Excellent communication and stakeholder management skills.
  • Experience with any e-commerce (web or mobile) application or payment transaction environment is a plus.


NeoGenCode Technologies Pvt Ltd
Akshay Patil
Posted by Akshay Patil
Bengaluru (Bangalore), Pune, Chennai
10 - 20 yrs
₹30L - ₹60L / yr
Java
Spring Boot
Microservices
Apache Kafka
Amazon Web Services (AWS)
+8 more

📍 Position : Java Architect

📅 Experience : 10 to 15 Years

🧑‍💼 Open Positions : 3+

📍 Work Location : Bangalore, Pune, Chennai

💼 Work Mode : Hybrid

📅 Notice Period : Immediate joiners preferred; up to 1 month maximum

🔧 Core Responsibilities :

  • Lead architecture design and development for scalable enterprise-level applications.
  • Own and manage all aspects of technical development and delivery.
  • Define and enforce best coding practices, architectural guidelines, and development standards.
  • Plan and estimate the end-to-end technical scope of projects.
  • Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
  • Mentor and lead individual contributors and small development teams.
  • Collaborate with cross-functional teams, including DevOps, Product, and QA.
  • Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.

🛠️ Required Technical Skills :

  • Strong hands-on expertise in Java, Spring Boot, Microservices architecture
  • Experience with Kafka or similar messaging/event streaming platforms
  • Proficiency in cloud platforms: AWS and Azure (must-have)
  • Exposure to frontend technologies (nice-to-have)
  • Solid understanding of HLD, system architecture, and design patterns
  • Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
  • Agile/Lean development, Pair Programming, and Continuous Integration practices
  • Polyglot mindset is a plus (Scala, Golang, Python, etc.)

🚀 Ideal Candidate Profile :

  • Currently working in a product-based environment
  • Already functioning as an Architect or Principal Engineer
  • Proven track record as an Individual Contributor (IC)
  • Strong engineering fundamentals with a passion for scalable software systems
  • No compromise on code quality, craftsmanship, and best practices

🧪 Interview Process :

  1. Round 1: Technical pairing round
  2. Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
  3. Final Round: HR and offer discussion
Fractal Analytics

Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Hyderabad, Gurugram, Noida, Pune, Mumbai, Chennai, Coimbatore
4yrs+
Best in industry
Generative AI
Machine Learning (ML)
LLMOps
Large Language Models (LLM) tuning
Open-source LLMs
+15 more

Role description:

You will be building curated, enterprise-grade solutions for GenAI application deployment at production scale for clients. The role requires solid, hands-on development and engineering skills across the GenAI application lifecycle: data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on premise. As this space evolves very rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. A strong ML background combined with engineering skills is highly preferred for this LLMOps role.
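
As a rough illustration of the "simple RAG" part of this role, the sketch below retrieves the most relevant context chunks by cosine similarity and assembles a grounded prompt. It is a minimal outline, not any team's actual stack: `embed()` is a placeholder for whatever embedding model is used, and the corpus, query, and prompt template are invented for the example.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real pipeline would call an embedding model (open-source or hosted).
    # Here we fake a deterministic vector so the sketch runs on its own.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

def top_k_chunks(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Rank corpus chunks by cosine similarity to the query embedding."""
    q = embed(query)
    scored = []
    for chunk in chunks:
        c = embed(chunk)
        score = float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c)))
        scored.append((score, chunk))
    scored.sort(reverse=True)
    return [chunk for _, chunk in scored[:k]]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt; the LLM call itself is out of scope here."""
    joined = "\n---\n".join(context)
    return f"Answer using only the context below.\n\nContext:\n{joined}\n\nQuestion: {query}"

if __name__ == "__main__":
    corpus = ["Invoices are archived after 90 days.", "Refunds take 5-7 business days."]
    question = "How long do refunds take?"
    print(build_prompt(question, top_k_chunks(question, corpus)))
```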


Required skills:

  • 4-8 years of experience in working on ML projects that includes business requirement gathering, model development, training, deployment at scale and monitoring model performance for production use cases
  • Strong knowledge of Python, NLP, data engineering, Langchain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
  • Should have worked on proprietary and open-source large language models
  • Experience with LLM fine-tuning and creating distilled models from hosted LLMs
  • Building data pipelines for model training
  • Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
  • Experience in GenAI application deployment on cloud and on-premise at scale for production
  • Experience in creating CI/CD pipelines
  • Working knowledge on Kubernetes
  • Experience with at least one cloud (AWS / GCP / Azure) to deploy AI services
  • Experience in creating workable prototypes using agentic AI frameworks like CrewAI, Taskweaver, AutoGen
  • Experience in lightweight UI development using Streamlit or Chainlit (optional)
  • Desired: experience with open-source tools for ML development, deployment, observability and integration
  • Background in DevOps and MLOps will be a plus
  • Experience working on collaborative code versioning tools like GitHub/GitLab
  • Team player with good communication and presentation skills
Zenius IT Services Pvt Ltd

Sunita Pradhan
Posted by Sunita Pradhan
Bengaluru (Bangalore), Chennai
5 - 10 yrs
₹10L - ₹15L / yr
Snowflake
SQL
Data integration tools
ETL/ELT Pipelines
SQL Queries
+5 more

Job Summary


We are seeking a skilled Snowflake Developer to design, develop, migrate, and optimize Snowflake-based data solutions. The ideal candidate will have hands-on experience with Snowflake, SQL, and data integration tools to build scalable and high-performance data pipelines that support business analytics and decision-making.


Key Responsibilities:


Develop and implement Snowflake data warehouse solutions based on business and technical requirements.

Design, develop, and optimize ETL/ELT pipelines for efficient data ingestion, transformation, and processing.

Write and optimize complex SQL queries for data retrieval, performance enhancement, and storage optimization.

Collaborate with data architects and analysts to create and refine efficient data models.

Monitor and fine-tune Snowflake query performance and storage optimization strategies for large-scale data workloads.

Ensure data security, governance, and access control policies are implemented following best practices.

Integrate Snowflake with various cloud platforms (AWS, Azure, GCP) and third-party tools.

Troubleshoot and resolve performance issues within the Snowflake environment to ensure high availability and scalability.

Stay updated on Snowflake best practices, emerging technologies, and industry trends to drive continuous improvement.
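
To make the ETL/ELT and query-optimization responsibilities above concrete, here is a minimal, hedged sketch using the Snowflake Python connector to load staged files and run one transformation step. The account, stage, table, and column names are invented for illustration only.

```python
import snowflake.connector

# Connection parameters are placeholders; real values would come from a secrets store.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)

try:
    cur = conn.cursor()
    # Load staged CSV files into a raw table (stage and table names are hypothetical).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
    # Simple ELT transform: aggregate the raw data into a reporting table.
    cur.execute("""
        CREATE OR REPLACE TABLE MART.DAILY_REVENUE AS
        SELECT order_date, SUM(amount) AS revenue
        FROM RAW.ORDERS
        GROUP BY order_date
    """)
finally:
    conn.close()
```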


Qualifications:

Education: Bachelor’s or master’s degree in computer science, Information Systems, or a related field.


Experience:


6+ years of experience in data engineering, ETL development, or similar roles.

3+ years of hands-on experience in Snowflake development.


Technical Skills:


Strong proficiency in SQL, Snowflake Schema Design, and Performance Optimization.

Experience with ETL/ELT tools like dbt, Talend, Matillion, or Informatica.

Proficiency in Python, Java, or Scala for data processing.

Familiarity with cloud platforms (AWS, Azure, GCP) and integration with Snowflake.

Experience with data governance, security, and compliance best practices.

Strong analytical, troubleshooting, and problem-solving skills.

Communication: Excellent communication and teamwork abilities, with a focus on collaboration across teams.


Preferred Skills:


Snowflake Certification (e.g., SnowPro Core or Advanced).

Experience with real-time data streaming using tools like Kafka or Apache Spark.

Hands-on experience with CI/CD pipelines and DevOps practices in data environments.

Familiarity with BI tools like Tableau, Power BI, or Looker for data visualization and reporting.

Zenius IT Services Pvt Ltd

Sunita Pradhan
Posted by Sunita Pradhan
Bengaluru (Bangalore), Chennai
10 - 18 yrs
₹10L - ₹30L / yr
ETL/ELT Tools
Data Transformation Tool (DBT)
Talend
Informatica
Matillion
+5 more

Snowflake Architect


Job Summary: We are looking for a Snowflake Architect to lead the design, architecture, and migration of customer data into the Snowflake DB. This role will focus on creating a consolidated platform for analytics, driving data modeling and migration efforts while ensuring high-performance and scalable data solutions. The ideal candidate should have extensive experience in Snowflake, cloud data warehousing, data engineering, and best practices for optimizing large-scale architectures.


Key Responsibilities:


Architect and implement Snowflake data warehouse solutions based on technical and business requirements.

Define and enforce best practices for performance, security, scalability, and cost optimization in Snowflake.

Design and build ETL/ELT pipelines for data ingestion and transformation.

Collaborate with stakeholders to understand data requirements and create efficient data models.

Optimize query performance and storage strategies for large-scale data workloads.

Work with data engineers, analysts, and business teams to ensure seamless data access.

Implement data governance, access controls, and security best practices.

Troubleshoot and resolve performance bottlenecks in Snowflake.

Stay updated on Snowflake features and industry trends to drive innovation.
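
A small, hedged illustration of the governance and performance-tuning duties listed above: the sketch below uses the Snowflake Python connector to grant read-only access to an analyst role and to pull the slowest recent queries from QUERY_HISTORY for review. Role, schema, database, and warehouse names are hypothetical.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="admin_user", password="***",
    role="SYSADMIN", database="ANALYTICS",
)
cur = conn.cursor()

# Governance: read-only access for analysts on the reporting schema (names are examples).
cur.execute("GRANT USAGE ON SCHEMA ANALYTICS.MART TO ROLE ANALYST_RO")
cur.execute("GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.MART TO ROLE ANALYST_RO")

# Performance: surface the ten slowest queries from the last 24 hours for tuning.
cur.execute("""
    SELECT query_text, total_elapsed_time / 1000 AS seconds, warehouse_name
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
        END_TIME_RANGE_START => DATEADD('hour', -24, CURRENT_TIMESTAMP())))
    ORDER BY total_elapsed_time DESC
    LIMIT 10
""")
for query_text, seconds, warehouse in cur.fetchall():
    print(f"{float(seconds):>8.1f}s  {warehouse}  {query_text[:80]}")
conn.close()
```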


Qualifications:


Bachelor’s or master’s degree in computer science, Information Systems, or related field.

10+ years of experience in data engineering or architecture.

5+ years of hands-on experience with Snowflake architecture, administration, and development.

Expertise in SQL, Snowflake Schema Design, and Performance Optimization.

Experience with ETL/ELT tools such as dbt, Talend, Matillion, or Informatica.

Proficiency in Python, Java, or Scala for data processing.

Knowledge of cloud platforms (AWS, Azure, GCP) and Snowflake integration.

Experience with data governance, security, and compliance best practices.

Strong problem-solving skills and the ability to work in a fast-paced environment.

Excellent communication and stakeholder management skills.


Preferred Skills:


Experience in the customer engagement or contact center industry.

Familiarity with DevOps practices, containerization (Docker, Kubernetes), and infrastructure-as-code.

Knowledge of distributed systems, performance tuning, and scalability.

Familiarity with security best practices and secure coding standards.

Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8yrs+
Upto ₹50L / yr (Varies)
DevOps
CI/CD
Git
Kubernetes
Ansible
+7 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance. 

Key Roles & Responsibilities:

  • Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
  • Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
  • Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
  • Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
  • Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
  • Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
  • Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
  • Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability. 
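
As a hedged illustration of the canary-release and observability responsibilities above, the sketch below gates a promotion step on an error-rate query against a Prometheus HTTP API. The Prometheus URL, metric names, and threshold are assumptions for the example, not a prescribed setup.

```python
import json
import sys
import urllib.parse
import urllib.request

# All of these values are illustrative; a real pipeline would inject them as parameters.
PROMETHEUS_URL = "http://prometheus.internal:9090"
ERROR_RATE_QUERY = (
    'sum(rate(http_requests_total{job="checkout",status=~"5.."}[5m]))'
    ' / sum(rate(http_requests_total{job="checkout"}[5m]))'
)
MAX_ERROR_RATE = 0.01  # 1% errors allowed before the canary is rolled back

def current_error_rate() -> float:
    """Run an instant query against Prometheus and return the scalar result."""
    params = urllib.parse.urlencode({"query": ERROR_RATE_QUERY})
    with urllib.request.urlopen(f"{PROMETHEUS_URL}/api/v1/query?{params}") as resp:
        payload = json.load(resp)
    result = payload["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    rate = current_error_rate()
    if rate > MAX_ERROR_RATE:
        print(f"Canary error rate {rate:.2%} above threshold, rolling back")
        sys.exit(1)  # non-zero exit fails the CI/CD stage and triggers rollback
    print(f"Canary healthy at {rate:.2%}, promoting")
```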


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
  • Strong expertise in CI/CD pipelines, version control (Git), and release automation.
  •  Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
  • Proficiency in Terraform, Ansible for infrastructure automation.
  • Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
  • Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
  • Strong scripting and automation skills in Python, Bash, or Go.


Preferred Qualifications  

  • Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
  • Exposure to serverless architectures and event-driven workflows.
  • Contributions to open-source DevOps projects. 
IT Solutions

Agency job
via HR Central by Melrose Savia Pinto
Bengaluru (Bangalore)
4 - 8 yrs
₹15L - ₹28L / yr
DevOps
Amazon Web Services (AWS)
Python
Deployment management
Jenkins
+4 more

Location: Malleshwaram/MG Road

Work: Initially Onsite and later Hybrid



We are committed to becoming a true DevOps house and want your help. The role will require close liaison with development and test teams to increase the effectiveness of current dev processes. Participation in out-of-hours emergency support on a rotational basis will be required. You will be shaping the way that we use our DevOps tools and innovating to deliver business value and improve the cadence of the entire dev team.


Required Skills:

• Good knowledge of the Amazon Web Services suite (EC2, ECS, Load Balancing, VPC, S3, RDS, Lambda, CloudWatch, IAM, etc.)

• Hands-on knowledge of container orchestration tools – must have: AWS ECS; good to have: AWS EKS

• Good knowledge of creating and maintaining infrastructure as code using Terraform

• Solid experience with CI/CD tools like Jenkins, Git and Ansible

• Working experience supporting microservices (deploying, maintaining and monitoring Java web-based production applications using Docker containers)

• Strong knowledge of debugging production issues across the services and technology stack, and of application monitoring (we use Splunk & CloudWatch); a short example follows this list

• Experience with software build tools (Maven and Node)

• Experience with scripting and automation languages (Bash, Groovy, JavaScript, Python)

• Experience with Linux administration and CVE scans – Amazon Linux, Ubuntu

• 4+ years as an AWS DevOps Engineer
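
The sketch below illustrates the kind of monitoring glue this role involves: it checks an ECS service's running-versus-desired task count with boto3 and publishes the gap as a custom CloudWatch metric. Cluster, service, and metric names are placeholders, and standard AWS credentials are assumed to be configured in the environment.

```python
import boto3

# Placeholder names; real values would come from configuration.
CLUSTER = "prod-cluster"
SERVICE = "orders-api"

ecs = boto3.client("ecs")
cloudwatch = boto3.client("cloudwatch")

# Compare desired vs running task counts for the service.
resp = ecs.describe_services(cluster=CLUSTER, services=[SERVICE])
svc = resp["services"][0]
missing = svc["desiredCount"] - svc["runningCount"]

# Publish the gap as a custom metric so alarms and dashboards can pick it up.
cloudwatch.put_metric_data(
    Namespace="Custom/ECS",
    MetricData=[{
        "MetricName": "MissingTasks",
        "Dimensions": [{"Name": "Service", "Value": SERVICE}],
        "Value": float(missing),
        "Unit": "Count",
    }],
)
print(f"{SERVICE}: desired={svc['desiredCount']} running={svc['runningCount']}")
```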


Optional skills:


• Oracle/SQL database maintenance experience


• Elastic Search


• Serverless/container based approach


• Automated testing of infrastructure deployments


• Experience of performance testing & JVM tuning


• Experience of a high-volume distributed eCommerce environment


• Experience working closely with Agile development teams


• Familiarity with load testing tools & process


• Experience with nginx, tomcat and apache


• Experience with Cloudflare


Personal attributes

The successful candidate will be comfortable working autonomously and independently. They will be keen to bring an entire team to the next level of delivering business value, and will take a proactive approach to problem-solving.
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore), Pune
8yrs+
Upto ₹50L / yr (Varies)
Java
Spring Boot
Amazon Web Services (AWS)
Windows Azure
DevOps
+2 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

We are seeking an experienced and highly skilled Technical Lead with a strong background in Java, SaaS architectures, firewalls and cybersecurity products, including SIEM and SOAR platforms. The ideal candidate will lead technical initiatives, design and implement scalable systems, and drive best practices across the engineering team. This role requires deep technical expertise, leadership abilities, and a passion for building secure and high-performing security solutions.

Key Roles & Responsibilities:

  • Lead the design and development of scalable and secure software solutions using Java.
  • Architect and build SaaS-based cybersecurity applications, ensuring high availability, performance, and reliability.
  • Provide technical leadership, mentoring, and guidance to the development team.
  • Ensure best practices in secure coding, threat modeling, and compliance with industry standards.
  • Collaborate with cross-functional teams including Product Management, Security, and DevOps to deliver high-quality security solutions.
  • Design and implement security analytics, automation workflows and ITSM integrations.
  •  Drive continuous improvements in engineering processes, tools, and technologies.
  • Troubleshoot complex technical issues and lead incident response for critical production systems.


Basic Qualifications:

  • A bachelor’s or master’s degree in computer science, electronics engineering or a related field
  • 8-10 years of software development experience, with expertise in Java.
  • Strong background in building SaaS applications with cloud-native architectures (AWS, GCP, or Azure).
  • In-depth understanding of microservices architecture, APIs, and distributed systems.
  • Experience with containerization and orchestration tools like Docker and Kubernetes.
  • Knowledge of DevSecOps principles, CI/CD pipelines, and infrastructure as code (Terraform, Ansible, etc.).
  • Strong problem-solving skills and ability to work in an agile, fast-paced environment.
  • Excellent communication and leadership skills, with a track record of mentoring engineers.

 

Preferred Qualifications:

  • Experience with cybersecurity solutions, including SIEM (e.g., Splunk, ELK, IBM QRadar) and SOAR (e.g., Palo Alto XSOAR, Swimlane).
  • Knowledge of zero-trust security models and secure API development.
  • Hands-on experience with machine learning or AI-driven security analytics.
Bengaluru (Bangalore)
8yrs+
Best in industry
Windows Azure
Microsoft Windows Azure
DevOps
CI/CD
Jenkins
+3 more

About SAP Fioneer

Innovation is at the core of SAP Fioneer. We were spun out of SAP to drive agility, innovation, and delivery in financial services. With a foundation in cutting-edge technology and deep industry expertise, we elevate financial services through digital business innovation and cloud technology.

A rapidly growing global company with a lean and innovative team, SAP Fioneer offers an environment where you can accelerate your future.


Product Technology Stack

  • Languages: PowerShell, MgGraph, Git
  • Storage & Databases: Azure Storage, Azure Databases


Role Overview

As a Senior Cloud Solutions Architect / DevOps Engineer, you will be part of our cross-functional IT team in Bangalore, designing, implementing, and managing sophisticated cloud solutions on Microsoft Azure.


Key Responsibilities

Architecture & Design

  • Design and document architecture blueprints and solution patterns for Azure-based applications.
  • Implement hierarchical organizational governance using Azure Management Groups.
  • Architect modern authentication frameworks using Azure AD/EntraID, SAML, OpenID Connect, and Azure AD B2C.

Development & Implementation

  • Build closed-loop, data-driven DevOps architectures using Azure Insights.
  • Apply code-driven administration practices with PowerShell, MgGraph, and Git.
  • Deliver solutions using Infrastructure as Code (IaC), CI/CD pipelines, GitHub Actions, and Azure DevOps.
  • Develop IAM standards with RBAC and EntraID.

Leadership & Collaboration

  • Provide technical guidance and mentorship to a cross-functional Scrum team operating in sprints with a managed backlog.
  • Support the delivery of SaaS solutions on Azure.


Required Qualifications & Skills

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in cloud solutions architecture and DevOps engineering.
  • Extensive expertise in Azure services, core web technologies, and security best practices.
  • Hands-on experience with IaC, CI/CD, Git, and pipeline automation tools.
  • Strong understanding of IAM, security best practices, and governance models in Azure.
  • Experience working in Scrum-based environments with backlog management.
  • Bonus: Experience with Jenkins, Terraform, Docker, or Kubernetes.


Benefits

  • Work with some of the brightest minds in the industry on innovative projects shaping the financial sector.
  • Flexible work environment encouraging creativity and innovation.
  • Pension plans, private medical insurance, wellness cover, and additional perks like celebration rewards and a meal program.


Diversity & Inclusion

At SAP Fioneer, we believe in the power of innovation that every employee brings and are committed to fostering diversity in the workplace.

Xebia IT Architects

Vijay S
Posted by Vijay S
Bengaluru (Bangalore), Pune, Hyderabad, Chennai, Jaipur, Bhopal, Gurugram
5 - 11 yrs
₹30L - ₹40L / yr
Scala
Microservices
CI/CD
DevOps
Amazon Web Services (AWS)
+2 more

Dear,


We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.


📌 Job Details:

  • Role: Senior Backend Engineer
  •  Shift: 1 PM – 10 PM
  • Work Mode: Hybrid (3 days a week) across Xebia locations
  • Notice Period: Immediate joiners or up to 30 days


🔹 Job Responsibilities:


✅ Design and develop scalable, reliable, and maintainable backend solutions

✅ Work on event-driven microservices architecture

✅ Implement REST APIs and optimize backend performance

✅ Collaborate with cross-functional teams to drive innovation

✅ Mentor junior and mid-level engineers


🔹 Required Skills:


✔ Backend Development: Scala (preferred), Java, Kotlin

✔ Cloud: AWS or GCP

✔ Databases: MySQL, NoSQL (Cassandra)

✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code

✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch

✔ Agile Methodologies: Scrum, Kanban


⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.


Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.


Best regards,

Vijay S

Assistant Manager - TAG

https://www.linkedin.com/in/vijay-selvarajan/

Client located in Bangalore locations

Agency job
Bengaluru (Bangalore)
4 - 8 yrs
₹15L - ₹16L / yr
Machine Learning (ML)
Generative AI
DevOps
PyTorch
TensorFlow
+2 more

Job Title: AI/ML Engineer (GenAI & System Modernization)

Experience: 4 to 8 years

Work Mode: Hybrid

Location: Bangalore

Job Overview:

We are seeking AI engineers passionate about Generative AI and AI-assisted modernization to build cutting-edge solutions. The candidate will work on in-house GenAI applications, support AI adoption across the organization, collaborate with vendors for POCs, and contribute to legacy system modernization using AI-driven automation.

Key Responsibilities:

  • Design & develop in-house GenAI applications for internal use cases.
  • Collaborate with vendors to evaluate and implement POCs for complex AI solutions.
  • Work on AI-powered tools for SDLC modernization and code migration (legacy to modern tech).
  • Provide technical support, training, and AI adoption strategies for internal users & teams.
  • Assist in integrating AI solutions into software development processes.

Must Have:

  • Bachelor’s / Master’s degree in Computer Science / Computer Engineering / Computer Applications / Information Technology OR AI/ML-related field.
  • Relevant certification in AI/ML is an added advantage.
  • Minimum of 2 successful AI/ML POCs or production deployments.
  • Prior experience in AI-powered automation, AI-based DevOps, or AI-assisted coding.
  • Proven track record of teamwork and successful project delivery.
  • Strong analytical and problem-solving skills with a continuous learning mindset.
  • A positive, can-do attitude with attention to detail.

AI/ML Expertise:

  • Strong hands-on Python programming experience.
  • Experience with Generative AI models and LLMs/SLMs (worked on real projects or POCs).
  • Hands-on experience in Machine Learning & Deep Learning.
  • Experience with AI/ML frameworks (e.g., PyTorch, TensorFlow, Hugging Face).
  • Understanding of MLOps pipelines and AI model deployment.
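
For candidates wondering what "hands-on" means here, the sketch below is a minimal PyTorch training loop on synthetic data: the kind of small, self-contained exercise the ML/DL expectation above implies. The architecture and data are toy examples, not anything specific to the team's projects.

```python
import torch
from torch import nn

# Toy binary-classification data: 256 samples, 10 features.
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X) > 0).float() == y).float().mean().item()
print(f"final loss={loss.item():.3f}, train accuracy={accuracy:.2%}")
```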

System Modernization & Enterprise Tech Familiarity:

  • Basic understanding of Java, Spring, React, Kotlin, and Groovy.
  • Experience and willingness to work on AI-driven migration projects (e.g., PL/SQL to Emery, JS to Groovy DSL).
  • Experience with code quality, AI-assisted code refactoring, and testing frameworks.

Enterprise Integration & AI Adoption:

  • Ability to integrate AI solutions into enterprise workflows.
  • Experience with API development & AI model integration in applications.
  • Familiarity with version control & collaboration tools (Git, CI/CD pipelines).
Gruve
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Best in industry
Amazon Web Services (AWS)
Microsoft Windows Azure
Terraform
Ansible
AWS CloudFormation
+6 more

About the Company:

Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

 

Why Gruve:

At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.

Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

 

Position summary:

Our Professional Services team seeks a Cloud Engineer with a focus on Public Clouds for professional services engagements. In this role, the candidate will be ensuring the success of our engagements by providing deployment, configuration, and operationalization of Cloud infrastructure as well as various other cloud technologies such as On-Prem, Openshift, and Hybrid Environments.

A successful candidate for this position requires a good understanding of public cloud systems (AWS, Azure) as well as working knowledge of systems technologies, common enterprise software (Linux, Windows, Active Directory), cloud technologies (Kubernetes, VMware ESXi), and a good understanding of cloud automation (Ansible, CDK, Terraform, CloudFormation); a short CDK sketch appears after the responsibilities list below. The ideal candidate has industry experience and is confident working in a cross-functional team environment that is global in reach.

Key Roles & Responsibilities:

  • Public Cloud: Lead Public Cloud deployments for our Cloud Engineering customers including setup, automation, configuration, documentation and troubleshooting. Redhat Openshift on AWS/Azure experience is preferred.
  • Automation: Develop and maintain automated testing systems to ensure uniform and reproducible deployments for common infrastructure elements using tools like Ansible, Terraform, and CDK.
  • Support: In this role the candidate may need to support the environment as part of the engagement through hand-off. Requisite knowledge of operations will be required
  • Documentation: The role can require significant documentation of the deployment and steps to maintain the system. The Cloud Engineer will also be responsible for all required documentation needed as required for customer hand-off.
  • Customer Skills: This position is customer facing and effective communication and customer service is essential. 
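
As a hedged sketch of the cloud-automation expectation above, here is a minimal AWS CDK stack in Python that declares a versioned, encrypted S3 bucket as code. Stack and bucket names are made up, and this assumes aws-cdk-lib v2 with bootstrapped AWS credentials; it is meant as a flavour of IaC work, not a prescribed template.

```python
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3
from constructs import Construct

class ArtifactBucketStack(cdk.Stack):
    """Example stack: one private, versioned, encrypted bucket (names are illustrative)."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self, "ArtifactBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

app = cdk.App()
ArtifactBucketStack(app, "ArtifactBucketStack")
app.synth()  # `cdk deploy` would provision the declared resources
```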

 

Basic Qualifications:

  • Bachelor's or Master's degree in computer programming or quality assurance.
  • 5-8 years as an IT Engineer or DevOps Engineer with automation skills and AWS or Azure experience, preferably in Professional Services.
  • Proficiency in enterprise tools (Grafana, Splunk etc.), software (Windows, Linux, Databases, Active Directory, VMware ESXi, Kubernetes) and techniques (knowledge of best practices).
  • Demonstrable proficiency with automation packages (Ansible, Git, CDK, Terraform, CloudFormation, Python)


Preferred Qualifications  

  • Exceptional communication and interpersonal skills.
  • Strong ownership abilities, attention to detail.
Client based at Bangalore location.

Agency job
Bengaluru (Bangalore)
5 - 12 yrs
₹15L - ₹25L / yr
Amazon Web Services (AWS)
IaC
Linux/Unix
CircleCI
DevOps
+3 more

Job Overview:

We are seeking a highly skilled AWS Cloud SRE and Operations Engineer to join our cloud infrastructure team. The ideal candidate will be responsible for ensuring the reliability, availability, and performance of our AWS cloud infrastructure while automating and optimizing operational processes. You will play a key role in maintaining robust cloud environments, monitoring systems, troubleshooting issues, and enhancing the overall scalability and security of cloud-based applications.

Responsibilities:

  1. AWS Infrastructure Management: Design, implement, and manage scalable, secure, and reliable AWS infrastructure to support cloud-based applications.
  2. Site Reliability Engineering (SRE): Ensure the high availability, performance, and scalability of cloud environments through effective monitoring, automation, and incident response.
  3. Operations & Monitoring: Implement and maintain comprehensive monitoring solutions (e.g., CloudWatch, Datadog, Prometheus) to ensure visibility into the health and performance of applications and infrastructure.
  4. Automation: Automate operational tasks and processes, including provisioning, configuration management, deployment, and scaling of cloud resources using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
  5. Incident Response & Troubleshooting: Manage and respond to incidents, troubleshoot performance bottlenecks, and conduct root cause analysis to ensure system reliability and uptime.
  6. CI/CD Pipeline Support: Collaborate with development teams to optimize Continuous Integration/Continuous Deployment (CI/CD) pipelines and ensure smooth, automated deployments and rollbacks.
  7. Security & Compliance: Implement best practices for cloud security, including identity and access management (IAM), network security, encryption, and compliance with industry regulations.
  8. Backup & Disaster Recovery: Implement and manage backup strategies, disaster recovery plans, and high-availability architectures to ensure data integrity and system continuity.
  9. Performance Optimization: Continuously analyze and improve the performance of cloud resources, applications, and network traffic to optimize cost, speed, and availability.
  10. Capacity Planning: Monitor and plan for future capacity needs, scaling resources based on demand and application performance requirements.
  11. Documentation: Create and maintain detailed technical documentation, including architecture diagrams, operational procedures, and incident reports.
  12. Collaboration: Work closely with cross-functional teams, including development, QA, and security, to ensure smooth operations, deployments, and troubleshooting processes.
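
As one concrete, hedged example of the automation and backup responsibilities above (items 4 and 8), the sketch below snapshots every EBS volume carrying a Backup=true tag. The tag convention and description text are assumptions; credentials and region are expected to come from the standard AWS environment.

```python
import datetime
import boto3

ec2 = boto3.client("ec2")

# Find volumes opted in to backups via a (hypothetical) Backup=true tag.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
)["Volumes"]

stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d")
for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"Scheduled backup {stamp}",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "CreatedBy", "Value": "sre-backup-job"}],
        }],
    )
    print(f"Snapshot {snap['SnapshotId']} started for {vol['VolumeId']}")
```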

Qualifications:

  1. Education: Bachelor’s degree in Computer Science, Information Technology, or related fields.
  2. Experience:
  3. 3-5 years of experience as a Cloud SRE, Operations Engineer, or DevOps Engineer with a strong focus on AWS services, including experience managing large-scale, distributed cloud environments.
  4. Technical Skills:
  5. Deep understanding of AWS services (EC2, S3, RDS, Lambda, ECS, EKS, VPC, Route 53, IAM, etc.).
  6. Experience with Infrastructure as Code (IaC) tools such as Terraform, AWS CloudFormation, or Ansible.
  7. Strong knowledge of Linux/Unix system administration and shell scripting.
  8. Proficiency in automation using Python, Bash, or other scripting languages.
  9. Experience with monitoring and logging tools (e.g., AWS CloudWatch, Datadog, ELK stack, Prometheus, Grafana).
  10. Hands-on experience with CI/CD tools (Jenkins, GitLab CI, CircleCI, etc.).
  11. Solid understanding of networking concepts, including DNS, load balancing, VPN, and firewalls.
  12. Problem-Solving: Strong troubleshooting and problem-solving skills for addressing cloud infrastructure and application issues.
  13. Communication: Excellent verbal and written communication skills, with the ability to collaborate effectively with teams and stakeholders.

Preferred:

  1. AWS certification (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.).
  2. Experience with containerization and orchestration tools like Docker and Kubernetes.
  3. Knowledge of security best practices and regulatory compliance (e.g., GDPR, HIPAA).
  4. Familiarity with GitOps, service mesh technologies, and serverless architectures.
  5. Understanding of Agile methodologies and working in a DevOps environment.


Gipfel & Schnell Consultings Pvt Ltd
Bengaluru (Bangalore)
5 - 12 yrs
Best in industry
DevOps
Azure
Terraform
PowerShell
Apache Kafka
+1 more

Mandatory Skills:


  • AZ-104 (Azure Administrator) experience
  • CI/CD migration expertise
  • Proficiency in Windows deployment and support
  • Infrastructure as Code (IaC) in Terraform
  • Automation using PowerShell
  • Understanding of SDLC for C# applications (build/ship/run strategy)
  • Apache Kafka experience
  • Azure web app


Good to Have Skills:


  • AZ-400 (Azure DevOps Engineer Expert)
  • AZ-700 Designing and Implementing Microsoft Azure Networking Solutions
  • Apache Pulsar
  • Windows containers
  • Active Directory and DNS
  • SAST and DAST tool understanding
  • MSSQL database
  • Postgres database
  • Azure security
Client based at Bangalore location.

Agency job
Bengaluru (Bangalore)
4 - 7 yrs
₹25L - ₹30L / yr
DevOps
Amazon Web Services (AWS)
Docker
Kubernetes
Bitbucket
+14 more

 

DevOps Engineer

Bangalore / Full-Time

 

Job Description

We build Enterprise-Scale AI/ML-powered products for Manufacturing, Sustainability and Supply Chain. We are looking for a DevOps Engineer to help us deploy product updates, identify production issues, and implement integrations that meet customer needs. By joining our team, you will take part in various projects that involve working with clients to successfully implement and integrate products, software, or systems into their existing infrastructure or cloud.

 

What You'll Do

•      Collaborate with stakeholders to gather and analyze customer needs, ensuring that DevOps strategies align with business objectives.

•      Deploy and manage various development, testing, and automation tools, alongside robust IT infrastructure to support our software lifecycle.

•      Configure and maintain the necessary tools and infrastructure to support continuous integration, continuous deployment (CI/CD), and other DevOps processes.

•      Establish and document processes for development, testing, release, updates, and support to streamline DevOps operations.

•      Manage the deployment of software updates and bug fixes, ensuring minimal downtime and seamless integration.

•      Develop and implement tools aimed at minimizing errors and enhancing the overall customer experience.

•      Promote and develop automated solutions wherever possible to increase efficiency and reduce manual intervention.

•      Evaluate, select, and deploy appropriate CI/CD tools that best fit the project requirements and organizational goals.

•      Drive ongoing enhancements by building and maintaining robust CI/CD pipelines, ensuring seamless integration, development, and deployment cycles.

•      Integrate requested features and services as per customer requirements to enhance product functionality.

•      Conduct thorough root cause analyses for production issues, implementing effective solutions to prevent recurrence.

•      Investigate and resolve technical problems promptly to maintain system stability and performance.

•      Offer expert technical support, including GitOps for automated Kubernetes deployments, Jenkins pipeline automation, VPS setup, and more, ensuring smooth and reliable operations.
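
The sketch below, a minimal example rather than this team's actual tooling, shows the kind of GitOps-adjacent verification step referenced in the last point above: after a sync, it uses the official Kubernetes Python client to confirm every Deployment in a namespace has its desired replicas ready. The namespace name is hypothetical and a kubeconfig is assumed to be available.

```python
import sys
from kubernetes import client, config

NAMESPACE = "orders"  # placeholder namespace

def unready_deployments(namespace: str) -> list[str]:
    """Return deployments whose ready replica count lags the spec."""
    config.load_kube_config()  # inside a cluster, use load_incluster_config() instead
    apps = client.AppsV1Api()
    lagging = []
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            lagging.append(f"{dep.metadata.name} ({ready}/{desired} ready)")
    return lagging

if __name__ == "__main__":
    problems = unready_deployments(NAMESPACE)
    if problems:
        print("Deployments not fully rolled out:", ", ".join(problems))
        sys.exit(1)  # fail the pipeline step so the sync can be investigated
    print("All deployments ready")
```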

 

 

Requirements & Skills

•      Bachelor’s degree in computer science, MCA or equivalent practical experience

•      4 to 6 years of hands-on experience as a DevOps Engineer 

 

•      Proven experience with cloud platforms such as AWS or Azure, including services like EC2, S3, RDS and Kubernetes Service (EKS).

•      Strong understanding of networking concepts, including VPNs, firewalls, and load balancers.

•      Proficiency in setting up and managing CI/CD pipelines using tools like Jenkins, Bitbucket Pipeline, or similar

•      Experience with configuration management tools such as Ansible, Chef, or Puppet.

•      Skilled in using IaC tools like Terraform,  AWS CloudFormation, or similar.

•      Strong knowledge of Docker and Kubernetes for container management and orchestration.

•      Expertise in using Git and managing repositories on platforms like GitHub, GitLab, or Bitbucket.

•      Ability to build and maintain automated scripts and tools to enhance DevOps processes.

•      Experience with monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to ensure system reliability and performance.

•      Experience with GitOps practices for managing Kubernetes deployments using Flux2 or similar.

•      Proficiency in scripting languages such as Python, Yaml, Bash, or PowerShell.

•      Strong analytical skills to perform root cause analysis and troubleshoot complex technical issues.

•      Excellent teamwork and communication skills to work effectively with cross-functional teams and stakeholders.

•      Ability to thrive in a fast-paced environment and adapt to changing priorities and technologies.

•      Eagerness to stay updated with the latest DevOps trends, tools, and best practices.

 

Nice to have:

•      AWS Certified DevOps Engineer

•      Azure DevOps Engineer Expert

•      Certified Kubernetes Administrator (CKA)

•      Understanding of security compliance standards (e.g., GDPR, HIPAA) and best practices in DevOps.

•      Experience with cost management and optimization strategies in cloud environments.

•      Knowledge of incident management and response tools and processes.

  


 

VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Chennai, Bengaluru (Bangalore), Hyderabad, Kolkata, Pune, Mumbai, Delhi, Noida, Kochi (Cochin), Bhubaneswar
8 - 12 yrs
₹8L - ₹26L / yr
Docker
Kubernetes
DevOps
CI/CD
Jenkins
+5 more

6+ years of experience with deployment and management of Kubernetes clusters in production environments as a DevOps engineer.

• Expertise in Kubernetes fundamentals like nodes, pods, services, deployments etc., and their interactions with the underlying infrastructure.

• Hands-on experience with containerization technologies such as Docker or RKT to package applications for use in a distributed system managed by Kubernetes.

• Knowledge of software development cycle including coding best practices such as CI/CD pipelines and version control systems for managing code changes within a team environment.

• Must have a deep understanding of the different aspects of cloud computing and the operational processes needed when setting up workloads on these platforms.

• Experience with Agile software development and knowledge of best practices for agile Scrum team.

• Proficient with GIT version control

• Experience working with Linux and cloud compute platforms.

• Excellent problem-solving skills and ability to troubleshoot complex issues in distributed systems.

• Excellent communication & interpersonal skills, effective problem-solving skills and logical thinking ability and strong commitment to professional and client service excellence.

Rigel Networks Pvt Ltd
Minakshi Soni
Posted by Minakshi Soni
Bengaluru (Bangalore), Pune, Mumbai, Chennai
8 - 12 yrs
₹8L - ₹10L / yr
Amazon Web Services (AWS)
Terraform
Amazon Redshift
Redshift
Snowflake
+16 more

Dear Candidate,


We are urgently Hiring AWS Cloud Engineer for Bangalore Location.

Position: AWS Cloud Engineer

Location: Bangalore

Experience: 8-11 yrs

Skills: Aws Cloud

Salary: Best in Industry (20-25% Hike on the current ctc)

Note:

Only immediate to 15-day joiners will be preferred.

Only candidates from Tier 1 companies will be shortlisted and selected.

Candidates with a notice period of more than 30 days will be rejected during screening.

Offer shoppers will be rejected.


Job description:

 

Description:

 

Title: AWS Cloud Engineer

Prefer BLR / HYD – else any location is fine

Work Mode: Hybrid – based on HR rule (currently 1 day per month)


Shift Timings 24 x 7 (Work in shifts on rotational basis)

Total Experience in Years- 8+ yrs, 5 yrs of relevant exp is required.

Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting



Experience and Skills Requirements:


Experience:

8 years of experience in a technical role working with AWS


Mandatory

Technical troubleshooting and problem solving

AWS management of large-scale IaaS and PaaS solutions

Cloud networking and security fundamentals

Experience using containerization in AWS

Working data warehouse knowledge; Redshift and Snowflake preferred

Working with IaC – Terraform and CloudFormation

Working understanding of scripting languages including Python and Shell

Collaboration and communication skills

Highly adaptable to changes in a technical environment

 

Optional

Experience using monitoring and observability toolsets including Splunk and Datadog

Experience using Github Actions

Experience using AWS RDS/SQL based solutions

Experience working with streaming technologies inc. Kafka, Apache Flink

Experience working with ETL environments

Experience working with the Confluent Cloud platform


Certifications:


Minimum

AWS Certified SysOps Administrator – Associate

AWS Certified DevOps Engineer - Professional



Preferred


AWS Certified Solutions Architect – Associate


Responsibilities:


Responsible for technical delivery of managed services across NTT Data customer account base. Working as part of a team providing a Shared Managed Service.


The following is a list of expected responsibilities:


To manage and support a customer’s AWS platform

To be technical hands on

Provide Incident and Problem management on the AWS IaaS and PaaS Platform

Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner

Actively monitor an AWS platform for technical issues

To be involved in the resolution of technical incidents tickets

Assist in the root cause analysis of incidents

Assist with improving efficiency and processes within the team

Examining traces and logs

Working with third party suppliers and AWS to jointly resolve incidents


Good to have:


Confluent Cloud

Snowflake




Best Regards,

Minakshi Soni

Executive - Talent Acquisition (L2)

Rigel Networks

Worldwide Locations: USA | HK | IN 

Wissen Technology

Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Pune, Mumbai, Bengaluru (Bangalore)
4 - 9 yrs
Best in industry
Rancher
Kubernetes
K8s
DevOps
Puppet

Job Summary:

We are seeking a skilled DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for managing, maintaining, and troubleshooting Rancher clusters, with a strong emphasis on Kubernetes operations. This role requires expertise in automation through shell scripting and proficiency in configuration management tools like Puppet and Ansible. Candidates should be highly self-motivated, capable of working on a rotating schedule, and committed to owning tasks through to delivery.

Key Responsibilities:

  • Set up, operate, and maintain Rancher and Kubernetes (K8s) clusters, including on bare-metal environments.
  • Perform upgrades and manage the lifecycle of Rancher clusters.
  • Troubleshoot and resolve Rancher cluster issues efficiently.
  • Write, maintain, and optimize shell scripts to automate Kubernetes-related tasks.
  • Work collaboratively with the team to implement best practices for system automation and orchestration.
  • Utilize configuration management tools like Puppet and Ansible (preferred but not mandatory).
  • Participate in a rotating schedule, with the ability to work until 1 AM as required.
  • Take ownership of tasks, ensuring timely delivery with high-quality standards.
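
Since the role emphasises scripted automation of Kubernetes tasks (see the responsibilities above), here is a small, hedged example of that style of tooling: a Python script that shells out to kubectl and flags pods with unusually high restart counts, a common first step when troubleshooting a Rancher-managed cluster. The restart threshold is illustrative, and kubectl is assumed to already be configured for the target cluster.

```python
import json
import subprocess

RESTART_THRESHOLD = 5  # illustrative threshold

def flag_restarting_pods() -> None:
    """List pods whose containers have restarted more than RESTART_THRESHOLD times."""
    raw = subprocess.check_output(
        ["kubectl", "get", "pods", "--all-namespaces", "-o", "json"]
    )
    for pod in json.loads(raw)["items"]:
        restarts = sum(
            cs.get("restartCount", 0)
            for cs in pod["status"].get("containerStatuses", [])
        )
        if restarts > RESTART_THRESHOLD:
            ns = pod["metadata"]["namespace"]
            name = pod["metadata"]["name"]
            print(f"{ns}/{name}: {restarts} restarts - investigate")

if __name__ == "__main__":
    flag_restarting_pods()
```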

Key Requirements:

  • Strong expertise in Rancher and Kubernetes operations and maintenance.
  • Experience in setting up and managing Kubernetes clusters on bare-metal systems is highly desirable.
  • Proficiency in shell scripting for task automation.
  • Familiarity with configuration management tools like Puppet and Ansible (good to have).
  • Strong troubleshooting skills for Kubernetes and Rancher environments.
  • Ability to work effectively in a rotating schedule and flexible hours.
  • Strong ownership mindset and accountability for deliverables.


TheCodeWork

Ashish Singh
Posted by Ashish Singh
Bengaluru (Bangalore)
5 - 6 yrs
₹10L - ₹12L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript
+10 more

Job Overview: We are seeking a skilled Full Stack Developer with 5+ years of experience to join our dynamic team. The ideal candidate will have expertise in building robust back-end systems with Python and Django, creating RESTful APIs using Django Rest Framework (DRF), and developing responsive, user-friendly front-end interfaces with React.js. Proficiency in HTML and CSS is a must, ensuring seamless design implementation and excellent user experiences.


Key Responsibilities:

Back-end Development:

● Design, develop, and maintain scalable web applications using Python and Django.

● Create and manage REST APIs using Django Rest Framework(DRF).

● Ensure application security, performance, and scalability.

● Optimize database queries and manage data models.
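
As a hedged illustration of the DRF responsibility above, the sketch below defines a model, serializer, and viewset for a hypothetical Article resource and wires it to a router. All names are invented, and the usual Django project scaffolding (settings, app registration, migrations) is assumed to exist.

```python
from django.db import models
from rest_framework import routers, serializers, viewsets

class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.BooleanField(default=False)

class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = ["id", "title", "body", "published"]

class ArticleViewSet(viewsets.ModelViewSet):
    """CRUD endpoints for articles; DRF generates list/retrieve/create/update/delete."""
    queryset = Article.objects.all()
    serializer_class = ArticleSerializer

# urls.py would include router.urls, e.g. path("api/", include(router.urls))
router = routers.DefaultRouter()
router.register(r"articles", ArticleViewSet)
```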


Front-end Development:

● Develop responsive user interfaces using React.js, HTML, and CSS.

● Collaborate with UX/UI designers to implement engaging user experiences.

● Integrate front-end with back-end services via REST APIs.

● Debug and resolve cross-browser and platform compatibility issues.


Collaboration and Deployment:

● Work closely with cross-functional teams, including designers, QA, and DevOps.

● Participate in code reviews to maintain high-quality code standards.

● Deploy, monitor, and maintain applications in a production environment.


Required Skills and Qualifications:

● Strong proficiency in Python and Django.

● Hands-on experience with Django Rest Framework (DRF) for API development.

● Solid understanding of React.js, including hooks and state management.

● Expertise in HTML, CSS, and modern front-end development practices

● Proficiency in integrating front-end and back-end systems.

● Strong knowledge of RESTful API design principles and best practices.

● Experience with version control systems like Git.

● Familiarity with deployment tools and cloud services (e.g., AWS, Docker) is a plus.

● Excellent problem-solving and debugging skills.


Preferred Qualifications:

● Knowledge of front-end libraries like Material UI or Tailwind CSS.

● Experience with unit testing frameworks (e.g., Pytest, Jest).

● Familiarity with Agile methodologies and tools like Jira.

● Exposure to CI/CD pipelines and DevOps practices.


About The Company: TheCodeWork (under Debsin Technologies Private Limited) specialises in assisting startups and businesses with implementing technology across their services. We cover everything from MVP development to custom web and app development, cloud migration, and microservices.

Our mission is to bridge the gap between your idea and product with our MVP solutions. We offer comprehensive services, including an MVP program and full product development, dedicated to addressing the challenges faced by early-stage startups and entrepreneurs. Proudly, we are ISO 9001:2015 certified for our Quality Control System.


Client based at Bangalore location.

Agency job
Bengaluru (Bangalore)
8 - 11 yrs
₹20L - ₹40L / yr
Machine Learning (ML)
Cloud Computing
Fullstack Developer
Kubernetes
Python
+14 more

 

 

Job Title: Solution Architect

Work Location: Tokyo

Experience: 7-10 years

Number of Positions: 3

Job Description:

We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.

Responsibilities:

  • Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
  • Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
  • Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
  • Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
  • Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
  • Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
  • Contribute to the development of technical documentation and roadmaps.
  • Stay up-to-date with emerging technologies and propose enhancements to the solution design process.

Key Skills & Requirements:

  • Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
  • Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
  • Solid experience with Kubernetes for container orchestration and deployment.
  • Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
  • Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
  • Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications; a minimal classification sketch follows this list.
  • Strong experience in designing scalable architectures and applications from the ground up.
  • Experience with DevOps and automation tools for CI/CD pipelines.
  • Excellent problem-solving skills and ability to work in a fast-paced environment.
  • Strong communication skills and ability to collaborate with cross-functional teams.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
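
As a hedged illustration of the "implemented algorithms" requirement above, a classification model might be exercised end to end as in the sketch below (synthetic data and scikit-learn are assumptions; the posting itself lists TensorFlow/PyTorch as example frameworks):

```python
# Train and evaluate a simple classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(f"hold-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```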

Preferred Skills:

  • Experience with microservices architecture and containerization.
  • Knowledge of distributed systems and high-performance computing.
  • Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
  • Familiarity with Agile methodologies and Scrum.
  • Knowledge of the Japanese language is an additional advantage for the candidate, but not mandatory.

 

Data Caliper
Sweety Silvester
Posted by Sweety Silvester
Remote, Chennai, Coimbatore, Pondicherry, Bengaluru (Bangalore)
3 - 10 yrs
₹5L - ₹14L / yr
Vue.js
AngularJS (1.x)
Angular (2+)
React.js
JavaScript
+13 more

We are currently seeking skilled and motivated Senior Java Developers to join our dynamic and innovative development team. As a Senior Java Developer, you will be responsible for designing, developing, and maintaining high-performance, scalable Java applications.

 


Join DataCaliper and step into the vanguard of technological advancement, where your proficiency will shape the landscape of data management and drive businesses toward unparalleled success.


Please find our job description below; if interested, apply or reply with your profile so we can connect and discuss.


Company: Data caliper

URL: https://datacaliper.com/

Work location: Coimbatore

Experience: 3+ years

Joining time: Immediate – 4 weeks


Required skills:

- Good experience in Java/J2EE programming frameworks like Spring (Spring MVC, Spring Security, Spring JPA, Spring Boot, Spring Batch, Spring AOP).

- Deep knowledge of developing enterprise web applications using Java Spring.

- Good experience in REST web services.

- Understanding of DevOps processes like CI/CD.

- Exposure to Maven, Jenkins, Git, data formats JSON/XML, Quartz, Log4j, Logback.

- Good experience in database technologies / SQL / PL/SQL, or any database experience.

- The candidate should have excellent communication skills, with an ability to interact with non-technical stakeholders as well.


Thank you

Rtbrick
Deepa Patkar
Posted by Deepa Patkar
Bengaluru (Bangalore)
2 - 6 yrs
₹10L - ₹25L / yr
CI/CD
DevOps
Ansible
Python
Shell Scripting
+1 more


We are looking for multiple hands-on software engineers to handle CI/CD build and packaging engineering to facilitate RtBrick Full Stack (RBFS) software packages for deployment on various hardware platforms. You will be part of a high-performance team responsible for platform and infrastructure.


Requirements

1. About 2-6 years of industry experience in Linux administration with an emphasis on automation

2. Experience with CI/CD tooling frameworks and cloud deployments

3. Experience with software development tools like Git, GitLab, Jenkins, CMake, GNU build tools & Ansible

4. Proficient in Python and shell scripting; experience with Go is a strong plus

5. Experience with Linux APT package management, web servers, optionally Open Network Linux (ONL), and infrastructure such as boot, PXE, IPMI, APC

6. Experience with Open Network Linux (ONL) is highly desirable; SONiC build experience will be a plus

 

Responsibilities

CI/CD - Packaging

Knowledge of compilation, packaging and repository usage in various flavors of Linux.

Expertise in Linux system administration and internals is essential, along with the ability to build custom images for container and virtual-machine environments, modify bootloaders, reduce image size, and optimize containers for low power consumption.
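
For illustration, the packaging step described here often comes down to driving standard Debian tooling from a script; a minimal Python sketch (the staged directory layout and package name are assumptions) could look like:

```python
# Build a .deb from a staged directory tree; assumes dpkg-deb is installed and
# pkg_dir contains a valid DEBIAN/control file.
import subprocess
from pathlib import Path

def build_deb(pkg_dir: str) -> Path:
    subprocess.run(["dpkg-deb", "--build", pkg_dir], check=True)
    artifact = Path(f"{pkg_dir}.deb")
    if not artifact.exists():
        raise FileNotFoundError(artifact)
    return artifact

if __name__ == "__main__":
    print(build_deb("build/rbfs-tools_1.0.0_amd64"))  # hypothetical staged tree
```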


Linux Administration

Install and configure Linux systems, including back-end databases and scripts; perform system maintenance by reviewing error logs; create system backups; and build Linux modules and packages for software deployment. Build packages in Open Network Linux and SONiC distributions in the near future.

Hyderabad, Bengaluru (Bangalore)
10 - 15 yrs
₹25L - ₹30L / yr
OpenShift
Kubernetes
DevOps
Red Hat Linux

Red Hat OpenShift (L2/L3 Expertise)

1. Set up the OpenShift Ingress Controller (and deploy multiple Ingresses)

2. Set up the OpenShift image registry

3. Very good knowledge of the OpenShift Management Console to help application teams manage their pods and troubleshoot issues

4. Expertise in deploying artifacts to the OpenShift cluster and configuring customized scaling capabilities

5. Knowledge of pod logging in the OpenShift cluster for troubleshooting

 

 

2. Architect:

- Suggestions on architecture setup

- Validate architecture and let us know pros and cons and feasibility.

- Managing of Multi Location Sharded Architecture

- Multi Region Sharding setup

 

3. Application DBA:

- Validate and help with Sharding decisions at collection level

- Providing deep analysis on performance by looking at execution plans

- Index Suggestions

- Archival Suggestions and Options

               

4. Collaboration

Ability to plan and delegate work by providing specific instructions.

 

 


 

Scoutflo
Mumbai, Bengaluru (Bangalore)
2 - 6 yrs
₹8L - ₹15L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
MongoDB
+7 more

Scoutflo is a platform that automates complex infrastructure requirements for Kubernetes Infrastructure.


Job Description:


  1. In-depth knowledge of full-stack development principles and best practices.
  2. Expertise in building web applications with strong proficiency in languages like Node.js, React, and Go.
  3. Experience developing and consuming RESTful & gRPC API protocols.
  4. Familiarity with CI/CD workflows and DevOps processes.
  5. Solid understanding of cloud platforms and container orchestration technologies.
  6. Experience with Kubernetes pipelines and workflows using tools like Argo CD.
  7. Experience with designing and building user-friendly interfaces.
  8. Excellent understanding of distributed systems, databases, and APIs.
  9. A passion for writing clean, maintainable, and well-documented code.
  10. Strong problem-solving skills and the ability to work independently as well as collaboratively.
  11. Excellent communication and interpersonal skills.
  12. Experience with building self-serve platforms or user onboarding experiences.
  13. Familiarity with Infrastructure as Code (IaC) tools like Terraform.
  14. A strong understanding of security best practices for Kubernetes deployments.
  15. Grasp of setting up network architecture for distributed systems.

Must have:

1) Experience with managing Infrastructure on AWS/GCP or Azure

2) Managed Infrastructure on Kubernetes

Wissen Technology

at Wissen Technology

4 recruiters
Sukanya Mohan
Posted by Sukanya Mohan
Bengaluru (Bangalore)
4 - 7 yrs
Best in industry
Java
Spring
DevOps

Job Title: DevOps + Java Engineer

Location: Bangalore

Mode of work- Hybrid (3 days work from office)

 

Job Summary: We are looking for a skilled Java+ DevOps Engineer to help enhance and maintain our infrastructure and applications. The ideal candidate will have a strong background in Java development combined with expertise in DevOps practices, ensuring seamless integration and deployment of software solutions. You will collaborate with cross-functional teams to design, develop, and deploy robust and scalable solutions.

 

Key Responsibilities:

  • Develop and maintain Java-based applications and microservices.
  • Implement CI/CD pipelines to automate the deployment process.
  • Design and deploy monitoring, logging, and alerting systems.
  • Manage cloud infrastructure using tools such as AWS, Azure, or GCP.
  • Ensure security best practices are followed throughout all stages of development and deployment.
  • Troubleshoot and resolve issues in development, test, and production environments.
  • Collaborate with software engineers, QA analysts, and product teams to deliver high-quality solutions.
  • Stay current with industry trends and best practices in Java development and DevOps.



Required Skills and Experience:

  • Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent work experience).
  • Proficient in Java programming language and frameworks (Spring, Hibernate, etc.).
  • Strong understanding of DevOps principles and experience with DevOps tools (e.g., Jenkins, Git, Docker, Kubernetes).
  • Knowledge of containerization and orchestration technologies (Docker, Kubernetes).
  • Familiarity with monitoring and logging tools (ELK stack, Prometheus, Grafana).
  • Solid understanding of CI/CD pipelines and automated testing frameworks.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
Molecular Connections

at Molecular Connections

4 recruiters
Molecular Connections
Posted by Molecular Connections
Remote, Bengaluru (Bangalore), Mumbai
5 - 10 yrs
₹7L - ₹15L / yr
Go Programming (Golang)
Ruby on Rails (ROR)
Ruby
Python
Java
+8 more

Responsibilities:

- Design, develop, and implement robust and efficient backend services using microservices architecture principles.

-  Write clean, maintainable, and well-documented code using C# and the .NET framework.

-  Develop and implement data access layers using Entity Framework.

-  Utilize Azure DevOps for version control, continuous integration, and continuous delivery (CI/CD) pipelines.

-  Design and manage databases on Azure SQL.

-  Perform code reviews and participate in pair programming to ensure code quality.

-  Troubleshoot and debug complex backend issues.

-  Optimize backend performance and scalability to ensure a smooth user experience.

-  Stay up-to-date with the latest advancements in backend technologies and cloud platforms.

-  Collaborate effectively with frontend developers, product managers, and other stakeholders.

-  Clearly communicate technical concepts to both technical and non-technical audiences.

Qualifications:

-  Strong understanding of microservices architecture principles and best practices.

-  In-depth knowledge of C# programming language and the .NET framework (ASP.NET MVC/Core, Web API).

-  Experience working with Entity Framework for data access.

-  Proficiency with Azure DevOps for CI/CD pipelines and version control (Git).

-  Experience with Azure SQL for database design and management.

-  Experience with unit testing and integration testing methodologies.

-  Excellent problem-solving and analytical skills.

-   Ability to work independently and as part of a team.

-   Strong written and verbal communication skills.

-   A passion for building high-quality, scalable, and secure software applications.

CodeCraft Technologies Private Limited
Priyanka Praveen
Posted by Priyanka Praveen
Bengaluru (Bangalore), Mangalore
7 - 12 yrs
Best in industry
CI/CD
GitHub
DevOps

Position: SRE/ DevOps

Experience: 6-10 Years

Location: Bengaluru/Mangalore

 

CodeCraft Technologies is a multi-award-winning creative engineering company offering design and technology solutions on mobile, web and cloud platforms.

 

We are seeking a highly skilled and motivated Site Reliability Engineer (SRE) to join our dynamic team. As an SRE, you will play a crucial role in ensuring the reliability, availability, and performance of our systems and applications. You will work closely with the development team to build and maintain scalable infrastructure, implement best practices in CI/CD, and contribute to the overall stability of our technology stack.

 

 

Roles and Responsibilities:

·       CI/CD and DevOps:

o  Implement and maintain robust Continuous Integration/Continuous Deployment (CI/CD) pipelines to ensure efficient and reliable software delivery.

o  Collaborate with development teams to integrate DevOps principles into the software development lifecycle.

o  Experience with pipelines such as GitHub Actions, GitLab, Azure DevOps, and CircleCI is a plus.

·       Test Automation:

o  Develop and maintain automated testing frameworks to validate system functionality, performance, and reliability.

o  Collaborate with QA teams to enhance test coverage and improve overall testing efficiency.

·       Logging/Monitoring:

o  Design, implement, and manage logging and monitoring solutions to proactively identify and address potential issues.

o  Respond to incidents and alerts to ensure system uptime and performance.

·       Infrastructure as Code (IaC):

o  Utilize Terraform (or other tools) to define and manage infrastructure as code, ensuring scalability, security, and consistency across environments.

·       Elastic Stack:

o  Implement and manage Elastic Stack (ELK) for log and data analysis to gain insights into system performance and troubleshoot issues effectively.

·       Cloud Platforms:

o  Work with cloud platforms such as AWS, GCP, and Azure to deploy and manage scalable and resilient infrastructure.

o  Optimize cloud resources for cost efficiency and performance.

·       Vulnerability Management:

o  Conduct regular vulnerability assessments and implement measures to address and remediate identified vulnerabilities.

o  Collaborate with security teams to ensure a robust security posture.

·       Security Assessment:

o  Perform security assessments and audits to identify and address potential security risks.

o  Implement security best practices and stay current with industry trends and emerging threats.

o  Experience with tools such as GCP Security Command Center, and AWS Security Hub is a plus.

·       Third-Party Hardware Providers:

o  Collaborate with third-party hardware providers to integrate and support hardware components within the infrastructure.


Desired Profile:

·       The candidate should be willing to work in the EST time zone, i.e. from 6 PM to 2 AM.

·       Excellent communication and interpersonal skills

·       Bachelor’s Degree

·       Certifications related to this field shall be an added advantage.


Sizzle

at Sizzle

1 recruiter
Vijay Koduri
Posted by Vijay Koduri
Bengaluru (Bangalore)
3 - 6 yrs
₹6L - ₹14L / yr
DevOps
Ansible
GitLab
CI/CD
GitOps
+4 more

You will be responsible for:

  • Managing all DevOps and infrastructure for Sizzle
  • We have both cloud and on-premise servers
  • Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
  • Optimize the pipeline to ensure ultra fast processing
  • Work closely with management team on infrastructure upgrades


You should have the following qualities:

  • 3+ years of experience in DevOps, and CI/CD
  • Deep experience in: GitLab, GitOps, Ansible, Docker, Grafana, Prometheus
  • Strong background in Linux system administration
  • Deep expertise with AI/ML pipeline processing, especially with GPU processing. This doesn't need to include model training, data gathering, etc.; we're looking more for experience with model deployment and inference tasks at scale
  • Deep expertise in Python, including multiprocessing / multithreaded applications (see the sketch after this list)
  • Performance profiling including memory, CPU, GPU profiling
  • Error handling and building robust scripts that will be expected to run for weeks to months at a time
  • Deploying to production servers and monitoring and maintaining the scripts
  • DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
  • Expertise in Docker-based virtualization including - creating & maintaining custom Docker images, deployment of Docker images on cloud and on-premise services, monitoring of production Docker images with robust error handling
  • Expertise in AWS infrastructure, networking, availability
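
As a toy sketch of the long-running, multi-process style of script referred to above (process_clip and the clip IDs are hypothetical stand-ins for real work):

```python
# Long-running worker loop: a process pool with defensive per-item error handling.
import logging
import time
from multiprocessing import Pool

def process_clip(clip_id: int) -> tuple[int, str]:
    try:
        # placeholder for CPU/GPU-bound processing of one clip
        return clip_id, "ok"
    except Exception as exc:  # never let one bad clip kill the worker
        return clip_id, f"failed: {exc!r}"

def run_forever(batch_size: int = 32) -> None:
    logging.basicConfig(level=logging.INFO)
    next_id = 0
    with Pool(processes=4) as pool:
        while True:
            batch = range(next_id, next_id + batch_size)
            for clip_id, status in pool.imap_unordered(process_clip, batch):
                logging.info("clip %s -> %s", clip_id, status)
            next_id += batch_size
            time.sleep(1)  # back off between batches

if __name__ == "__main__":
    run_forever()
```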


Optional but beneficial to have:

  • Experience with running Nvidia GPU / CUDA-based tasks
  • Experience with image processing in python (e.g. openCV, Pillow, etc)
  • Experience with PostgreSQL and MongoDB (Or SQL familiarity)
  • Excited about working in a fast-changing startup environment
  • Willingness to learn rapidly on the job, try different things, and deliver results
  • Bachelors or Masters degree in computer science or related field
  • Ideally a gamer or someone interested in watching gaming content online


Skills:

DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux / Ubuntu system administration


Seniority: We are looking for a mid to senior level engineer


Salary: Will be commensurate with experience. 


Who Should Apply:

If you have the right experience, regardless of your seniority, please apply.

Work Experience: 3 years to 6 years


Opportunity to work on Product Development

Agency job
Bengaluru (Bangalore)
6 - 12 yrs
₹2L - ₹15L / yr
Agile/Scrum
Systems Development Life Cycle (SDLC)
JIRA Agile
Project Management
PMP
+12 more

The Technical Project Manager is responsible for managing projects to make sure the proposed plan adheres to the timeline, budget, and scope. Their duties include planning projects in detail, setting schedules for all stakeholders, and executing each step of the project for our proprietary product, with some of the World’s biggest brands across the BFSI domain. The role is cross-functional and requires the individual to own and push through projects that touch upon business, operations, technology, marketing, and client experience. 


• 5-7 years of experience in technical project management.

• Professional Project Management Certification from an accredited institution is mandatory.

• Proven experience overseeing all elements of the project/product lifecycle.

• Working knowledge of Agile and Waterfall methodologies.

• Prior experience in Fintech, Blockchain, and/or BFSI domain will be an added advantage.

• Demonstrated understanding of Project Management processes, strategies, and methods.

• Strong sense of personal accountability regarding decision-making and supervising department team.

• Collaborate with cross-functional teams and stakeholders to define project requirements and scope.

Recro

at Recro

1 video
32 recruiters
Amrita Singh
Posted by Amrita Singh
Bengaluru (Bangalore)
2.5 - 6 yrs
₹5L - ₹20L / yr
NodeJS (Node.js)
MongoDB
Mongoose
Express
MySQL
+5 more

Key Responsibilities: 

  • Rewrite existing APIs in NodeJS. 
  • Remodel the APIs into Micro services-based architecture. 
  • Implement a caching layer wherever possible. 
  • Optimize the API for high performance and scalability. 
  • Write unit tests for API Testing.
  • Automate the code testing and deployment process.


Skills Required: 

  • At least 2 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds. 
  • Excellent hands-on experience using MySQL or any other SQL Database. 
  • Good knowledge of MongoDB or any other NoSQL Database. 
  • Good knowledge of Redis, its data types, and their use cases. 
  • Experience with GraphQL and graph databases like Neo4j. 
  • Experience developing and deploying REST APIs. 
  • Good knowledge of Unit Testing and available Test Frameworks. 
  • Good understanding of advanced JS libraries and frameworks. 
  • Experience with Web sockets, Service Workers, and Web Push Notifications. 
  • Familiar with NodeJS profiling tools. 
  • Proficient understanding of code versioning tools such as Git. 
  • Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms. 
  • Should be a fast learner and a go-getter, without any fear of trying out new things.

Preferences:

  • Experience building a large-scale social or location-based app.
Recro

at Recro

1 video
32 recruiters
Mohit Arora
Posted by Mohit Arora
Bengaluru (Bangalore), Delhi, Gurugram, Noida
2.5 - 7 yrs
Best in industry
NodeJS (Node.js)
MongoDB
Mongoose
Express
GraphQL
+4 more

Key Responsibilities: 

  • Rewrite existing APIs in NodeJS. 
  • Remodel the APIs into Micro services-based architecture. 
  • Implement a caching layer wherever possible. 
  • Optimize the API for high performance and scalability. 
  • Write unit tests for API Testing.
  • Automate the code testing and deployment process.


Skills Required: 

  • At least 2 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds. 
  • Excellent hands-on experience using MySQL or any other SQL Database. 
  • Good knowledge of MongoDB or any other NoSQL Database. 
  • Good knowledge of Redis, its data types, and their use cases. 
  • Experience with GraphQL and graph databases like Neo4j. 
  • Experience developing and deploying REST APIs. 
  • Good knowledge of Unit Testing and available Test Frameworks. 
  • Good understanding of advanced JS libraries and frameworks. 
  • Experience with Web sockets, Service Workers, and Web Push Notifications. 
  • Familiar with NodeJS profiling tools. 
  • Proficient understanding of code versioning tools such as Git. 
  • Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms. 
  • Should be a fast learner and a go-getter, without any fear of trying out new things.

Preferences:

  • Experience building a large-scale social or location-based app.


Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
Data engineering
Nifi
DevOps
ETL

Job description

Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore

Please note: This position is focused on development rather than migration. Experience in NiFi or Tibco is mandatory.
Mandatory Skills: ETL, DevOps platform, NiFi or Tibco

We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.

 

Responsibilities:

  • Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques (a minimal Python sketch follows this list).
  • Develop and maintain data-oriented scripting using languages such as Python.
  • Create and manage data structures to ensure efficient and accurate data storage and retrieval.
  • Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
  • Utilize ETL tools such as NiFi and Tibco to extract, transform, and load data into various systems.
  • Have hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
  • Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
  • Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
  • Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.
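
As a purely illustrative sketch of the ETL pattern referenced above (connection strings, tables, and columns are invented; real pipelines here would typically run through NiFi or Tibco):

```python
# Minimal extract-transform-load example using pandas and SQLAlchemy.
import pandas as pd
from sqlalchemy import create_engine, text

source = create_engine("postgresql://user:pass@source-host/sales")  # placeholder DSN
warehouse = create_engine(
    "mssql+pyodbc://user:pass@dw-host/warehouse?driver=ODBC+Driver+17+for+SQL+Server"
)

def run_daily_load(run_date: str) -> int:
    # Extract: one day of raw orders from the source system.
    df = pd.read_sql(
        text("SELECT order_id, customer_id, amount, created_at "
             "FROM orders WHERE created_at::date = :d"),
        source,
        params={"d": run_date},
    )
    # Transform: basic cleansing and rounding.
    df = df.dropna(subset=["customer_id"])
    df["amount"] = df["amount"].round(2)
    # Load: append into a staging table in the warehouse.
    df.to_sql("stg_orders", warehouse, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    print(run_daily_load("2024-01-31"), "rows loaded")
```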

 

Requirements:

  • A minimum of 6 years of relevant experience as a Data Engineer.
  • Proficiency in ETL, SQL, and other advanced data engineering techniques.
  • Strong programming skills in scripting languages such as Python.
  • Experience in creating and maintaining data structures for efficient data storage and retrieval.
  • Familiarity with cloud and big data technologies, specifically the AWS and Azure stack.
  • Hands-on experience with ETL tools, particularly NiFi and Tibco.
  • In-depth knowledge of database structures, including MSSQL and Vertica.
  • Proven experience in managing and operating data platforms.
  • Strong problem-solving and analytical skills with the ability to handle complex data challenges.
  • Excellent communication and collaboration skills to work effectively in a team environment.
  • Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.

We are a 17-year-old Multinational Company headquartered in Bangalore

Agency job
via Merito by Jinita Sumaria
Bengaluru (Bangalore)
5 - 10 yrs
Best in industry
.NET
ASP.NET
AngularJS (1.x)
Angular (2+)
JavaScript
+9 more

About The Company


The client is a 17-year-old multinational company headquartered in Bangalore (Whitefield), with another delivery center in Pune (Hinjewadi). It also has offices in the US and Germany, works with several OEMs and product companies in about 12 countries, and is a 200+ strong team worldwide. 



Join us as a Senior Software Engineer within our Web Application Development team, based out of Pune to deliver end-to-end customized application development. 

We expect you to participate in and contribute to every stage of a project, right from interacting with internal customers/stakeholders, understanding their requirements, and proposing solutions that best fit their expectations. As part of the local team you will have the chance to contribute to global project delivery, with the possibility of working on-site (Belgium) if required. You will be a key member of a highly motivated application development team working on the Microsoft technology stack, enabling the team to deliver "first time right" applications.


Principal Duties and Responsibilities


• You will be responsible for the technical analysis of requirements and lead the project from Technical perspective

• You should be a problem solver and provide scalable and efficient technical solutions

• You guarantee an excellent and scalable application development in an estimated timeline

• You will interact with the customers/stakeholders and understand their requirements and propose the solutions

• You will work closely with the ‘Application Owner’ and carry the entire responsibility of end-to-end processes/development

• You will produce technical & functional application documentation and release notes that facilitate the aftercare of the application

Knowledge, Skills and Qualifications

• Education: Master’s degree in computer science or equivalent

• Experience: Minimum 5- 10 years


Required Skills


• Strong working knowledge of C#, Angular 2+, SQL Server, ASP.Net Web API

• Good understanding on OOPS, SOLID principals, Development practices

• Good understanding of DevOps, Git, CI/CD

• Experience with development of client and server-side applications

• Excellent English communication skills (written, oral), with good listening capabilities

• Excellent technical, analytical, debugging, and problem-solving skills

• Has a reasonable balance between getting the job done vs technical debt

• Enjoys producing top quality code in a fast-moving environment

• Effective team player working in a team; willingness to put the needs of the team over their own


Preferred Skills  

   

• Experience with product development for the Microsoft Azure platform

• Experience with product development life cycle would be a plus

• Experience with agile development methodology (Scrum)

• Functional analysis skills and experience (Use cases, UML) is an asset

Apexon

at Apexon

3 recruiters
Siva Kumar
Posted by Siva Kumar
Bengaluru (Bangalore), Chennai, Pune, Hyderabad, Mumbai, Ahmedabad
4 - 6 yrs
Best in industry
C#
Test Automation (QA)
Automation
MS SharePoint
DevOps
+5 more

About Apexon:

Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. For over 17 years, Apexon has been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving our clients’ toughest technology problems, and a commitment to continuous improvement. We focus on three broad areas of digital services: User Experience (UI/UX, Commerce); Engineering (QE/Automation, Cloud, Product/Platform); and Data (Foundation, Analytics, and AI/ML), and have deep expertise in BFSI, healthcare, and life sciences.

Apexon is backed by Goldman Sachs Asset Management and Everstone Capital.

 

To know more about us please visit: https://www.apexon.com/

 

 

Responsibilities:

  • C# Automation engineer with 4-6 years of experience to join our engineering team and help us develop and maintain various software/utilities products. 
  • Good object-oriented programming concepts and practical knowledge. 
  • Strong programming skills in C# are required. 
  • Good knowledge of C# Automation is preferred. 
  • Good to have experience with the Robot framework.
  • Must have knowledge of API (REST APIs), and database (SQL) with the ability to write efficient queries.
  • Good to have knowledge of Azure cloud. 
  • Take end-to-end ownership of test automation development, execution and delivery. 

Good to have: 

  • Experience in tools like SharePoint, Azure DevOps


Other skills:    

  • Strong analytical & logical thinking skills. Ability to think and act rationally when faced with challenges. 

 

Tech Data
Loyson Masacrenhas
Posted by Loyson Masacrenhas
Bengaluru (Bangalore), Mumbai, Pune, Chennai
7 - 12 yrs
Best in industry
Presales
Solution architecture
DevOps
Microsoft Windows Azure

Job Purpose :


Working with the Tech Data Sales Team, the Presales Consultant is responsible for providing presales technical support to the Sales team and presenting tailored demonstrations or qualification discussions to customers and/or prospects. The Presales Consultant also assists the Sales Team with qualifying opportunities in or out, and with expanding existing opportunities through solid questioning. The Presales Consultant will be responsible for conducting technical proofs of concept, demonstrations & presentations of the supported products & solutions.


Responsibilities :

  • Subject Matter Expert (SME) in the development of Microsoft Cloud Solutions (Compute, Storage, Containers, Automation, DevOps, Web applications, Power Apps etc.)
  • Collaborate and align with business leads to understand their business requirement and growth initiatives to propose the required solutions for Cloud and Hybrid Cloud
  • Work with other technology vendors, ISVs to build solutions use cases in the Center of Excellence based on sales demand (opportunities, emerging trends)
  • Manage the APJ COE environment and Click-to-Run Solutions
  • Provide solution proposal and pre-sales technical support for sales opportunities by identifying the requirements and design Hybrid Cloud solutions
  • Create Solutions Play and blueprint to effectively explain and articulate solution use cases to internal TD Sales, Pre-sales and partners community
  • Support in-country (APJ countries) Presales Team for any technical related enquiries
  • Support Country's Product / Channel Sales Team in prospecting new opportunities in Cloud & Hybrid Cloud
  • Provide technical and sales trainings to TD sales, pre-sales and partners.
  • Lead & Conduct solution presentations and demonstrations
  • Deliver presentations at Tech Data, Partner or Vendor led solutions events.
  • Achieve relevant product certifications
  • Conduct customer workshops that help accelerate sales opportunities


Knowledge, Skills and Experience :

  • Bachelor's degree in Information Technology/Computer Science or equivalent experience; certifications preferred
  • Minimum of 7 years relevant working experience, ideally in IT multinational environment
  • Track record on the assigned line cards experience is an added advantage
  • IT Distributor and/or SI experience would also be an added advantage
  • Good communication and problem-solving skills
  • Proven ability to work independently, effectively in an off-site environment and under high pressure


What's In It For You?

  • Elective Benefits: Our programs are tailored to your country to best accommodate your lifestyle.
  • Grow Your Career: Accelerate your path to success (and keep up with the future) with formal programs on leadership and professional development, and many more on-demand courses.
  • Elevate Your Personal Well-Being: Boost your financial, physical, and mental well-being through seminars, events, and our global Life Empowerment Assistance Program.
  • Diversity, Equity & Inclusion: It's not just a phrase to us; valuing every voice is how we succeed. Join us in celebrating our global diversity through inclusive education, meaningful peer-to-peer conversations, and equitable growth and development opportunities.
  • Make the Most of our Global Organization: Network with other new co-workers within your first 30 days through our onboarding program.
  • Connect with Your Community: Participate in internal, peer-led inclusive communities and activities, including business resource groups, local volunteering events, and more environmental and social initiatives.


Don't meet every single requirement? Apply anyway.


At Tech Data, a TD SYNNEX Company, we're proud to be recognized as a great place to work and a leader in the promotion and practice of diversity, equity and inclusion. If you're excited about working for our company and believe you're a good fit for this role, we encourage you to apply. You may be exactly the person we're looking for!

Dori AI

at Dori AI

5 recruiters
Nitin Gupta
Posted by Nitin Gupta
Bengaluru (Bangalore)
3 - 8 yrs
₹3L - ₹13L / yr
DevOps
Docker
PyTorch
Bash
Perl
+1 more
As a DevOps Engineer and Architect in Dori AI, you will be responsible for streamlining and executing Site Reliability Engineering and DevOps activities with a charter to reduce cost while improving observability, scalability, and reliability.  In this role, you will also work closely with the Service Development team and contribute to the Service Platform design.

The key responsibilities include, but are not limited to:

  • Help identify and drive speed, performance, scalability, and reliability related optimizations based on experience and learnings from production incidents.
  • Work in an agile DevSecOps environment on creating, maintaining, monitoring, and automating the overall solution deployment (a minimal monitoring sketch follows this list).
  • Understand and explain the effect of product architecture decisions on systems.
  • Identify issues and/or opportunities for improvements that are common across multiple services/teams.
  • This role will require weekend deployments.
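
Purely as an illustration of the monitoring and automation work referred to above, a minimal uptime probe might look like the following sketch (the service endpoints and the use of the requests library are assumptions):

```python
# Poll a set of health endpoints on a fixed interval and log the results.
import logging
import time
import requests

SERVICES = {"api": "https://api.example.com/healthz"}  # hypothetical endpoints

def check_once() -> dict:
    status = {}
    for name, url in SERVICES.items():
        try:
            resp = requests.get(url, timeout=5)
            status[name] = "up" if resp.ok else f"degraded ({resp.status_code})"
        except requests.RequestException as exc:
            status[name] = f"down ({exc.__class__.__name__})"
    return status

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    while True:
        logging.info("health: %s", check_once())
        time.sleep(60)
```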

Skills and Qualifications:
1. 3+ years of experience in a DevOps end-to-end development process with heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, PERL, Python, Node.js).
3. Experience in Agile/SCRUM enterprise-scale software development including working with GiT, JIRA, Confluence, etc.
4. Advance experience with core microservice technology (RESTFul development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor's or Master's degree in Computer Science, or equivalent related field experience.

Key Behaviours / Attitudes:

Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building & running systems to drive high availability, performance and operational improvements.
Excellent written & oral communication skills; the ability to ask pertinent questions and to assess/aggregate/report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.

Soft skills:
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to balance multiple tasks and priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.

Work with Dori AI's world-class technology to develop, implement, and support Dori's global infrastructure.

As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
Leading contributor individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS or equivalent experience in programming on enterprise or department servers or systems.
CodeCraft Technologies Private Limited
Agency job
via Bullhorn Consultants by Sai Kiran R
Bengaluru (Bangalore)
7 - 12 yrs
₹1L - ₹15L / yr
Shell Scripting
Python
Ansible
Terraform
DevOps

Roles and Responsibilities:

• Gather and analyse cloud infrastructure requirements

• Automate system tasks and infrastructure using a scripting language (Shell/Python/Ruby preferred), with configuration management tools (Ansible/Puppet/Chef), service registry and discovery tools (Consul and Vault, etc.), infrastructure orchestration tools (Terraform, CloudFormation), and automated imaging tools (Packer)

• Support existing infrastructure, analyse problem areas and come up with solutions

• An eye for monitoring – the candidate should be able to look at complex infrastructure and be able to figure out what to monitor and how

• Work along with the Engineering team to help out with infrastructure / network automation needs

• Deploy infrastructure as code and automate as much as possible (a minimal sketch follows this list)

• Manage a team of DevOps
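
Illustrative only: a thin Python wrapper around the Terraform CLI for a non-interactive plan/apply run, assuming terraform is installed and the working directory holds valid configuration (the directory path is an assumption):

```python
# Run terraform init/plan/apply non-interactively, failing fast on any non-zero exit.
import subprocess

def terraform_deploy(workdir: str) -> None:
    for args in (
        ["terraform", "init", "-input=false"],
        ["terraform", "plan", "-input=false", "-out=tfplan"],
        ["terraform", "apply", "-input=false", "tfplan"],
    ):
        subprocess.run(args, cwd=workdir, check=True)

if __name__ == "__main__":
    terraform_deploy("environments/dev")
```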


Desired Profile:

• Understanding of provisioning of Bare Metal and Virtual Machines

• Working knowledge of Configuration management tools like Ansible/ Chef/ Puppet, Redfish.

• Experience in scripting languages like Ruby/ Python/ Shell Scripting

• Working knowledge of IP networking, VPN's, DNS, load balancing, firewalling & IPS concepts

• Strong Linux/Unix administration skills.

• Self-starter who can implement with minimal guidance

• Hands-on experience setting up CI/CD from scratch in Jenkins

• Experience with Managing K8s infrastructure

codersbrain

at codersbrain

1 recruiter
Aishwarya Hire
Posted by Aishwarya Hire
Pune, Bengaluru (Bangalore), Gurugram
4 - 6 yrs
₹6L - ₹10L / yr
DevOps
Kubernetes
Docker
Cloud Computing
  • Public clouds, such as AWS, Azure, or Google Cloud Platform
  • Automation technologies, such as Kubernetes or Jenkins
  • Configuration management tools, such as Puppet or Chef
  • Scripting languages, such as Python or Ruby


codersbrain

at codersbrain

1 recruiter
Aishwarya Hire
Posted by Aishwarya Hire
Pune, Bengaluru (Bangalore), Gurugram
5 - 7 yrs
₹6L - ₹10L / yr
DevOps
Kubernetes
Docker
Windows Azure
  • Recommend a migration and consolidation strategy for DevOps tools
  • Design and implement an Agile work management approach
  • Make a quality strategy
  • Design a secure development process
  • Create a tool integration strategy
xoxoday

at xoxoday

2 recruiters
Agency job
via Jobdost by Sathish Kumar
Bengaluru (Bangalore)
8 - 12 yrs
₹45L - ₹65L / yr
JavaScript
SQL
NoSQL Databases
NodeJS (Node.js)
React Native
+8 more

What is the role?

Expected to manage the product plan, engineering, and delivery of Xoxoday Plum. Plum is a rewards and incentives infrastructure for businesses. It's a unified, integrated suite of products to handle various rewarding use cases for consumers, sales, channel partners, and employees. 31% of the total tech team is aligned towards this product and comprises 32 members across Plum Tech, Quality, Design, and Product management. The annual FY 2019-20 revenue for Plum was $40MN and it is showing high growth potential this year as well. The product has a good mix of both domestic and international clientele and is expanding. The role will be based out of our head office in Bangalore, Karnataka; however, we are open to discussing the option of remote working with 25-50% travel.

Key Responsibilities

  • Scope and lead technology with the right product and business metrics.
  • Directly contribute to product development by writing code if required.
  • Architect systems for scale and stability.
  • Serve as a role model for our high engineering standards and bring consistency to the many codebases and processes you will encounter.
  • Collaborate with stakeholders across disciplines like sales, customers, product, design, and customer success.
  • Code reviews and feedback.
  • Build simple solutions and designs over complex ones, and have a good intuition for what is lasting and scalable.
  • Define a process for maintaining a healthy engineering culture ( Cadence for one-on-ones, meeting structures, HLDs, Best Practices In development, etc).

What are we looking for?

  • Manage a senior tech team of more than 5 direct and 25 indirect developers.
  • Should have experience in handling e-commerce applications at scale.
  • Should have at least 7+ years of experience in software development, agile processes for international e-commerce businesses.
  • Should be extremely hands-on, full-stack developer with modern architecture.
  • Should exhibit skills to build a good engineering team and culture.
  • Should be able to handle the chaos with product planning, prioritizing, customer-first approach.
  • Technical proficiency
  • JavaScript, SQL, NoSQL, PHP
  • Frameworks like React, ReactNative, Node.js, GraphQL
  • Database technologies like ElasticSearch, Redis, MySQL, Cassandra, MongoDB, Kafka
  • DevOps to manage and architect infra - AWS, CI/CD (Jenkins)
  • System Architecture w.r.t Microservices, Cloud Development, DB Administration, Data Modeling
  • Understanding of security principles and possible attacks and mitigate them.

Whom will you work with?

You will lead the Plum Engineering team and work in close conjunction with the Tech leads of Plum with some cross-functional stake with other products. You'll report to the co-founder directly.

What can you look for?

A wholesome opportunity in a fast-paced environment with scale, international flavour, backend, and frontend. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners, or consumers for better business results.

Way forward

We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.

Careator Technologies Pvt Ltd
Badal Singh
Posted by Badal Singh
Bengaluru (Bangalore)
5 - 10 yrs
₹12L - ₹15L / yr
PostgreSQL
Amazon Web Services (AWS)
DevOps
Git
GitHub
+4 more

    Focussed on delivering scalable performant database platforms that underpin our customer data services in a dynamic and fast-moving agile engineering environment.

·    Experience with different types of enterprise application databases (PostgreSQL a must)

·    Familiar with developing in a Cloud environment (AWS RDS, DMS & DevOps highly desirable).

·    Proficient in using SQL to interrogate, analyze and report on customer data and interactions on live systems and in testing environments (see the sketch after this list).

·    Proficient in using PostgreSQL PL/pgSQL

·    Experienced in delivering deployments and infrastructure as code with automation tools such as Jenkins, Terraform, Ansible, etc.

·    Comfortable using code hosting platforms for version control and collaboration. (git, github, etc)

·    Exposed to and have an opportunity to master automation and learn to use technologies and tools like Oracle, PostgreSQL, AWS, Terraform, GitHub, Nexus, Jenkins, Packer, Bash Scripting, Python, Groovy, and Ansible

·    Comfortable leading complex investigations into service failures and data abnormalities that touch your applications.

·    Experience with Batch and ETL methodologies.

·    Confident in making technical decisions and acting on them (within reason) when under pressure. 

·    Calm dealing with stakeholders and easily be able to translate complex technical scenarios to non-technical individuals.

·    Managing incidents, problems, and change in line with best practice

·    Expected to lead and inspire others in your team and department, drive engineering best practice and compliance, strategic direction, and encourage collaboration and transparency.
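
For illustration only, the kind of live-system interrogation described above might look like this minimal sketch (the DSN is a placeholder and psycopg2 is an assumed client library):

```python
# Flag long-running queries via pg_stat_activity.
import psycopg2

QUERY = """
SELECT pid, state, now() - query_start AS runtime, left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle' AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;
"""

def report_long_running(dsn: str = "postgresql://user:pass@db-host:5432/appdb") -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY)
        for pid, state, runtime, query in cur.fetchall():
            print(f"pid={pid} state={state} runtime={runtime} query={query!r}")

if __name__ == "__main__":
    report_long_running()
```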

 

Bengaluru (Bangalore)
1 - 8 yrs
₹8L - ₹14L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+8 more
In this role, you will be part of a growing, global team of data engineers who collaborate in DevOps mode to enable Merck's business with state-of-the-art technology to leverage data as an asset and make better-informed decisions.

The Merck Data Engineering Team is responsible for designing, developing, testing, and supporting automated end-to-end data pipelines and applications on Merck’s data management and global analytics platform (Palantir Foundry, Hadoop, AWS and other components).

The Foundry platform comprises multiple different technology stacks, which are hosted on Amazon Web Services (AWS) infrastructure or on-premise Merck’s own data centers. Developing pipelines and applications on Foundry requires:

• Proficiency in SQL / Java / Python (Python required; all 3 not necessary)
• Proficiency in PySpark for distributed computation (a minimal example follows this list)
• Familiarity with Postgres and ElasticSearch
• Familiarity with HTML, CSS, and JavaScript and basic design/visual competency
• Familiarity with common databases (e.g. JDBC, MySQL, Microsoft SQL). Not all types required
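
A minimal sketch of the kind of ingestion pipeline described above (the bucket paths, column names, and CSV source are assumptions, not details of the Foundry platform itself):

```python
# Read raw CSV data, clean it, aggregate daily revenue, and write Parquet output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

cleaned = (
    raw.filter(F.col("order_id").isNotNull())                 # drop malformed rows
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("created_at"))
)

daily_revenue = cleaned.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily_revenue.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```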

This position will be project based and may work across multiple smaller projects or a single large project utilizing an agile project methodology.

Roles & Responsibilities:
• Develop data pipelines by ingesting various data sources – structured and un-structured – into Palantir Foundry
• Participate in end to end project lifecycle, from requirements analysis to go-live and operations of an application
• Acts as business analyst for developing requirements for Foundry pipelines
• Review code developed by other data engineers and check against platform-specific standards, cross-cutting concerns, coding and configuration standards and functional specification of the pipeline
• Document technical work in a professional and transparent way. Create high quality technical documentation
• Work out the best possible balance between technical feasibility and business requirements (the latter can be quite strict)
• Deploy applications on Foundry platform infrastructure with clearly defined checks
• Implementation of changes and bug fixes via Merck's change management framework and according to system engineering practices (additional training will be provided)
• DevOps project setup following Agile principles (e.g. Scrum)
• Besides working on projects, act as third level support for critical applications; analyze and resolve complex incidents/problems. Debug problems across a full stack of Foundry and code based on Python, Pyspark, and Java
• Work closely with business users, data scientists/analysts to design physical data models
Chennai, Bengaluru (Bangalore), Hyderabad
8 - 12 yrs
₹10L - ₹25L / yr
.NET
ASP.NET
C#
Microsoft Windows Azure
SQL
+13 more

Senior .NET Cloud (Azure) Practitioner

Job Description

Experience: 5-12 years (approx.)

Education: B-Tech/MCA

 

Mandatory Skills

  • Strong Restful API, Micro-services development experience using ASP.NET CORE Web APIs (C#);
  • Must have exceptionally good software design and programming skills in .Net Core (.NET 3.X, .NET 6) Platform, C#, ASP.net MVC, ASP.net Web API (RESTful), Entity Framework & LINQ
  • Good working knowledge on Azure Functions, Docker, and containers
  • Expertise in Microsoft Azure Platform - Azure Functions, Application Gateway, API Management, Redis Cache, App Services, Azure Kubernetes, CosmosDB, Azure Search, Azure Service Bus, Function Apps, Azure Storage Accounts, Azure KeyVault, Azure Log Analytics, Azure Active Directory, Application Insights, Azure SQL Database, Azure IoT, Azure Event Hubs, Azure Data Factory, Virtual Networks and networking.
  • Strong SQL Server expertise and familiarity with Azure Cosmos DB, Azure (Blob, Table, queue) storage, Azure SQL etc
  • Experienced in Test-Driven Development, unit testing libraries, testing frameworks.
  • Good knowledge of Object Oriented programming, including Design Patterns
  • Cloud Architecture - Technical knowledge and implementation experience using common cloud architecture, enabling components, and deployment platforms.
  • Excellent written and oral communication skills, along with the proven ability to work as a team with other disciplines outside of engineering are a must
  • Solid analytical, problem-solving and troubleshooting skills

Desirable Skills:

 

Roles & Responsibilities

  • Defining best practices & standards for usage of libraries, frameworks and other tools being used;
  • Architecture, design, and implementation of software from development, delivery, and releases.
  • Breakdown complex requirements into independent architectural components, modules, tasks and strategies and collaborate with peer leadership through the full software development lifecycle to deliver top quality, on time and within budget.
  • Demonstrate excellent communications with stakeholders regarding delivery goals, objectives, deliverables, plans and status throughout the software development lifecycle.
  • Should be able to work with various stakeholders (Architects/Product Owners/Leadership) as well as team - Lead/ Principal/ Individual Contributor for Web UI/ Front End Development;
  • Should be able to work in an agile, dynamic team environment;

 

Recro

at Recro

1 video
32 recruiters
Jisha  Emmanuel
Posted by Jisha Emmanuel
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹18L / yr
NodeJS (Node.js)
JavaScript
API
Microservices
MySQL
+9 more

Key Responsibilities: 

  • Rewrite existing APIs in NodeJS. 
  • Remodel the APIs into Micro services-based architecture. 
  • Implement a caching layer wherever possible. 
  • Optimize the API for high performance and scalability. 
  • Write unit tests for API Testing.
  • Automate the code testing and deployment process.

 

Skills Required: 

  • At least 3 years of experience developing Backends using NodeJS — should be well versed with its asynchronous nature & event loop, and know its quirks and workarounds. 
  • Excellent hands-on experience using MySQL or any other SQL Database. 
  • Good knowledge of MongoDB or any other NoSQL Database. 
  • Good knowledge of Redis, its data types, and their use cases. 
  • Experience with GraphQL and graph databases like Neo4j. 
  • Experience developing and deploying REST APIs. 
  • Good knowledge of Unit Testing and available Test Frameworks. 
  • Good understanding of advanced JS libraries and frameworks. 
  • Experience with Web sockets, Service Workers, and Web Push Notifications. 
  • Familiar with NodeJS profiling tools. 
  • Proficient understanding of code versioning tools such as Git. 
  • Good knowledge of creating and maintaining DevOps infrastructure on cloud platforms. 
  • Should be a fast learner and a go-getter, without any fear of trying out new things.

Preferences: 

  • Experience building a large-scale social or location-based app.
Read more
British Telecom
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: resolving production issues (P1-P4 service issues), problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: understands and has experience with a wide range of programming concepts, and has applied a range of algorithms.
• Commercial & Risk Awareness: Able to understand & evaluate both obvious and subtle commercial
risks, especially in relation to a programme.
Experience you would be expected to have
• Cloud: experience with one of the following cloud vendors: AWS, Azure or GCP
• GCP: experience preferred, but willingness to learn is essential.
• Big Data: Experience with Big Data methodology and technologies
• Programming: Python or Java, with experience working with data (ETL) (see the sketch after this list)
• DevOps: understand how to work in a DevOps and agile way / Versioning / Automation / Defect Management – Mandatory
• Agile methodology - knowledge of Jira
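
To make the ETL expectation above concrete, below is a minimal extract-transform-load sketch using only the Python standard library and a local SQLite target. The file, column and table names are hypothetical; a real pipeline in this role would more likely land data in a GCP warehouse such as BigQuery.

```python
# Minimal ETL sketch: read rows from a CSV, filter and convert them, load into SQLite.
# All names here (input.csv, the columns, the facts table) are made-up placeholders.
import csv
import sqlite3


def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows: list[dict]) -> list[tuple]:
    out = []
    for row in rows:
        if row.get("status") == "active":  # keep only active records
            out.append((row["id"], row["name"], float(row["amount"])))
    return out


def load(records: list[tuple], db_path: str = "warehouse.db") -> None:
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS facts (id TEXT, name TEXT, amount REAL)")
    con.executemany("INSERT INTO facts VALUES (?, ?, ?)", records)
    con.commit()
    con.close()


if __name__ == "__main__":
    load(transform(extract("input.csv")))
```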
Read more
YOptima Media Solutions Pvt Ltd
Lavanya Prabhakar
Posted by Lavanya Prabhakar
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹40L / yr
NodeJS (Node.js)
React.js
Python
DevOps
Fullstack Developer
+1 more

YOptima is a well-capitalized digital startup pioneering full-funnel marketing via programmatic media. YOptima is trusted by leading marketers and agencies in India and is expanding its footprint globally.

 

We are expanding our tech team and looking for a prolific Staff Engineer to lead it (without necessarily being a people manager). Our tech is hosted on Google Cloud and the stack includes React, Node.js, Airflow, Python, Cloud SQL, BigQuery and TensorFlow.
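
As a hint of what day-to-day work on this stack can look like, here is a minimal Airflow 2.x DAG sketch with a single daily task. The DAG id, task name and load_campaign_data function are hypothetical placeholders, not YOptima's actual pipelines.

```python
# Minimal Airflow 2.x DAG sketch: one daily task that would pull campaign metrics
# and push them into the analytics store (e.g. BigQuery) in a real pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_campaign_data():
    # Placeholder body: extract campaign metrics and write them to the warehouse.
    print("loading campaign data")


with DAG(
    dag_id="campaign_metrics_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_campaign_data",
        python_callable=load_campaign_data,
    )
```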

 

If you have hands-on experience and passion for building and running scalable cloud-based platforms that change the lives of the customers globally and drive industry leadership, please read on.

  1. You have 6+ years of quality experience building scalable digital products/platforms, spanning full-stack development, big data analytics and DevOps.
  2. You are great at identifying risks and opportunities, and have the depth that comes with willingness and capability to be hands-on. Do you still code? Do you love to code? Do you love to roll up your sleeves and debug things?
  3. Do you enjoy going deep into that part of the 'full stack' that you are not an expert of?

Responsibilities:

  • You will help build a platform that supports large scale data, with multi-tenancy and near real-time analytics.
  • You will lead and mentor a team of data engineers and full stack engineers to build the next generation data-driven marketing platform and solutions.
  • You will lead exploring and building new tech and solutions that solve business problems of today and tomorrow.

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science or equivalent discipline.
  • Excellent computer systems fundamentals, DS/Algorithms and problem solving skills.
  • Experience in conceiving, designing, architecting, developing and operating full stack, data-driven platforms using Big data and cloud tech in GCP/AWS environments.

What you get: Opportunity to build a global company. Amazing learning experience. Transparent work culture. Meaningful equity in the business.

At YOptima, we value people who are driven by a higher sense of responsibility, bias for action, transparency, persistence with adaptability, curiosity and humility. We believe that successful people have more failures than average people have attempts. And that success needs the creative mindset to deal with ambiguities when you start, the courage to handle rejections and failure and rise up, and the persistence and humility to iterate and course correct.

  1. We look for people who are initiative driven, and not interruption driven. The ones who challenge the status quo with humility and candor.
  2. We believe startup managers and leaders are great individual contributors too, and that there is no place for context free leadership.
  3. We believe that the curiosity and persistence to learn new skills and nuances, and to apply the smartness in different contexts matter more than just academic knowledge.

 

Location:

  1. Brookefield, Bangalore
  2. Jui Nagar, Navi Mumbai
Read more
Kwalee

Zoheb Ahmed
Posted by Zoheb Ahmed
Bengaluru (Bangalore)
1 - 7 yrs
Best in industry
DevOps
Nginx
Python
Perl
Chef
+3 more
  • Job Title - DevOps Engineer

  • Reports Into - Lead DevOps Engineer

  • Location - India


A Little Bit about Kwalee….

Kwalee is one of the world’s leading multiplatform game developers and publishers, with well over 900 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Airport Security and Makeover Studio 3D. We also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope, Die by the Blade and Scathe.


What’s In It For You?

  • Hybrid working - 3 days in the office, 2 days remote/ WFH is the norm

  • Flexible working hours - we trust you to choose how and when you work best

  • Profit sharing scheme - we win, you win 

  • Private medical cover - delivered through BUPA

  • Life Assurance - for long term peace of mind

  • On site gym - take care of yourself

  • Relocation support - available

  • Quarterly Team Building days - we’ve done Paintballing, Go Karting & even Robot Wars

  • Pitch and make your own games on Creative Wednesdays! (https://www.kwalee.com/blog/inside-kwalee/what-are-creative-wednesdays/)


Are You Up To The Challenge?

As a DevOps Engineer you have a passion for automation, security and building reliable, expandable systems. You develop scripts and tools to automate deployment tasks, monitor critical aspects of the operation, and resolve engineering problems and incidents, and you collaborate with architects and developers to help create platforms for the future.
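
As a small, hedged example of the kind of script this role produces, the sketch below polls a set of service health endpoints and flags any failures. The endpoints and alerting behaviour are hypothetical placeholders, not Kwalee's actual tooling.

```python
# Hypothetical monitoring sketch: poll health-check URLs and report unhealthy services.
import urllib.request

SERVICES = {
    "api": "https://example.com/healthz",
    "auth": "https://example.com/auth/healthz",
}


def check(url: str, timeout: int = 5) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, connection and timeout failures
        return False


def main() -> None:
    failures = [name for name, url in SERVICES.items() if not check(url)]
    if failures:
        # In practice this would page someone or post to a chat webhook.
        print(f"ALERT: unhealthy services: {', '.join(failures)}")
    else:
        print("all services healthy")


if __name__ == "__main__":
    main()
```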


Your Team Mates

The DevOps team works closely with game developers and front-end and back-end server developers, making, updating and monitoring application stacks in the cloud. Each team member has specific responsibilities, with their own projects to manage, and brings their own ideas to how the projects should work. Everyone strives for the most efficient, secure and automated delivery of application code and supporting infrastructure.


What Does The Job Actually Involve?

  • Find ways to automate tasks and monitoring so we can continuously improve our systems.

  • Develop scripts and tools to make our infrastructure resilient and efficient.

  • Understand our applications and services and keep them running smoothly.


Your Hard Skills

  • Minimum of 1 year of experience in a DevOps engineering role

  • Deep experience with Linux and Unix systems

  • Basic networking knowledge (named, nginx, etc.)

  • Some coding experience (Python, Ruby, Perl, etc.)

  • Experience with common automation tools (e.g. Chef, Terraform)

  • AWS experience is a plus

  • A creative mindset motivated by challenges and constantly striving for the best 


Your Soft Skills

Kwalee has grown fast in recent years but we’re very much a family of colleagues. We welcome people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances, and all we ask is that you collaborate, work hard, ask questions and have fun with your team and colleagues. 

We don’t like egos or arrogance and we love playing games and celebrating success together. If that sounds like you, then please apply.


A Little More About Kwalee

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts.

Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle.

We have an amazing team of experts collaborating daily between our studios in Leamington Spa, Lisbon, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, Cyprus, the Philippines and many more places around the world. We’ve recently acquired our first external studio, TicTales, which is based in France. 

We have a truly global team making games for a global audience, and it’s paying off: Kwalee has been voted the Best Large Studio and Best Leadership Team at the TIGA Awards (Independent Game Developers’ Association) and our games have been downloaded in every country on earth - including Antarctica!

Read more
Unscript AI

Ritwika Chowdhury
Posted by Ritwika Chowdhury
Bengaluru (Bangalore)
8 - 15 yrs
Best in industry
NodeJS (Node.js)
Kubernetes
Team Management
Docker
Amazon Web Services (AWS)
+4 more

Do you love leading a team of engineers, coding up new products, and making sure that they work well together? If so, this is the job for you.

As an Engineering Manager at Unscript, you'll be responsible for managing a team of engineers focused on developing new products. You'll be able to apply your strong engineering background as well as your experience with large-scale development projects.

You'll also be able to act as Product Owner (we know it's not your job but you'll have to do this :) ) and make sure that the team is working towards the right goals. 

Being the Engineering Manager at Unscript means owning everything from technical issues to product decisions, and being comfortable taking responsibility for everything from hiring and training new hires to making sure you get the best out of every individual.


About Us:


UnScript uses AI to create videos that were never shot. Our technology saves brands the thousands of dollars spent on hiring influencers/actors and shooting videos with them. UnScript was founded by distinguished alums from IIT, with exemplary backgrounds in business and technology. UnScript has raised two rounds of funding from global VCs, with Peter Thiel (Co-founder, PayPal) and Reid Hoffman (Co-founder, LinkedIn) as investors. 


Required Qualifications:


  • B.Tech or higher in Computer Science from a premier institute. (We are willing to waive this requirement if you are an exceptional programmer).

  • Experience building scalable and performant web systems, with a clear focus on reusable modules.

  • Comfortable in a fast-paced environment; able to respond to urgent (and at times ambiguous) requests.

  • Ability to translate fuzzy business problems into technical problems, come up with the design, estimates and plan, and execute and deliver the solution independently.

  • Knowledge of AWS or other cloud infrastructure.

The Team:

Unscript was started by Ritwika Chowdhury. Our team brings experience from other foremost institutions like IIT Kharagpur, Microsoft Research, IIT Bombay, IIIT, BCG etc. We are thrilled to be backed by some of the world's largest VC firms and angel investors. 

Read more