50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru)
Apply to 50+ Google Cloud Platform (GCP) Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest Google Cloud Platform (GCP) Job opportunities across top companies like Google, Amazon & Adobe.

Job Summary
We are seeking a skilled Infrastructure Engineer with 3 to 5 years of experience in Kubernetes to join our team. The ideal candidate will be responsible for managing, scaling, and securing our cloud infrastructure, ensuring high availability and performance. You will work closely with DevOps, SREs, and development teams to optimize our containerized environments and automate deployments.
Key Responsibilities:
- Deploy, manage, and optimize Kubernetes clusters in cloud and/or on-prem environments.
- Automate infrastructure provisioning and management using Terraform, Helm, and CI/CD pipelines.
- Monitor system performance and troubleshoot issues related to containers, networking, and storage.
- Ensure high availability, security, and scalability of Kubernetes workloads.
- Manage logging, monitoring, and alerting using tools like Prometheus, Grafana, and ELK stack.
- Optimize resource utilization and cost efficiency within Kubernetes clusters.
- Implement RBAC, network policies, and security best practices for Kubernetes environments.
- Work with CI/CD pipelines (Jenkins, ArgoCD, GitHub Actions, etc.) to streamline deployments.
- Collaborate with development teams to containerize applications and enhance performance.
- Maintain disaster recovery and backup strategies for Kubernetes workloads.
Required Skills & Qualifications:
- 3 to 5 years of experience in infrastructure and cloud technologies.
- Strong hands-on experience with Kubernetes (K8s), Helm, and container orchestration.
- Experience with cloud platforms (AWS, GCP, Azure) and managed Kubernetes services (EKS, GKE, AKS).
- Proficiency in Terraform, Ansible, or other Infrastructure as Code (IaC) tools.
- Knowledge of Linux system administration, networking, and security.
- Experience with Docker, container security, and runtime optimizations.
- Hands-on experience in monitoring, logging, and observability tools.
- Scripting skills in Bash, Python, or Go for automation.
- Good understanding of CI/CD pipelines and deployment automation.
- Strong troubleshooting skills and experience handling production incidents.
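Several bullets above (RBAC, network policies, security best practices) reduce to writing least-privilege manifests. Purely as an illustration — the namespace `app-ns` and role name `pod-reader` are invented, and real clusters would apply YAML via kubectl or Helm — here is a small Python sketch that builds a read-only Role manifest and lints it for write verbs:

```python
# Sketch: build a least-privilege Kubernetes Role manifest as a Python dict.
# Namespace and role names are illustrative placeholders, not from the posting.

READ_VERBS = ["get", "list", "watch"]

def read_only_role(namespace: str, name: str, resources: list[str]) -> dict:
    """Return a namespaced Role granting read-only access to the given resources."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"namespace": namespace, "name": name},
        "rules": [{"apiGroups": [""], "resources": resources, "verbs": READ_VERBS}],
    }

def is_read_only(role: dict) -> bool:
    """Simple policy lint: no rule may grant write verbs or the wildcard."""
    write_verbs = {"create", "update", "patch", "delete", "*"}
    return all(not write_verbs & set(rule["verbs"]) for rule in role["rules"])

role = read_only_role("app-ns", "pod-reader", ["pods", "pods/log"])
assert is_read_only(role)
```

In practice the same lint idea is what admission controllers and policy engines (e.g., OPA-style tooling) apply cluster-wide.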
We’re looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows.
- Collaborate with teams to gather requirements and build scalable solutions.
- Ensure data governance, security, and optimal performance of systems.
- Mentor junior engineers and drive end-to-end project delivery.
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects.
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms.
- Expertise in big data tools (e.g., Apache Spark, Kafka).
- Excellent communication skills and leadership abilities.
Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
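Workflow orchestration tools like Airflow (mentioned in the preferred skills) fundamentally run tasks in dependency order. A minimal stand-in using only the standard library's `graphlib`, with invented task names, sketches that idea:

```python
from graphlib import TopologicalSorter

# Hypothetical ETL dependency graph: each task maps to the set of tasks it
# depends on, mirroring how an Airflow DAG orders extract -> transform -> load.
dag = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_joined": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_joined"},
}

order = list(TopologicalSorter(dag).static_order())
# Both extracts run before the transform; the load runs last.
assert order.index("transform_joined") > order.index("extract_orders")
assert order[-1] == "load_warehouse"
```

A real orchestrator adds scheduling, retries, and backfills on top of exactly this topological ordering.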
Role: Senior Software Engineer - Backend
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Senior Backend Engineer with a minimum of 3 years of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that power our applications. You will work closely with cross-functional teams to ensure seamless integration between frontend and backend components, leveraging your expertise to architect scalable, secure, and high-performance solutions. As a senior team member, you will mentor junior developers and lead technical initiatives to drive innovation and excellence.
Annual Compensation: 12-18 LPA
Responsibilities:
- Lead the design, development, and maintenance of scalable and efficient backend systems and APIs.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases and NoSQL databases.
- Mentor and guide junior developers, fostering a culture of knowledge sharing and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and provide technical leadership in architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 3 years of proven experience as a Backend Engineer, with a strong portfolio of product-building projects.
- Strong proficiency in backend development using Java, Python, and JavaScript, with experience in building scalable and high-performance applications.
- Experience with popular backend frameworks and libraries for Java (e.g., Spring Boot) and Python (e.g., Django, Flask).
- Strong expertise in SQL and NoSQL databases (e.g., MySQL, MongoDB) with a focus on data modeling and scalability.
- Practical experience with caching mechanisms (e.g., Redis) to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
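The caching requirement above (e.g., Redis) is easy to sketch in miniature. This toy in-process TTL cache mimics the shape of Redis `SETEX`/`GET` with lazy expiry; in production Redis itself would replace it, and the keys shown are illustrative:

```python
import time

class TTLCache:
    """Toy key-value cache with per-entry expiry, mimicking Redis SETEX/GET."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def setex(self, key, ttl_seconds, value):
        """Store a value that expires ttl_seconds from now."""
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        """Return the value, or None if absent or expired (lazy eviction)."""
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]
            return None
        return value

cache = TTLCache()
cache.setex("user:42:profile", 60, {"name": "Asha"})
assert cache.get("user:42:profile") == {"name": "Asha"}
assert cache.get("missing") is None
```

The usual pattern is cache-aside: read through the cache, fall back to the database on a miss, then `setex` the result.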
Job Title : Lead Java Developer (Backend)
Experience Required : 8 to 15 Years
Open Positions : 5
Location : Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode : Open to Remote / Hybrid / Onsite
Notice Period : Immediate Joiner/30 Days or Less
About the Role :
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code—you’ll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities :
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices : SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For :
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile :
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info :
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.
Backend - Software Development Engineer III
Experience - 7+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record of helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers’ technical teams, and MongoDB Solutions Architects.
Location - Chennai or Bangalore
- Relevant experience of 7+ years building high-performance back-end applications with at least 3 or more projects delivered using the required technologies
- Good problem solving skills
- Strong mentoring capabilities
- Good understanding of software development life cycle
- Strong experience in system design and architecture
- Strong focus on quality of work delivered
- Excellent verbal and written communication skills
Required Technical Skills
- Extensive hands-on experience building high-performance web back-ends using Node.js and JavaScript/TypeScript
- Minimum two years of hands-on experience with NestJS
- Strong experience with the Express.js framework
- Implementation experience in monolithic and microservices architecture
- Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases
- Experience integrating with any 3rd party services such as cloud SDKs (Preferable X), payments, push notifications, authentication etc…
- Hands-on experience with Redis, Kafka, or X
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest, or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies

Role description:
You will build curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires solid hands-on development and engineering skills across the GenAI application lifecycle: data ingestion, choosing the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on-premises. As this space evolves rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. Candidates with a strong ML background plus engineering skills are highly preferred for this LLMOps role.
Required skills:
- 4-8 years of experience working on ML projects that include business requirement gathering, model development, training, deployment at scale, and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, Data Engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked with proprietary and open-source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
- Experience in GenAI application deployment on cloud and on-premises at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge of Kubernetes
- Experience in at least one cloud (AWS / GCP / Azure) to deploy AI services
- Experience in creating workable prototypes using agentic AI frameworks like CrewAI, TaskWeaver, and AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired: experience with open-source tools for ML development, deployment, observability, and integration
- Background on DevOps and MLOps will be a plus
- Experience working on collaborative code versioning tools like GitHub/GitLab
- Team player with good communication and presentation skills
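The RAG work described above hinges on a retrieval step: score documents against a query and feed the best matches to the LLM. Purely as an illustration — real pipelines use embeddings and a vector store via frameworks like LangChain, and the corpus here is made up — this dependency-free sketch scores documents by cosine similarity over term counts:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the top-k documents for the query -- the 'R' in RAG.
    A production system would use dense embeddings and a vector index instead."""
    qv = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "kubernetes deploys containerized workloads",
    "guardrails constrain llm output for safety",
]
assert retrieve("llm safety guardrails", docs) == ["guardrails constrain llm output for safety"]
```

The retrieved passages would then be packed into the prompt, which is where guardrails, evaluation (e.g., RAGAS), and observability hook in.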

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM – 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
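The Raw → Silver → Gold layering above follows the medallion pattern: raw events are cleaned and deduplicated into silver, then aggregated into business-level gold tables. In Databricks this would be PySpark over Delta tables; the following pure-Python stand-in with invented fields shows the shape of each hop:

```python
# Hypothetical raw events as landed in the Raw layer: duplicates and bad rows.
raw = [
    {"order_id": 1, "amount": "100.0", "country": "IN"},
    {"order_id": 1, "amount": "100.0", "country": "IN"},   # duplicate
    {"order_id": 2, "amount": "bad",   "country": "IN"},   # malformed amount
    {"order_id": 3, "amount": "50.5",  "country": "US"},
]

def to_silver(rows):
    """Silver layer: deduplicate by order_id and drop rows failing type checks."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({**r, "amount": amount})
    return out

def to_gold(rows):
    """Gold layer: business-level aggregate -- revenue per country."""
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(raw)
assert to_gold(silver) == {"IN": 100.0, "US": 50.5}
```

An orchestrator such as Airflow would run each hop as a task, with table formats like Delta or Iceberg providing the versioned storage underneath.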
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG
About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies by utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Staff Engineer – DevOps with 8-12 years of experience in designing, implementing, and optimizing CI/CD pipelines, cloud infrastructure, and automation frameworks. The ideal candidate will have expertise in Kubernetes, Terraform, CI/CD, Security, Observability, and Cloud Platforms (AWS, Azure, GCP). You will play a key role in scaling and securing our infrastructure, improving developer productivity, and ensuring high availability and performance.
Key Roles & Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, ArgoCD, and Tekton.
- Deploy and manage Kubernetes clusters (EKS, AKS, GKE) and containerized workloads.
- Automate infrastructure provisioning using Terraform, Ansible, Pulumi, or CloudFormation.
- Implement observability and monitoring solutions using Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Ensure security best practices in DevOps, including IAM, secrets management, container security, and vulnerability scanning.
- Optimize cloud infrastructure (AWS, Azure, GCP) for performance, cost efficiency, and scalability.
- Develop and manage GitOps workflows and infrastructure-as-code (IaC) automation.
- Implement zero-downtime deployment strategies, including blue-green deployments, canary releases, and feature flags.
- Work closely with development teams to optimize build pipelines, reduce deployment time, and improve system reliability.
Basic Qualifications:
- A bachelor’s or master’s degree in Computer Science, Electronics Engineering, or a related field.
- 8-12 years of experience in DevOps, Site Reliability Engineering (SRE), or Infrastructure Automation.
- Strong expertise in CI/CD pipelines, version control (Git), and release automation.
- Hands-on experience with Kubernetes (EKS, AKS, GKE) and container orchestration.
- Proficiency in Terraform and Ansible for infrastructure automation.
- Experience with AWS, Azure, or GCP services (EC2, S3, IAM, VPC, Lambda, API Gateway, etc.).
- Expertise in monitoring/logging tools such as Prometheus, Grafana, ELK, OpenTelemetry, or Datadog.
- Strong scripting and automation skills in Python, Bash, or Go.
Preferred Qualifications
- Experience in FinOps (Cloud Cost Optimization) and Kubernetes cluster scaling.
- Exposure to serverless architectures and event-driven workflows.
- Contributions to open-source DevOps projects.
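Among the deployment strategies listed (blue-green, canary, feature flags), canary routing is simple enough to sketch. One common approach, shown here with hypothetical user ids, hashes a stable identifier into a 0-99 bucket so each user consistently sees the same version during a rollout:

```python
import zlib

def routes_to_canary(user_id: str, canary_percent: int) -> bool:
    """Deterministic canary assignment: hash the user id into a 0-99 bucket
    and compare against the rollout percentage. Stable across requests, so a
    given user always lands on the same version mid-rollout."""
    bucket = zlib.crc32(user_id.encode()) % 100
    return bucket < canary_percent

# At 0% nobody is on the canary; at 100% everyone is.
assert not any(routes_to_canary(f"user-{i}", 0) for i in range(50))
assert all(routes_to_canary(f"user-{i}", 100) for i in range(50))

# At 20%, roughly a fifth of users land on the canary.
share = sum(routes_to_canary(f"user-{i}", 20) for i in range(1000)) / 1000
assert 0.1 < share < 0.3
```

In practice this logic lives in an ingress controller, service mesh, or feature-flag service rather than application code; the sketch only shows the bucketing idea.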
Architect
Experience - 12+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT, and Cloud environments, we have an extensive track record of helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate architects eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers’ technical teams, and MongoDB Solutions Architects.
Location - Chennai or Bangalore
● Relevant experience of 12+ years building high-performance applications, with at least 3 years as an architect.
● Good problem solving skills
● Strong mentoring capabilities
● Good understanding of software development life cycle
● Strong experience in system design and architecture
● Strong focus on quality of work delivered
● Excellent verbal and written communication skills
Required Technical Skills
● Extensive hands-on experience building high-performance applications using Node.js (JavaScript/TypeScript) and .NET / Golang / Java / Python.
● Strong experience with appropriate framework(s).
● Well-versed in monolithic and microservices architectures.
● Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases
● Experience working with 3rd party integrations ranging from authentication, cloud services, etc.
● Hands-on experience with Kafka or RabbitMQ.
● Hands-on experience with CI/CD pipelines and at least one cloud provider - AWS / GCP / Azure
● Strong experience writing and maintaining clear documentation
Good to have skills:
● Experience working with frontend technologies - React.Js or Vue.Js or Angular.
● Extensive experience consulting with customers directly for defining architecture or system design.
● Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies

Apply Link - https://tally.so/r/wv0lEA
Key Responsibilities:
- Software Development:
- Design, implement, and optimise clean, scalable, and reliable code across [backend/frontend/full-stack] systems.
- Contribute to the development of microservices, APIs, or UI components as per the project requirements.
- System Architecture:
- Collaborate on the design and enhancement of system architecture.
- Analyse and identify opportunities for performance improvements and scalability.
- Code Reviews and Mentorship:
- Conduct thorough code reviews to ensure code quality, maintainability, and adherence to best practices.
- Mentor and support junior developers, fostering a culture of learning and growth.
- Agile Collaboration:
- Work within an Agile/Scrum framework, participating in sprint planning, daily stand-ups, and retrospectives.
- Collaborate with Carbon Science, designers, and other stakeholders to translate requirements into technical solutions.
- Problem-Solving:
- Investigate, troubleshoot, and resolve complex issues in production and development environments.
- Contribute to incident management and root cause analysis to improve system reliability.
- Continuous Improvement:
- Stay up-to-date with emerging technologies and industry trends.
- Propose and implement improvements to existing codebases, tools, and development processes.
Qualifications:
Must-Have:
- Experience: 2–5 years of professional software development experience in [specify languages/tools, e.g., Java, Python, JavaScript, etc.].
- Education: Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
- Technical Skills:
- Strong proficiency in [programming languages/frameworks/tools].
- Experience with cloud platforms like AWS, Azure, or GCP.
- Knowledge of version control tools (e.g., Git) and CI/CD pipelines.
- Understanding of data structures, algorithms, and system design principles.
Nice-to-Have:
- Experience with containerisation (e.g., Docker) and orchestration tools (e.g., Kubernetes).
- Knowledge of database technologies (SQL and NoSQL).
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent written and verbal communication skills.
- Ability to work in a fast-paced environment and manage multiple priorities effectively.

Integrated Technology Solutions for the Entertainment & Leisure Industry
Required Skills:
- AWS
- Azure experience
- Micro services
- Docker
- Kubernetes / containers
- Serverless architecture
- Architected Cloud projects
- Good communication skills
- Minimum 2 years’ experience as an architect.
Job Summary:
As a Technical Architect, you will be responsible for designing, developing, and overseeing the implementation of technical solutions that meet the business needs of the organization. You will work closely with engineering teams to ensure that the architecture is scalable, secure, cost-effective, and aligned with the industry’s best practices. This is an excellent opportunity for someone with deep technical expertise and a passion for shaping the architecture of complex systems.
Key Responsibilities:
- Solution Design & Architecture: Lead the design and implementation of high-performance, scalable, and secure software architectures. Select appropriate technologies, frameworks, and platforms that align with business requirements and goals.
- Collaboration with Stakeholders: Work closely with product managers, business analysts, and development teams to understand the technical and business requirements. Translate those requirements into efficient, effective technical solutions.
- Guiding Development Teams: Provide technical leadership to development teams, ensuring the solution is implemented according to architectural principles and best practices. Offer mentorship and guidance to junior developers and architects.
- System Integration: Define how the application will integrate with other systems, services, or third-party tools. Implement API design and integration strategies for data exchange between various components and external systems. Oversee data flow, and design middleware or message brokers where necessary for smooth interaction between subsystems.
- Technology Evaluation & Integration: Evaluate and select new technologies, tools, and frameworks that improve system efficiency, maintainability, and scalability. Oversee the integration of systems and third-party services.
- Performance Optimization: Design and implement systems for optimal performance, including high availability, disaster recovery, and load balancing. Conduct performance tuning, troubleshoot bottlenecks, and recommend optimization strategies.
- Security & Compliance: Ensure that systems meet security best practices, and compliance standards (e.g., GDPR, HIPAA). Implement robust security protocols, data protection strategies, and threat mitigation methods.
- Documentation & Knowledge Sharing: Maintain up-to-date architecture documentation and ensure knowledge is shared across the technical teams. Promote a culture of continuous improvement and documentation within the team.
- Code Reviews & Quality Assurance: Participate in code reviews to ensure that the development follows architectural guidelines and best practices. Advocate for clean, maintainable, and high-quality code.
- Cost Management: Design cost-effective solutions that optimize resource usage and minimize operational costs, particularly for cloud-based architectures.
Qualifications & Skills:
- Education:
o Bachelor's degree in Engineering or a related field. PMP or similar project management certification is a plus.
- Experience:
o 10+ years of experience in software development, with at least 3-4 years in a technical architecture or senior technical role.
o Proven experience designing and implementing complex, distributed systems.
- Technical Expertise:
o Strong experience with cloud platforms (AWS, Azure, Google Cloud).
o In-depth knowledge of system architecture patterns (microservices, serverless, event-driven, etc.).
o Expertise in modern programming languages (Java, C#, Python, JavaScript, etc.) and frameworks.
o Experience with databases (relational, NoSQL) and data management strategies.
- Soft Skills:
o Strong communication and interpersonal skills to work effectively with stakeholders across the organization.
o Leadership and mentoring abilities to guide and inspire development teams.
o Problem-solving mindset with the ability to troubleshoot and resolve complex technical issues.
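The system-integration responsibilities above mention message brokers for decoupling subsystems. As a toy illustration of the core contract — Kafka or RabbitMQ would provide this in practice, and the topic names are invented — here is an in-memory pub/sub broker:

```python
from collections import defaultdict

class InMemoryBroker:
    """Toy topic-based pub/sub broker illustrating the decoupling a message
    broker provides: publishers never reference subscribers directly."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        """Register a callable to receive every message on the topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        """Deliver a message to all current subscribers of the topic."""
        for handler in self._subscribers[topic]:
            handler(message)

broker = InMemoryBroker()
received = []
broker.subscribe("orders.created", received.append)
broker.publish("orders.created", {"order_id": 7})
broker.publish("payments.settled", {"order_id": 7})  # no subscriber; dropped
assert received == [{"order_id": 7}]
```

Real brokers add what this sketch omits: persistence, ordering guarantees, consumer groups, and delivery semantics, which is precisely what an architect weighs when choosing between Kafka-style logs and RabbitMQ-style queues.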
Overview: We’re seeking a dynamic and results-oriented Field Sales Manager focused on selling innovative cloud-native technology solutions, including modernization, analytics, AI/ML, and Generative AI, specifically within India's vibrant startup ecosystem. If you’re motivated by fast-paced environments, adept at independently generating opportunities, and excel at closing deals, we'd love to connect with you.
Role Description: In this role, you'll independently identify and engage promising startups, execute strategic go-to-market plans, and build meaningful relationships across the AWS startup ecosystem. You’ll work closely with internal pre-sales and solutions teams to position and propose cloud-native solutions effectively, driving significant customer outcomes.
Key Responsibilities:
• Identify, prospect, and generate qualified pipeline opportunities independently and through collaboration with the AWS startup ecosystem.
• Conduct comprehensive discovery meetings to qualify potential opportunities, aligning customer needs with our cloud-native solutions.
• Collaborate closely with the pre-sales team to develop compelling proposals, presentations, and solution demonstrations.
• Lead end-to-end sales processes, from prospecting and qualification to negotiation and deal closure.
• Build and nurture strong relationships within the startup community, including founders, CTOs, venture capitalists, accelerators, and AWS representatives.
• Stay informed about emerging trends, competitive offerings, and market dynamics within cloud modernization, analytics, AI/ML, and Generative AI.
• Maintain accurate CRM updates, track sales metrics, and regularly report performance and pipeline status to leadership.
Qualifications & Experience:
• BE/BTech/MCA/ME/MTech Only
• 3-6 years of proven experience in technology field sales, ideally in cloud solutions, analytics, AI/ML, or digital transformation.
• Prior experience selling technology solutions directly to startups or growth-stage companies.
• Demonstrated ability to independently manage end-to-end sales cycles with strong results.
• Familiarity and understanding of AWS ecosystem and cloud-native architectures are highly preferred.
• Excellent relationship-building skills, along with exceptional communication, negotiation, and presentation abilities.
• Ability and willingness to travel as needed to customer sites, industry events, and partner meetings.

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG


Key Responsibilities:
- Design, build, and maintain scalable, real-time data pipelines using Apache Flink (or Apache Spark).
- Work with Apache Kafka (mandatory) for real-time messaging and event-driven data flows.
- Build data infrastructure on Lakehouse architecture, integrating data lakes and data warehouses for efficient storage and processing.
- Implement data versioning and cataloging using Apache Nessie, and optimize datasets for analytics with Apache Iceberg.
- Apply advanced data modeling techniques and performance tuning using Apache Doris or similar OLAP systems.
- Orchestrate complex data workflows using DAG-based tools like Prefect, Airflow, or Mage.
- Collaborate with data scientists, analysts, and engineering teams to develop and deliver scalable data solutions.
- Ensure data quality, consistency, performance, and security across all pipelines and systems.
- Continuously research, evaluate, and adopt new tools and technologies to improve our data platform.
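The real-time pipeline work above centers on windowed, event-driven computation. As an illustration only (the posting's actual stack would use Flink or Spark over Kafka topics; the event data here is invented), a tumbling window groups a stream of timestamped events into fixed, non-overlapping buckets:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s):
    """Group (timestamp_s, key) events into fixed, non-overlapping
    windows and count occurrences per key, in the style of a
    Flink keyed tumbling window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Every event falls into exactly one window of width window_size_s.
        window_start = (ts // window_size_s) * window_size_s
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical click-stream events: (timestamp in seconds, event type).
events = [(1, "click"), (3, "view"), (7, "click"), (12, "click")]
result = tumbling_window_counts(events, window_size_s=10)
```

The same grouping logic is what Flink's `TumblingEventTimeWindows` applies at scale, with the added machinery of watermarks and state backends.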
Skills & Qualifications:
- 3–6 years of experience in data engineering, building scalable data pipelines and systems.
- Strong programming skills in Python, Go, or Java.
- Hands-on experience with stream processing frameworks – Apache Flink (preferred) or Apache Spark.
- Mandatory experience with Apache Kafka for stream data ingestion and message brokering.
- Proficiency with at least one DAG-based orchestration tool like Airflow, Prefect, or Mage.
- Solid understanding and hands-on experience with SQL and NoSQL databases.
- Deep understanding of data lakehouse architectures, including internal workings of data lakes and data warehouses, not just usage.
- Experience working with at least one cloud platform, preferably AWS (GCP or Azure also acceptable).
- Strong knowledge of distributed systems, data modeling, and performance optimization.
Nice to Have:
- Experience with Apache Doris or other MPP/OLAP databases.
- Familiarity with CI/CD pipelines, DevOps practices, and infrastructure-as-code in data workflows.
- Exposure to modern data version control and cataloging tools like Apache Nessie.
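The orchestration tools named above (Airflow, Prefect, Mage) all model a pipeline as a DAG and derive a valid execution order from task dependencies. A minimal sketch of that underlying idea, using the stdlib `graphlib` (the task names are hypothetical, not from the posting):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it
# depends on, mirroring how Airflow/Prefect/Mage model upstream
# dependencies.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "refresh_dashboard": {"load_warehouse"},
}

# A topological sort yields an order in which every task runs
# only after all of its upstream dependencies have completed.
order = list(TopologicalSorter(pipeline).static_order())
```

Real orchestrators add scheduling, retries, and parallel execution of independent branches on top of exactly this ordering.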
Overview:
We are seeking a talented and experienced GCP Data Engineer with strong expertise in Teradata, ETL, and Data Warehousing to join our team. As a key member of our Data Engineering team, you will play a critical role in developing and maintaining data pipelines, optimizing ETL processes, and managing large-scale data warehouses on the Google Cloud Platform (GCP).
Responsibilities:
- Design, implement, and maintain scalable ETL pipelines on GCP (Google Cloud Platform).
- Develop and manage data warehouse solutions using Teradata and cloud-based technologies (BigQuery, Cloud Storage, etc.).
- Build and optimize high-performance data pipelines for real-time and batch data processing.
- Integrate, transform, and load large datasets into GCP-based data lakes and data warehouses.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Write efficient, clean, and reusable code for ETL processes and data workflows.
- Ensure data quality, consistency, and integrity across all pipelines and storage solutions.
- Implement data governance practices and ensure security and compliance of data processes.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Participate in the design and implementation of scalable data architectures using GCP services like BigQuery, Cloud Dataflow, and Cloud Pub/Sub.
- Optimize and automate data workflows for continuous improvement.
- Maintain up-to-date documentation of data pipeline architectures and processes.
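The pipeline responsibilities above follow the classic extract-transform-load pattern. A minimal illustrative sketch, using stdlib `sqlite3` purely as a stand-in for BigQuery/Teradata (the table and field names are invented):

```python
import sqlite3

def run_etl(raw_rows, conn):
    """Extract raw rows, apply a simple transform (normalize names,
    drop rows with missing or negative amounts), and load the result
    into a warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    cleaned = [
        (name.strip().lower(), amount)
        for name, amount in raw_rows
        if amount is not None and amount >= 0
    ]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

conn = sqlite3.connect(":memory:")
loaded = run_etl([("  Alice ", 10.0), ("BOB", -5.0), ("Carol", 7.5)], conn)
```

In a GCP pipeline the same shape appears with Cloud Storage as the extract source, Dataflow/Beam as the transform stage, and a BigQuery load job at the end.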
Requirements:
Technical Skills:
- Google Cloud Platform (GCP): Extensive experience with BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Composer.
- ETL Tools: Expertise in building ETL pipelines using tools such as Apache NiFi, Apache Beam, or custom Python-based scripts.
- Data Warehousing: Strong experience working with Teradata for data warehousing, including data modeling, schema design, and performance tuning.
- SQL: Advanced proficiency in SQL and relational databases, particularly in the context of Teradata and GCP environments.
- Programming: Proficient in Python, Java, or Scala for building and automating data processes.
- Data Architecture: Knowledge of best practices in designing scalable data architectures for both structured and unstructured data.
Experience:
- Proven experience as a Data Engineer, with a focus on building and managing ETL pipelines and data warehouse solutions.
- Hands-on experience in data modeling and working with complex, high-volume data in a cloud-based environment.
- Experience with data migration from on-premises to cloud environments (Teradata to GCP).
- Familiarity with Data Lake concepts and technologies.
- Experience with version control systems like Git and working in Agile environments.
- Knowledge of CI/CD and automation processes in data engineering.
Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills, both verbal and written, for interacting with technical and non-technical teams.
- Ability to work collaboratively in a fast-paced, cross-functional team environment.
- Strong attention to detail and ability to prioritize tasks.
Preferred Qualifications:
- Experience with other GCP tools such as Dataproc, Bigtable, Cloud Functions.
- Knowledge of Terraform or similar infrastructure-as-code tools for managing cloud resources.
- Familiarity with data governance frameworks and data privacy regulations.
- Certifications in Google Cloud or Teradata are a plus.
Benefits:
- Competitive salary and performance-based bonuses.
- Health, dental, and vision insurance.
- 401(k) with company matching.
- Paid time off and flexible work schedules.
- Opportunities for professional growth and development.
Backend - Software Development Engineer II
Experience - 4+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premises applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams and MongoDB Solutions Architects.
Location - Bangalore
Basic qualifications:
- Good problem solving skills
- Deep understanding of software development life cycle
- Excellent verbal and written communication skills
- Strong focus on quality of work delivered
- Relevant experience of 4+ years building high-performance backend applications, with at least 2 or more projects implemented using the required technologies
Required Technical Skills:
- Extensive hands-on experience building high-performance web back-ends using Node.js, with a minimum of 3+ years of hands-on experience in Node.js and JavaScript/TypeScript
- Hands-on project experience with NestJS
- Strong experience with the Express.js framework
- Hands-on experience in data modeling and schema design in MongoDB
- Experience integrating with any 3rd party services such as cloud SDKs, payments, push notifications, authentication etc…
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Experience with microservice architecture
- Experience working with other Relational and NoSQL Databases
- Experience with technologies such as Kafka and Redis
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies
Job Title: Backend Developer
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.
Annual Compensation: 6-10 LPA
Responsibilities:
- Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
- Promote a culture of collaboration, knowledge sharing, and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and contribute to architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
- Extensive experience with JavaScript backend frameworks (e.g., Express, Socket) and a deep understanding of their ecosystems.
- Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
- Practical experience with Redis and caching mechanisms to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
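The Redis/caching requirement above boils down to a read-through cache with per-key expiry. A small in-memory sketch of the pattern (illustration only; a real deployment would use a Redis client with GET/SETEX, and the `fetch_user` function here is hypothetical):

```python
import time

class TTLCache:
    """Minimal read-through cache with per-key expiry, mimicking the
    GET-then-SETEX pattern commonly used with Redis."""
    def __init__(self, ttl_s):
        self.ttl_s = ttl_s
        self._store = {}  # key -> (value, expires_at)

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]          # cache hit: skip the backing store
        value = fetch(key)           # cache miss: hit the backing store
        self._store[key] = (value, now + self.ttl_s)
        return value

calls = []
def fetch_user(key):
    # Hypothetical expensive lookup (e.g. a MySQL/MongoDB query).
    calls.append(key)
    return {"id": key, "name": "user-" + key}

cache = TTLCache(ttl_s=60)
first = cache.get_or_fetch("42", fetch_user)
second = cache.get_or_fetch("42", fetch_user)  # served from cache
```

The second call never touches the backing store, which is the performance win the posting is after; the TTL bounds staleness.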
Oracle Database Administrator (DBA)
Job Summary:
We are seeking an experienced Oracle Database Administrator (DBA) to manage, deploy, and optimize Oracle databases across cloud and on-premises environments. The ideal candidate should have strong expertise in Oracle database technologies, high availability solutions, backup and recovery strategies, and performance tuning.
Key Responsibilities:
- Install, configure, and maintain Oracle databases for production, testing, and staging environments.
- Deploy and manage Oracle databases in cloud environments such as AWS (RDS, EC2), Azure (SQL Database for Oracle), and Google Cloud (Cloud SQL, Compute Engine).
- Perform database upgrades, patching, and migrations across various Oracle versions (9i, 10g, 11g, 12c, 19c, and 21c).
- Implement and manage Oracle RAC, Data Guard, RMAN, ASM, and GoldenGate for high availability and disaster recovery solutions.
- Perform performance tuning, query optimization, and index management to ensure database efficiency.
- Conduct on-call rotation duties in a 24/7 environment, responding to escalated incidents and troubleshooting complex production issues.
- Manage database security, access controls, and compliance with industry best practices.
- Work with development and infrastructure teams to optimize database design and performance.
- Automate administrative tasks using PL/SQL, Shell Scripting, and other scripting tools.
Required Skills & Experience:
For L3 (Senior Level) Role:
- Experience: 10+ years in Oracle DBA roles.
- Expertise in managing large-scale Oracle databases in cloud-native and hybrid environments.
- Strong experience with Linux-based Oracle installations, upgrades, and patching.
- Hands-on experience with performance tuning, high-availability solutions, and troubleshooting production issues.
- Ability to handle high-priority incidents and ensure quick resolutions.
For L2 (Mid-Level) Role:
- Experience: 5+ years as an Oracle DBA.
- Must have experience in at least one cloud platform: AWS, Azure, or GCP.
- Proficiency with at least one or two additional database engines, such as IBM DB2, MongoDB, MySQL, MSSQL, or PostgreSQL.
- Strong knowledge of backup and recovery strategies using RMAN.
For L1 (Junior Level) Role:
- Experience: 2+ years in Oracle DBA roles.
- Familiarity with Oracle cloud deployments and multi-database environments.
- Basic understanding of Oracle RAC, Data Guard, and database security best practices.
Preferred Qualifications:
- Oracle certifications (OCP, OCM) are highly desirable.
- Experience in database automation and scripting for administration tasks.
- Familiarity with ITIL processes, Change Advisory Board (CAB) participation, and security compliance.
- Strong problem-solving and analytical skills.
SQL Server Database Administrator (DBA)
Job Description:
We are seeking a skilled SQL Server Database Administrator (DBA) who will be responsible for ensuring the performance, integrity, and security of our databases. The ideal candidate will have expertise in database performance tuning, development, administration, and maintenance while implementing high availability and disaster recovery solutions.
Experience: 2 to 7+ years (based on role level).
Key Responsibilities:
Database Administration: Install, upgrade, and manage SQL Servers.
Performance Optimization: Monitor and fine-tune database performance.
Security & Compliance: Ensure database security, access control, and compliance.
Backup & Recovery: Configure and maintain database backup, disaster recovery, and high-availability solutions.
Automation: Write PowerShell/Unix shell scripts for automating routine tasks.
Database Development: Design and optimize stored procedures, queries, triggers, and views.
Troubleshooting: Work on incidents, change tickets, and problem tickets.
Cloud & Infrastructure: Prior experience working with AWS, Microsoft Azure, and GCP is preferred.
Documentation: Maintain proper documentation for database changes and configurations.
Collaboration: Work closely with developers, system admins, and stakeholders.
Required Skills & Qualifications:
Strong expertise in SQL Server tools and database management.
In-depth knowledge of database performance, security, backup, and recovery.
Proficiency in T-SQL scripting and automation (PowerShell, Unix shell scripting).
MCSE/MCSA certification (preferred).
Working knowledge of Linux and Windows Server infrastructures.
Hands-on experience in configuring SSIS, SSRS, SSAS.
Experience working with MySQL, PostgreSQL, Oracle, or MongoDB (any 2 preferred).
Ability to attend Change Advisory Board (CAB) meetings and document changes.
Urgent Hiring: Senior Java Developers | Bangalore (Hybrid) 🚀
We are looking for experienced Java professionals to join our team! If you have the right skills and are ready to make an impact, this is your opportunity!
📌 Role: Senior Java Developer
📌 Experience: 6 to 9 Years
📌 Education: BE/BTech/MCA (Full-time)
📌 Location: Bangalore (Hybrid)
📌 Notice Period: Immediate Joiners Only
✅ Mandatory Skills:
🔹 Strong Core Java
🔹 Spring Boot (data flow basics)
🔹 JPA
🔹 Google Cloud Platform (GCP)
🔹 Spring Framework
🔹 Docker, Kubernetes (Good to have)

Position Name : Product Engineer (Backend Heavy)
Experience : 3 to 5 Years
Location : Bengaluru (Work From Office, 5 Days a Week)
Positions : 2
Notice Period : Immediate joiners or candidates serving notice (within 30 days)
Role Overview :
We’re looking for Product Engineers who are passionate about building scalable backend systems in the FinTech & payments domain. If you enjoy working on complex challenges, contributing to open-source projects, and driving impactful innovations, this role is for you!
What You’ll Do :
- Develop scalable APIs and backend services.
- Design and implement core payment systems.
- Take end-to-end ownership of product development from zero to one.
- Work on database design, query optimization, and system performance.
- Experiment with new technologies and automation tools.
- Collaborate with product managers, designers, and engineers to drive innovation.
What We’re Looking For :
- 3+ Years of professional backend development experience.
- Proficiency in any backend programming language (Ruby on Rails experience is a plus but not mandatory).
- Experience in building APIs and working with relational databases.
- Strong communication skills and ability to work in a team.
- Open-source contributions (minimum 50 stars on GitHub preferred).
- Experience in building and delivering 0 to 1 products.
- Passion for FinTech & payment systems.
- Familiarity with CI/CD, DevOps practices, and infrastructure management.
- Knowledge of payment protocols and financial regulations (preferred but not mandatory)
Main Technical Skills :
- Backend : Ruby on Rails, PostgreSQL
- Infrastructure : GCP, AWS, Terraform (fully automated infrastructure)
- Security : Zero-trust security protocol managed via Teleport
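On the database design and query optimization work mentioned above, indexes are usually the first lever. A small sketch using stdlib `sqlite3` as a stand-in for PostgreSQL (the `payments` schema is invented), showing the query planner switch from a full table scan to an index search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE payments (id INTEGER PRIMARY KEY, merchant TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO payments (merchant, amount) VALUES (?, ?)",
    [("m%d" % (i % 100), float(i)) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM payments WHERE merchant = 'm7'"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

conn.execute("CREATE INDEX idx_payments_merchant ON payments (merchant)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
```

PostgreSQL's `EXPLAIN` exposes the same before/after shift (Seq Scan vs. Index Scan), which is the everyday workflow behind payment-volume query tuning.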
Java Developer with GCP
Skills : Java and Spring Boot, GCP, Cloud Storage, BigQuery, RESTful API,
EXP : SA(6-10 Years)
Loc : Bangalore, Mangalore, Chennai, Coimbatore, Pune, Mumbai, Kolkata
Np : Immediate to 60 Days.
Kindly share your updated resume via WA - 91five000260seven

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
Proven experience with GCP (including BigQuery and other GCP services), Databricks, Airflow, Spark, and dbt.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, and Cassandra.
Strong proficiency in Python and data modeling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
at Bottle Lab Technologies Pvt Ltd

About SmartQ
A leading B2B Food-Tech company built on 4 pillars-great people, great food, great experience, and greater good. Solving complex business problems with our heart and analyzing possible solutions with our mind lie in our DNA. We are on the perpetual route of serving our clients wholeheartedly. Armed with the stability of an MNC and the agility of a start-up, we have spread across 17 countries, having collaborated and executed successfully with 600 clients. We have grown from strength to strength with a blend of exuberant youth and exceptional experience. Bengaluru, being our headquarters, is known as the innovation hub and we have grown up to be the global leader in the institutional food tech space. We were recently acquired by the world's largest foodservice company – Compass group which has an annual turnover of 20 billion USD.
In this role, you will:
1. Collaborate with Product & Design Teams: Work closely with the Product team to ensure that we are building a scalable, bug-free platform. You will actively participate in product and design discussions, offering valuable insights from a backend perspective to align technology with business goals.
2. Drive Adoption of New Technologies: You will lead brainstorming sessions and define a clear direction for the backend team to incorporate the latest technologies into day-to-day development, continuously optimizing for performance, scalability, and efficiency.
3. RESTful API Design & Development: You will ensure that the APIs you design and develop are well-structured, follow best practices, and are suitable for consumption by frontend teams across multiple platforms. A key part of your role is making sure these APIs are scalable and maintainable.
4. Third-Party Integration Support: As we sometimes partner with third-party providers to expedite our market entry, you'll work closely with these partners to integrate their solutions into our system. This involves participating in calls, finding the best integration methods, and providing ongoing support.
5. AI and Prompt Engineering: With AI becoming more integral to backend development, you'll leverage AI to speed up development processes and maintain best practices. Familiarity with prompt engineering and AI-driven problem-solving is a significant plus on our team.
Must-Have Requirements:
- Strong expertise in Python, microservices, backend development and scalable architectures.
- Proficiency in designing and building REST APIs.
- Experience with unit testing in any testing framework and maintaining 100% code coverage.
- Experience in working with NoSQL DB.
- Strong understanding of any Cloud platforms such as - GCP/AWS/Azure.
- Profound knowledge of serverless design patterns.
- Familiarity with Django, webapp2, Flask, or similar web app frameworks.
- Experience collaborating with product and design teams.
- Familiarity with integrating third-party solutions.
Good-to-Have Requirements:
- Educational background includes a degree (B.E/B.Tech/M.Tech) in Computer Science, Engineering, or a related field.
- 4+ years’ experience as a backend/cloud developer.
- Good understanding of the Google Cloud Platform.
- Knowledge of AI and how to leverage it for day-to-day tasks in backend development.
- Familiarity with prompt engineering to enhance productivity.
- Prior experience working with global or regional teams.
- Experience with agile methodologies and working within cross-functional teams.
Job Title : Chief Technology Officer (CTO) – Blockchain & Web3
Location : Bangalore & Gurgaon
Job Type : Full-Time, On-Site
Working Days : 6 Days
About the Role :
- We are seeking an experienced and visionary Chief Technology Officer (CTO) to lead our Blockchain & Web3 initiatives.
- The ideal candidate will have a strong technical background in Blockchain, Distributed Ledger Technology (DLT), Smart Contracts, DeFi, and Web3 applications.
- As a CTO, you will be responsible for defining and implementing the technology roadmap, leading a high-performing tech team, and driving innovation in the Blockchain and Web3 space.
Key Responsibilities :
- Define and execute the technical strategy and roadmap for Blockchain & Web3 products and services.
- Lead the architecture, design, and development of scalable, secure, and efficient blockchain-based applications.
- Oversee Smart Contract development, Layer-1 & Layer-2 solutions, DeFi, NFTs, and decentralized applications (dApps).
- Manage and mentor a team of engineers, developers, and blockchain specialists to ensure high-quality product delivery.
- Drive R&D initiatives to stay ahead of emerging trends and advancements in Blockchain & Web3 technologies.
- Collaborate with cross-functional teams including Product, Marketing, and Business Development to align technology with business goals.
- Ensure regulatory compliance, security, and scalability of Blockchain solutions.
- Build and maintain relationships with industry partners, investors, and technology vendors to drive innovation.
Required Qualifications & Experience :
- 10+ Years of overall experience in software development with at least 5+ Years in Blockchain & Web3 technologies.
- Deep understanding of Blockchain protocols (Ethereum, Solana, Polkadot, Hyperledger, etc.), consensus mechanisms, cryptographic principles, and tokenomics.
- Hands-on experience with Solidity, Rust, Go, Node.js, Python, or other blockchain programming languages.
- Proven track record of building and scaling decentralized applications (dApps), DeFi platforms, or NFT marketplaces.
- Experience with cloud infrastructure (AWS, Azure, GCP) and DevOps best practices.
- Strong leadership and management skills with experience in building and leading high-performing teams.
- Excellent problem-solving skills with the ability to work in a fast-paced, high-growth environment.
- Strong understanding of Web3, DAOs, Metaverse, and the evolving regulatory landscape.
Preferred Qualifications :
- Prior experience in a CTO, VP Engineering, or similar leadership role.
- Experience in fundraising, investor relations, and strategic partnerships.
- Knowledge of cross-chain interoperability and Layer-2 scaling solutions.
- Understanding of data privacy, security, and compliance regulations related to Blockchain & Web3.
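The blockchain fundamentals this role draws on (cryptographic hashing, chain integrity) reduce to each block committing to its predecessor's hash, so tampering anywhere invalidates everything downstream. A toy Python sketch of that core idea only; real chains like Ethereum add consensus, signatures, and Merkle trees on top:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A toy block: its hash covers both its own payload and the
    previous block's hash, so an edit anywhere breaks the chain."""
    body = {"data": data, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain):
    for prev, block in zip(chain, chain[1:]):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        # Each block must both link to its predecessor and match
        # its own recomputed hash.
        if block["prev_hash"] != prev["hash"] or block["hash"] != recomputed:
            return False
    return True

genesis = make_block("genesis", "0" * 64)
chain = [genesis, make_block("tx: a->b 5", genesis["hash"])]
valid_before = chain_is_valid(chain)
chain[1]["data"] = "tx: a->b 500"   # tamper with a recorded transaction
valid_after = chain_is_valid(chain)
```

The tamper-evidence shown here is the property that consensus mechanisms and distributed replication then make practically irreversible.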


About the Role:
We are seeking a highly motivated and experienced Engineering Manager to lead and mentor a team of talented engineers. You will play a crucial role in shaping the technical direction, driving project execution, and fostering a collaborative and high-performing environment. This role requires a strong technical background, excellent leadership skills, and a passion for building innovative products.
Responsibilities:
- Leadership & Mentorship: Lead, mentor, and coach a team of engineers, fostering their professional growth and development. Conduct performance reviews, provide constructive feedback, and identify training opportunities.
- Technical Guidance: Provide technical leadership and guidance to the team, ensuring adherence to best practices and architectural principles. Participate in code reviews and contribute to technical design discussions.
- Project Management: Plan, execute, and deliver projects on time and within budget. Define project scope, manage timelines, allocate resources, and track progress. Proactively identify and mitigate risks.
- Strategic Planning: Contribute to the overall technical strategy and roadmap for the team and the organization. Collaborate with product managers and other stakeholders to define product requirements and prioritize projects.
- Collaboration & Communication: Foster a collaborative and communicative team environment. Effectively communicate project updates, technical decisions, and team accomplishments to stakeholders.
- Hiring & Onboarding: Participate in the hiring process, including interviewing candidates and making hiring recommendations. Onboard new team members and provide them with the necessary resources and support to succeed.
- Process Improvement: Identify opportunities to improve engineering processes and implement best practices. Champion continuous improvement and innovation within the team.
- Hands-on Contribution (as needed): While primarily a leadership role, a willingness to occasionally contribute hands-on to development tasks (e.g., coding, debugging) is a plus, especially in a growing team.
Required Skills & Experience:
- 10-15 years of experience in software engineering, with a proven track record of building and delivering high-quality software products.
- Strong proficiency in Python and API development.
- Solid understanding of front-end technologies, particularly ReactJS and Redux.
- Experience with big data technologies, such as PySpark.
- Familiarity with AI/ML concepts and applications is highly desirable.
- Proven leadership experience, with the ability to motivate and mentor a team of engineers.
- Excellent communication, interpersonal, and problem-solving skills.
- Experience with Agile development methodologies.
- Bachelor's degree in Computer Science or a related field.
Preferred Skills & Experience:
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Experience with DevOps practices.
- Master's degree in Computer Science or a related field.
Company Overview
Adia Health revolutionizes clinical decision support by enhancing diagnostic accuracy and personalizing care. It modernizes the diagnostic process by automating optimal lab test selection and interpretation, utilizing a combination of expert medical insights, real-world data, and artificial intelligence. This approach not only streamlines the diagnostic journey but also ensures precise, individualized patient care by integrating comprehensive medical histories and collective platform knowledge.
Position Overview
We are seeking a talented and experienced Site Reliability Engineer/DevOps Engineer to join our dynamic team. The ideal candidate will be responsible for ensuring the reliability, scalability, and performance of our infrastructure and applications. You will collaborate closely with development, operations, and product teams to automate processes, implement best practices, and improve system reliability.
Key Responsibilities
- Design, implement, and maintain highly available and scalable infrastructure solutions using modern DevOps practices.
- Automate deployment, monitoring, and maintenance processes to streamline operations and increase efficiency.
- Monitor system performance and troubleshoot issues, ensuring timely resolution to minimize downtime and impact on users.
- Implement and manage CI/CD pipelines to automate software delivery and ensure code quality.
- Manage and configure cloud-based infrastructure services to optimize performance and cost.
- Collaborate with development teams to design and implement scalable, reliable, and secure applications.
- Implement and maintain monitoring, logging, and alerting solutions to proactively identify and address potential issues.
- Conduct periodic security assessments and implement appropriate measures to ensure the integrity and security of systems and data.
- Continuously evaluate and implement new tools and technologies to improve efficiency, reliability, and scalability.
- Participate in on-call rotation and respond to incidents promptly to ensure system uptime and availability.
Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field
- Proven experience (5+ years) as a Site Reliability Engineer, DevOps Engineer, or similar role
- Strong understanding of cloud computing principles and experience with AWS
- Experience building and supporting complex CI/CD pipelines using GitHub
- Experience building and supporting infrastructure as code using Terraform
- Proficiency in scripting and automation tools
- Solid understanding of networking concepts and protocols
- Understanding of security best practices and experience implementing security controls in cloud environments
- Knowledge of modern security and compliance frameworks such as SOC 2, HIPAA, and HITRUST is a solid advantage.
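Since the role calls for infrastructure as code with Terraform, a minimal sketch of what that looks like may be useful (the provider version, region, and bucket name are illustrative assumptions, not details from this posting):

```hcl
# Hypothetical example: a tagged S3 bucket managed as code.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}

# Bucket name is a placeholder; adjust to your own naming convention.
resource "aws_s3_bucket" "app_artifacts" {
  bucket = "example-app-artifacts"

  tags = {
    Environment = "staging"
    ManagedBy   = "terraform"
  }
}
```

Running `terraform plan` against a sketch like this previews the change before `terraform apply` provisions it, which is the review-then-deploy workflow the role describes.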
Job Overview:
We are seeking an experienced DevOps Engineer to join our team. The successful candidate will be responsible for designing, implementing, and maintaining the infrastructure and software systems required to support our development and production environments. The ideal candidate should have a strong background in Linux, GitHub, GitHub Actions/Jenkins, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub, GitHub Actions/Jenkins, Kubernetes, Helm, and ArgoCD.
• Deploy and manage Kubernetes clusters using AWS.
• Configure and maintain Envoy Proxy and Cert-Manager to automate deployment and manage application environments.
• Monitor system performance using Datadog, ELK, and Cloudflare tools.
• Automate infrastructure management and maintenance tasks using Terraform, Ansible, or similar tools.
• Collaborate with development teams to design, implement and test infrastructure changes.
• Troubleshoot and resolve infrastructure issues as they arise.
• Participate in on-call rotation and provide support for production issues.
Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering or a related field.
• 4+ years of experience in DevOps engineering with a focus on Linux, GitHub, GitHub Actions/CodeFresh, ArgoCD, AWS, Kubernetes, Helm, Datadog, MongoDB, Envoy Proxy, Cert-Manager, Terraform, ELK, Cloudflare, and BigRock.
• Strong understanding of Linux administration and shell scripting.
• Experience with automation tools such as Terraform, Ansible, or similar.
• Ability to write infrastructure as code using tools such as Terraform, Ansible, or similar.
• Experience with container orchestration platforms such as Kubernetes.
• Familiarity with container technologies such as Docker.
• Experience with cloud providers such as AWS.
• Experience with monitoring tools such as Datadog and ELK.
Skills:
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration skills.
• Ability to work independently or in a team environment.
• Strong attention to detail.
• Ability to learn and apply new technologies quickly.
• Ability to work in a fast-paced and dynamic environment.
• Strong understanding of DevOps principles and methodologies.
Kindly apply at https://www.wohlig.com/careers
About the Role:
We are seeking a talented and passionate DevOps Engineer to join our dynamic team. You will be responsible for designing, implementing, and managing scalable and secure infrastructure across multiple cloud platforms. The ideal candidate will have a deep understanding of DevOps best practices and a proven track record in automating and optimizing complex workflows.
Key Responsibilities:
Cloud Management:
- Design, implement, and manage cloud infrastructure on AWS, Azure, and GCP.
- Ensure high availability, scalability, and security of cloud resources.
Containerization & Orchestration:
- Develop and manage containerized applications using Docker.
- Deploy, scale, and manage Kubernetes clusters.
CI/CD Pipelines:
- Build and maintain robust CI/CD pipelines to automate the software delivery process.
- Implement monitoring and alerting to ensure pipeline efficiency.
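The pipeline responsibilities above, scoped to a GitHub-hosted repository, can be sketched as a minimal workflow (the workflow, job, and step names below are invented for illustration, not taken from this posting):

```yaml
# Hypothetical GitHub Actions workflow: build and test on every push to main.
name: ci
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

A real pipeline would layer on the monitoring and alerting the posting mentions (e.g., failure notifications), but the build/test gate above is the core automation pattern.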
Version Control & Collaboration:
- Manage code repositories and workflows using Git.
- Collaborate with development teams to optimize branching strategies and code reviews.
Automation & Scripting:
- Automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or similar.
- Write scripts to optimize and maintain workflows.
Monitoring & Logging:
- Implement and maintain monitoring solutions to ensure system health and performance.
- Analyze logs and metrics to troubleshoot and resolve issues.
Required Skills & Qualifications:
- 3-5 years of experience with AWS, Azure, and Google Cloud Platform (GCP).
- Proficiency in containerization tools like Docker and orchestration tools like Kubernetes.
- Hands-on experience building and managing CI/CD pipelines.
- Proficient in using Git for version control.
- Experience with scripting languages such as Bash, Python, or PowerShell.
- Familiarity with infrastructure-as-code tools like Terraform or CloudFormation.
- Solid understanding of networking, security, and system administration.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and teamwork skills.
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, Azure DevOps Engineer, or Google Professional DevOps Engineer.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with serverless architectures and microservices.

A niche, specialist position in an interdisciplinary team focused on end-to-end solutions. Projects range from proof-of-concept innovative applications and parallel implementations per end-user requests to scaling up and continuous monitoring for improvements. The majority of projects focus on providing automation solutions, both through custom solutions and by adapting generic machine learning standards to specific use cases/domains.
Clientele includes major publishers from the US and Europe, pharmaceutical bigwigs, and government-funded projects.
As a Senior Fullstack Developer, you will be responsible for designing, building, and maintaining scalable and performant web applications using modern technologies. You will work with cutting-edge tools and cloud infrastructure (primarily Google Cloud) and implement robust back-end services with React JS with Typescript, Koa.js, MongoDB, and Redis, while ensuring reliable and efficient monitoring with OpenTelemetry and logging with Bunyan. Your expertise in CI/CD pipelines and modern testing frameworks will be key to maintaining a smooth and efficient software development lifecycle.
Key Responsibilities:
- Fullstack Development: Design, develop, and maintain web applications using JavaScript (Node.js for back-end and React.js with Typescript for front-end).
- Cloud Infrastructure: Leverage Google Cloud services (like Compute Engine, Cloud Storage, Pub/Sub, etc.) to build scalable and resilient cloud solutions.
- API Development: Implement RESTful APIs and microservices with Koa.js, ensuring high performance, security, and scalability.
- Database Management: Manage MongoDB databases for storing and retrieving application data, and use Redis for caching and session management.
- Logging and Monitoring: Utilize Bunyan for structured logging and OpenTelemetry for distributed tracing and monitoring to ensure system health and performance.
- CI/CD: Design, implement, and maintain efficient CI/CD pipelines for continuous integration and deployment, ensuring fast and reliable code delivery.
- Testing & Quality Assurance: Write unit and integration tests using Jest, Mocha, and React Testing Library to ensure code reliability and maintainability.
- Collaboration: Work closely with front-end and back-end engineers to deliver high-quality software solutions, following agile development practices.
- Optimization & Scaling: Identify performance bottlenecks, troubleshoot production issues, and scale the system as needed.
- Code Reviews & Mentorship: Conduct peer code reviews, share best practices, and mentor junior developers to improve team efficiency and code quality.
Must-Have Skills:
- Google Cloud (GCP): Hands-on experience with various Google Cloud services (Compute Engine, Cloud Storage, Pub/Sub, Firestore, etc.) for building scalable applications.
- React.js: Strong experience in building modern, responsive user interfaces with React.js and Typescript
- Koa.js: Strong experience in building web servers and APIs with Koa.js.
- MongoDB & Redis: Proficiency in working with MongoDB (NoSQL databases) and Redis for caching and session management.
- Bunyan: Experience using Bunyan for structured logging and tracking application events.
- OpenTelemetry Ecosystem: Hands-on experience with the OpenTelemetry ecosystem for monitoring and distributed tracing.
- CI/CD: Proficient in setting up CI/CD pipelines using tools like CircleCI, Jenkins, or GitLab CI.
- Testing Frameworks: Solid understanding and experience with Jest, Mocha, and React Testing Library for testing both back-end and front-end applications.
- JavaScript & Node.js: Strong proficiency in JavaScript (ES6+), and experience working with Node.js for back-end services.
Desired Skills & Experience:
- Experience with other cloud platforms (AWS, Azure).
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
- Experience working with TypeScript.
- Knowledge of other logging and monitoring tools.
- Familiarity with agile methodologies and project management tools (JIRA, Trello, etc.).
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5-10 years of hands-on experience as a Fullstack Developer.
- Strong problem-solving skills and ability to debug complex systems.
- Excellent communication skills and ability to work in a team-oriented, collaborative environment.
Job Description-
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Job Title: Solution Architect (ML, Cloud)
Experience: 5-10 years
Client Location: Bangalore
Work Location: Tokyo, Japan (Onsite)
Key Responsibilities:
- Collaborate with stakeholders to understand business needs and develop scalable, efficient technical solutions.
- Architect and implement complex systems integrating Machine Learning, Cloud platforms (AWS, Azure, Google Cloud), and Full Stack Development.
- Lead the development and deployment of cloud-native applications using NoSQL databases, Python, and Kubernetes.
- Design and optimize algorithms to improve performance, scalability, and reliability of solutions.
- Review, validate, and refine architecture to ensure flexibility, scalability, and cost-efficiency.
- Mentor development teams and ensure adherence to best practices for coding, testing, and deployment.
- Contribute to the development of technical documentation and solution roadmaps.
- Stay up-to-date with emerging technologies and continuously improve solution design processes.
Required Skills & Qualifications:
- 5-10 years of experience as a Solution Architect or similar role with expertise in ML, Cloud, and Full Stack Development.
- Proficiency in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
- Expertise in Python and ML frameworks like TensorFlow, PyTorch, etc.
- Practical experience implementing at least two real-world algorithms (e.g., classification, clustering, recommendation systems).
- Strong knowledge of scalable architecture design and cloud-native application development.
- Familiarity with CI/CD tools and DevOps practices.
- Excellent problem-solving abilities and the ability to thrive in a fast-paced environment.
- Strong communication and collaboration skills with cross-functional teams.
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.
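As a rough illustration of the "implemented at least two real-world algorithms" bar (classification being one of the named examples), a nearest-centroid classifier can be sketched in a few lines of plain Python; the training data and labels here are toy assumptions invented for the example:

```python
from collections import defaultdict
from math import dist


def train_centroids(samples):
    """Average the feature vectors of each class into one centroid per label."""
    buckets = defaultdict(list)
    for features, label in samples:
        buckets[label].append(features)
    return {
        label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
        for label, vecs in buckets.items()
    }


def classify(centroids, features):
    """Assign the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: dist(centroids[label], features))


# Toy 2-D data: two well-separated classes.
training = [((0.0, 0.1), "a"), ((0.2, 0.0), "a"),
            ((5.0, 5.1), "b"), ((4.8, 5.0), "b")]
centroids = train_centroids(training)
print(classify(centroids, (0.1, 0.2)))  # -> a
print(classify(centroids, (5.0, 4.9)))  # -> b
```

In interviews for roles like this, the point is usually less the specific algorithm than being able to explain the trade-offs (here: O(classes) prediction cost versus k-NN's O(samples)).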
Preferred Qualifications:
- Experience with microservices and containerization.
- Knowledge of distributed systems and high-performance computing.
- Cloud certifications (AWS Certified Solutions Architect, Google Cloud Professional Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Japanese language proficiency is an added advantage (but not mandatory).
Skills: ML, Cloud (any two major clouds), algorithms (at least two implemented), full stack, Kubernetes, NoSQL, Python
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (5-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage, but not mandatory.
The candidate should have a background in development/programming with experience in at least one of the following: .NET, Java (Spring Boot), ReactJS, or AngularJS.
Primary Skills:
- AWS or GCP Cloud
- DevOps CI/CD pipelines (e.g., Azure DevOps, Jenkins)
- Python/Bash/PowerShell scripting
Secondary Skills:
- Docker or Kubernetes


Job Title: Solution Architect
Work Location: Tokyo
Experience: 7-10 years
Number of Positions: 3
Job Description:
We are seeking a highly skilled Solution Architect to join our dynamic team in Tokyo. The ideal candidate will have substantial experience in designing, implementing, and deploying cutting-edge solutions involving Machine Learning (ML), Cloud Computing, Full Stack Development, and Kubernetes. The Solution Architect will play a key role in architecting and delivering innovative solutions that meet business objectives while leveraging advanced technologies and industry best practices.
Responsibilities:
- Collaborate with stakeholders to understand business needs and translate them into scalable and efficient technical solutions.
- Design and implement complex systems involving Machine Learning, Cloud Computing (at least two major clouds such as AWS, Azure, or Google Cloud), and Full Stack Development.
- Lead the design, development, and deployment of cloud-native applications with a focus on NoSQL databases, Python, and Kubernetes.
- Implement algorithms and provide scalable solutions, with a focus on performance optimization and system reliability.
- Review, validate, and improve architectures to ensure high scalability, flexibility, and cost-efficiency in cloud environments.
- Guide and mentor development teams, ensuring best practices are followed in coding, testing, and deployment.
- Contribute to the development of technical documentation and roadmaps.
- Stay up-to-date with emerging technologies and propose enhancements to the solution design process.
Key Skills & Requirements:
- Proven experience (7-10 years) as a Solution Architect or similar role, with deep expertise in Machine Learning, Cloud Architecture, and Full Stack Development.
- Expertise in at least two major cloud platforms (AWS, Azure, Google Cloud).
- Solid experience with Kubernetes for container orchestration and deployment.
- Strong hands-on experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB, etc.).
- Proficiency in Python, including experience with ML frameworks (such as TensorFlow, PyTorch, etc.) and libraries for algorithm development.
- Must have implemented at least two algorithms (e.g., classification, clustering, recommendation systems, etc.) in real-world applications.
- Strong experience in designing scalable architectures and applications from the ground up.
- Experience with DevOps and automation tools for CI/CD pipelines.
- Excellent problem-solving skills and ability to work in a fast-paced environment.
- Strong communication skills and ability to collaborate with cross-functional teams.
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
Preferred Skills:
- Experience with microservices architecture and containerization.
- Knowledge of distributed systems and high-performance computing.
- Certifications in cloud platforms (AWS Certified Solutions Architect, Google Cloud Professional Cloud Architect, etc.).
- Familiarity with Agile methodologies and Scrum.
- Knowledge of the Japanese language is an additional advantage, but not mandatory.
Key Responsibilities
AI Model Development
- Design and implement advanced Generative AI models (e.g., GPT-based, LLaMA, etc.) to support applications across various domains, including text generation, summarization, and conversational agents.
- Utilize tools like LangChain and LlamaIndex to build robust AI-powered systems, ensuring seamless integration with data sources, APIs, and databases.
Backend Development with FastAPI
- Develop and maintain fast, efficient, and scalable FastAPI services to expose AI models and algorithms via RESTful APIs.
- Ensure optimal performance and low-latency for API endpoints, focusing on real-time data processing.
Pipeline and Integration
- Build and optimize data processing pipelines for AI models, including ingestion, transformation, and indexing of large datasets using tools like LangChain and LlamaIndex.
- Integrate AI models with external services, databases, and other backend systems to create end-to-end solutions.
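The ingestion/transformation/indexing pipeline described above reduces, at its core, to building and querying an index. This dependency-free toy sketch illustrates the pattern only; it is not the LangChain or LlamaIndex API, and the sample documents are invented:

```python
from collections import defaultdict


def build_index(docs):
    """Map each lowercase token to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index


def query(index, terms):
    """Return ids of documents containing every query term (AND semantics)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()


docs = {
    1: "FastAPI exposes the model",
    2: "the model is fine tuned",
    3: "pipelines ingest and transform data",
}
index = build_index(docs)
print(sorted(query(index, ["the", "model"])))  # -> [1, 2]
```

Production systems replace the exact-token lookup with embedding-based retrieval, but ingestion, transformation, and index lookup remain the same three stages.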
Collaboration with Cross-Functional Teams
- Collaborate with data scientists, machine learning engineers, and product teams to define project requirements, technical feasibility, and timelines.
- Work with front-end developers to integrate AI-powered functionalities into web applications.
Model Optimization and Fine-Tuning
- Fine-tune and optimize pre-trained Generative AI models to improve accuracy, performance, and scalability for specific business use cases.
- Ensure efficient deployment of models in production environments, addressing issues related to memory, latency, and resource management.
Documentation and Code Quality
- Maintain high standards of code quality, write clear, maintainable code, and conduct thorough unit and integration tests.
- Document AI model architectures, APIs, and workflows for future reference and onboarding of team members.
Research and Innovation
- Stay updated with the latest advancements in Generative AI, LangChain, and LlamaIndex, and actively contribute to the adoption of new techniques and technologies.
- Propose and explore innovative ways to leverage cutting-edge AI technologies to solve complex problems.
Required Skills and Experience
Expertise in Generative AI
Strong experience working with Generative AI models, including but not limited to GPT-3/4, LLaMA, or other large language models (LLMs).
LangChain & LlamaIndex
Hands-on experience with LangChain for building language model-driven applications, and LlamaIndex for efficient data indexing and querying.
Python Programming
Proficiency in Python for building AI applications, working with frameworks such as TensorFlow, PyTorch, Hugging Face, and others.
API Development with FastAPI
Strong experience developing RESTful APIs using FastAPI, with a focus on high-performance, scalable web services.
NLP & Machine Learning
Solid foundation in Natural Language Processing (NLP) and machine learning techniques, including data preprocessing, feature engineering, model evaluation, and fine-tuning.
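As a deliberately tiny instance of the preprocessing and feature-engineering step this role expects, a bag-of-words vectorizer can be sketched with the Python standard library alone (the sample sentences are invented for the example):

```python
import re
from collections import Counter


def tokenize(text):
    """Lowercase the text and keep alphanumeric word tokens only."""
    return re.findall(r"[a-z0-9]+", text.lower())


def bag_of_words(texts):
    """Turn each text into a count vector over a shared, sorted vocabulary."""
    counts = [Counter(tokenize(t)) for t in texts]
    vocab = sorted(set().union(*counts))
    return vocab, [[c.get(w, 0) for w in vocab] for c in counts]


vocab, vectors = bag_of_words(["The cat sat.", "The cat saw the cat!"])
print(vocab)       # -> ['cat', 'sat', 'saw', 'the']
print(vectors[1])  # -> [2, 0, 1, 2]
```

Real pipelines hand this step to a library (e.g., a TF-IDF vectorizer), but interviewers often probe whether candidates understand what the library is doing underneath.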
Database & Storage Systems
Familiarity with relational and NoSQL databases, data storage, and management strategies for large-scale AI datasets.
Version Control & CI/CD
Experience with Git, GitHub, and implementing CI/CD pipelines for seamless deployment.
Preferred Skills
Containerization & Cloud Deployment
Familiarity with Docker, Kubernetes, and cloud platforms (e.g., AWS, GCP, Azure) for deploying scalable AI applications.
Data Engineering
Experience in working with data pipelines and frameworks such as Apache Spark, Airflow, or Dask.
Knowledge of Front-End Technologies
Familiarity with front-end frameworks (React, Vue.js, etc.) for integrating AI APIs with user-facing applications.
Position Overview: We are seeking a talented and experienced Cloud Engineer specialized in AWS cloud services to join our dynamic team. The ideal candidate will have a strong background in AWS infrastructure and services, including EC2, Elastic Load Balancing (ELB), Auto Scaling, S3, VPC, RDS, CloudFormation, CloudFront, Route 53, AWS Certificate Manager (ACM), and Terraform for Infrastructure as Code (IaC). Experience with other AWS services is a plus.
Responsibilities:
• Design, deploy, and maintain AWS infrastructure solutions, ensuring scalability, reliability, and security.
• Configure and manage EC2 instances to meet application requirements.
• Implement and manage Elastic Load Balancers (ELB) to distribute incoming traffic across multiple instances.
• Set up and manage AWS Auto Scaling to dynamically adjust resources based on demand.
• Configure and maintain VPCs, including subnets, route tables, and security groups, to control network traffic.
• Deploy and manage AWS CloudFormation and Terraform templates to automate infrastructure provisioning using Infrastructure as Code (IaC) principles.
• Implement and monitor S3 storage solutions for secure and scalable data storage.
• Set up and manage CloudFront distributions for content delivery with low latency and high transfer speeds.
• Configure Route 53 for domain management, DNS routing, and failover configurations.
• Manage AWS Certificate Manager (ACM) for provisioning, managing, and deploying SSL/TLS certificates.
• Collaborate with cross-functional teams to understand business requirements and provide effective cloud solutions.
• Stay updated with the latest AWS technologies and best practices to drive continuous improvement.
Qualifications:
• Bachelor's degree in computer science, Information Technology, or a related field.
• Minimum of 2 years of relevant experience in designing, deploying, and managing AWS cloud solutions.
• Strong proficiency in AWS services such as EC2, ELB, Auto Scaling, VPC, S3, RDS, and CloudFormation.
• Experience with other AWS services such as Lambda, ECS, EKS, and DynamoDB is a plus.
• Solid understanding of cloud computing principles, including IaaS, PaaS, and SaaS.
• Excellent problem-solving skills and the ability to troubleshoot complex issues in a cloud environment.
• Strong communication skills with the ability to collaborate effectively with cross-functional teams.
• Relevant AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are highly desirable.
Additional Information:
• We value creativity, innovation, and a proactive approach to problem-solving.
• We offer a collaborative and supportive work environment where your ideas and contributions are valued.
• Opportunities for professional growth and development.
Someshwara Software Pvt Ltd is an equal opportunity employer. We celebrate diversity and are dedicated to creating an inclusive environment for all employees.

NASDAQ listed, Service Provider IT Company
Job Summary:
As a Cloud Architect at our organization, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.
Key Responsibilities:
- Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
- Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
- Develop and maintain cloud-native solutions leveraging services from various cloud providers.
- Architect and deploy microservices using REST and GraphQL to support our application development needs.
- Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
- Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
- Implement robust security practices and policies to protect cloud environments and data.
- Design and implement data management strategies, including data governance, data integration, and data security.
- Stay up-to-date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
- Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
- Optimize cost and performance across different cloud environments.
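For orientation on the GraphQL side of the microservices work above, a service might publish a schema along these lines (the types and fields are hypothetical, not from this posting):

```graphql
# Hypothetical schema for a small catalog microservice.
type Product {
  id: ID!
  name: String!
  priceCents: Int!
}

type Query {
  product(id: ID!): Product
  products(limit: Int = 20): [Product!]!
}
```

Unlike a fixed REST response, clients select exactly the fields they need from such a schema, which is the main architectural trade-off a Cloud Architect weighs when choosing between the two.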
Qualifications/ Experience & Skills Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 10 - 15 Years
- Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
- Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
- Strong knowledge of cloud-native solutions and microservices architecture.
- Proficiency in using GraphQL for designing and implementing APIs.
- Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
- Experience with DevOps practices and tools, including CI/CD pipelines.
- Excellent problem-solving skills and ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
- Experience with data management, including data governance, data integration, and data security.
Preferred Skills:
- Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with cloud cost management and optimization tools.
Job Purpose and Impact
The DevOps Engineer is a key position to strengthen the security automation capabilities which have been identified as a critical area for growth and specialization within Global IT’s scope. As part of the Cyber Intelligence Operation’s DevOps Team, you will be helping shape our automation efforts by building, maintaining and supporting our security infrastructure.
Key Accountabilities
- Collaborate with internal and external partners to understand and evaluate business requirements.
- Implement modern engineering practices to ensure product quality.
- Provide designs, prototypes and implementations incorporating software engineering best practices, tools and monitoring according to industry standards.
- Write well-designed, testable and efficient code using full-stack engineering capability.
- Integrate software components into a fully functional software system.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Proficiency in at least one configuration management or orchestration tool, such as Ansible.
- Experience with cloud monitoring and logging services.
Qualifications
Minimum Qualifications
- Bachelor's degree in a related field or equivalent experience
- Knowledge of public cloud services & application programming interfaces
- Working experience with continuous integration and delivery practices
Preferred Qualifications
- 3-5 years of relevant experience, whether in IT, IS, or software development
- Experience in:
- Code repositories such as Git
- Scripting languages (Python & PowerShell)
- Using Windows, Linux, Unix, and mobile platforms within cloud services such as AWS
- Cloud infrastructure as a service (IaaS) / platform as a service (PaaS), microservices, Docker containers, Kubernetes, Terraform, Jenkins
- Databases such as Postgres, SQL, Elastic
Job Description - Sales Manager
Minimum 15 years of experience
Should have experience selling the Cloud/IT SaaS product portfolio that Savex deals with
Team management experience, leading the cloud business including teams
Sales Manager - Cloud Solutions
Reporting to senior management
Good personality
Distribution background
Keen on channel partners
Good database of OEMs and channel partners
Age group - 35 to 45 yrs
Male candidate
Good communication
B2B channel sales
Location - Bangalore
If interested, reply with your CV and the details below:
Total experience -
Current CTC -
Expected CTC -
Notice period -
Current location -
Qualification -
Total experience in channel sales -
Which Cloud IT products have you sold?
What annual revenue have you generated through sales?
Experience: 5+ Years
• Experience in Core Java, Spring Boot
• Experience in microservices and Angular
• Extensive experience in developing enterprise-scale systems for global organizations. Should possess good architectural knowledge and be aware of enterprise application design patterns.
• Should be able to analyze, design, develop and test complex, low-latency client-facing applications.
• Good development experience with RDBMS in SQL Server, Postgres, Oracle or DB2
• Good knowledge of multi-threading
• Basic working knowledge of Unix/Linux
• Excellent problem solving and coding skills in Java
• Strong interpersonal, communication and analytical skills.
• Should be able to express their design ideas and thoughts

GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell); knowledge of Python is a must.
- Collaborate with teams across the company (e.g., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux-based infrastructure.
- Excellent problem-solving and troubleshooting skills
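The Terraform-plus-Python combination above lends itself to small automation helpers. As an illustrative sketch (the plan fragment below is invented, but its shape follows Terraform's documented JSON plan format from `terraform show -json`), a script can summarize what a plan would change before approval:

```python
import json

def summarize_plan(plan_json: str) -> dict:
    """Count planned Terraform actions (create/update/delete) from
    `terraform show -json tfplan` output."""
    plan = json.loads(plan_json)
    counts = {"create": 0, "update": 0, "delete": 0}
    for change in plan.get("resource_changes", []):
        for action in change["change"]["actions"]:
            if action in counts:
                counts[action] += 1
    return counts

# Example plan fragment (resource addresses are hypothetical):
sample = json.dumps({
    "resource_changes": [
        {"address": "google_storage_bucket.logs",
         "change": {"actions": ["create"]}},
        {"address": "google_compute_instance.web",
         "change": {"actions": ["delete", "create"]}},  # replacement
        {"address": "google_project_iam_member.ci",
         "change": {"actions": ["no-op"]}},
    ]
})

print(summarize_plan(sample))  # {'create': 2, 'update': 0, 'delete': 1}
```

In practice such a summary can gate a CI/CD step, e.g. requiring manual approval when any delete is planned.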


Job Title: .NET Developer with Cloud Migration Experience
Job Description:
We are seeking a skilled .NET Developer with experience in C#, MVC, and ASP.NET to join our team. The ideal candidate will also have hands-on experience with cloud migration projects, particularly in migrating on-premise applications to cloud platforms such as Azure or AWS.
Responsibilities:
- Develop, test, and maintain .NET applications using C#, MVC, and ASP.NET
- Collaborate with cross-functional teams to define, design, and ship new features
- Participate in code reviews and ensure coding best practices are followed
- Work closely with the infrastructure team to migrate on-premise applications to the cloud
- Troubleshoot and debug issues that arise during migration and post-migration phases
- Stay updated with the latest trends and technologies in .NET development and cloud computing
Requirements:
- Bachelor's degree in Computer Science or related field
- X+ years of experience in .NET development using C#, MVC, and ASP.NET
- Hands-on experience with cloud migration projects, preferably with Azure or AWS
- Strong understanding of cloud computing concepts and principles
- Experience with database technologies such as SQL Server
- Excellent problem-solving and communication skills
Preferred Qualifications:
- Microsoft Azure or AWS certification
- Experience with other cloud platforms such as Google Cloud Platform (GCP)
- Familiarity with DevOps practices and tools
Publicis Sapient Overview:
As a Senior Associate in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
Job Summary:
As Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration, and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is also required.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience with 3+ years in Data related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure to at least one cloud platform and its related data services (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build an end-to-end data pipeline
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed, with working knowledge of data platform related services on at least one cloud platform, IAM, and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, automated build & deployment pipelines, code quality
6. Cloud data specialty and other related Big Data technology certifications
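The batch and real-time pipeline work listed above often boils down to windowed aggregations of the kind Spark Streaming performs. A minimal pure-Python sketch of a tumbling-window count (the clickstream events below are hypothetical) looks like this:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_sec=60):
    """Group (timestamp, key) events into fixed-size windows and count
    per key, mimicking a Spark Streaming style tumbling-window aggregation."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_sec) * window_sec  # bucket by window
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical clickstream events: (epoch seconds, page)
events = [(0, "home"), (10, "home"), (30, "cart"),
          (65, "home"), (70, "checkout")]

print(tumbling_window_counts(events))
# {0: {'home': 2, 'cart': 1}, 60: {'home': 1, 'checkout': 1}}
```

In a real pipeline the same grouping logic would run distributed over a Kafka topic via Spark Structured Streaming or Flink rather than an in-memory list.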
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Job Summary:
As Senior Associate L1 in Data Engineering, you will create technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.
The role requires a hands-on technologist with a strong programming background in Java / Scala / Python, experience in data ingestion, integration, and wrangling, computation, and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferable.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience with 1.5+ years in Data related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build an end-to-end data pipeline; working knowledge of real-time data pipelines is an added advantage
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, automated build & deployment pipelines, code quality
6. Working knowledge of data platform related services on at least one cloud platform, IAM, and data security
7. Cloud data specialty and other related Big Data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Data Engineering : Senior Engineer / Manager
As Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. Utilize a deep understanding of data integration and big data design principles in creating custom solutions or implementing package solutions. You will independently drive design discussions to ensure the necessary health of the overall solution.
Must Have skills :
1. GCP
2. Spark Streaming: live data streaming experience is desired.
3. Any one coding language: Java / Python / Scala
Skills & Experience :
- Overall experience of minimum 5+ years, with minimum 4 years of relevant experience in Big Data technologies
- Hands-on experience with the Hadoop stack - HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required to build an end-to-end data pipeline; working knowledge of real-time data pipelines is an added advantage
- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed and working knowledge with data platform related services on GCP
- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position
Your Impact :
- Data Ingestion, Integration and Transformation
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
- Build functionality for data analytics, search and aggregation


Job Title: Senior Full Stack Engineer
Location: Bangalore
About threedots:
At threedots, we are committed to helping our customers navigate the complex world of secured credit financing. Our mission is to facilitate financial empowerment through innovative, secured credit solutions like loans against property, securities, FDs & more. Founded by early members of Groww, we are a well-funded startup with over $4M in funding from India's top investors.
Role Overview:
The Senior Full Stack Engineer will be responsible for developing and managing our web infrastructure and leading a team of talented engineers. With a solid background in both front and back-end technologies, and a proven track record of developing scalable web applications, the ideal candidate will have a hands-on approach and a leader's mindset.
Key Responsibilities:
- Lead the design, development, and deployment of our Node and ReactJS-based applications.
- Architect scalable and maintainable web applications that can handle the needs of a rapidly growing user base.
- Ensure the technical feasibility and smooth integration of UI/UX designs.
- Optimize applications for maximum speed and scalability.
- Implement comprehensive security and data protection.
- Manage and review code contributed by the team and maintain high standards of software quality.
- Deploy applications on AWS/GCP and manage server infrastructure.
- Work collaboratively with cross-functional teams to define, design, and ship new features.
- Provide technical leadership and mentorship to other team members.
- Keep abreast with the latest technological advancements to leverage new tech and tools.
Minimum Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 3 years of experience as a full-stack developer.
- Proficient in Node.js and ReactJS.
- Experience with cloud services (AWS/GCP).
- Solid understanding of web technologies, including HTML5, CSS3, JavaScript, and responsive design.
- Experience with databases, web servers, and UI/UX design.
- Strong problem-solving skills and the ability to make sound architectural decisions.
- Proven ability to lead and mentor a tech team.
Preferred Qualifications:
- Experience in fintech
- Strong knowledge of software development methodologies and best practices.
- Experience with CI/CD pipelines and automated testing.
- Familiarity with microservices architecture.
- Excellent communication and leadership skills.
What We Offer:
- The opportunity to be part of a founding team and shape the company's future.
- Competitive salary with equity options.
- A creative and collaborative work environment.
- Professional growth opportunities as the company expands.
- Additional Startup Perks

How You'll Contribute:
● Redefine fintech architecture standards by building easy-to-use, highly scalable, robust, and flexible APIs
● Perform in-depth analysis of systems/architectures, predict potential future breakdowns, and proactively bring solutions
● Partner with internal stakeholders to identify potential feature implementations that could cater to our growing business needs
● Drive the team towards writing high-quality code; tackle abstractions/flaws in system design to attain revved-up API performance and high code reusability and readability
● Think through the complex fintech infrastructure and propose an easy-to-deploy modular infrastructure that can adapt and adjust to the specific requirements of the growing client base
● Design and create for scale, optimized memory usage, and high-throughput performance
Skills Required:
● 5+ years of experience in the development of complex distributed systems
● Prior experience in building sustainable, reliable, and secure microservice-based scalable architecture using the Python programming language
● In-depth understanding of Python and its associated libraries and frameworks
● Strong involvement in managing and maintaining production-level code with high-volume API hits and low-latency APIs
● Strong knowledge of data structures, algorithms, design patterns, multithreading concepts, etc.
● Ability to design and implement technical road maps for the system and components
● Bring in new software development practices, design/architecture innovations to make our Tech stack more robust
● Hands-on experience in cloud technologies like AWS/GCP/Azure as well as relational databases like MySQL/PostgreSQL or any NoSQL database like DynamoDB
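For the high-volume, low-latency API work described above, one common resilience pattern is retry with exponential backoff and jitter. A minimal sketch, assuming a flaky upstream call (`flaky_lookup` below is a stand-in, not a real service client):

```python
import time
import random

def retry(max_attempts=3, base_delay=0.05):
    """Decorator: retry a flaky call with exponential backoff and jitter,
    a common resilience pattern for service-to-service calls."""
    def wrap(fn):
        def inner(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise
                    # exponential backoff with jitter to avoid thundering herds
                    time.sleep(base_delay * (2 ** attempt) * random.random())
        return inner
    return wrap

calls = {"n": 0}

@retry(max_attempts=4)
def flaky_lookup():
    calls["n"] += 1
    if calls["n"] < 3:          # fail twice, then succeed
        raise ConnectionError("upstream timeout")
    return "ok"

print(flaky_lookup())  # ok (after two retried failures)
```

Production variants usually also cap total delay and retry only on retryable error classes so that bad requests fail fast.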
Description:
As a Data Engineering Lead at Company, you will be at the forefront of shaping and managing our data infrastructure with a primary focus on Google Cloud Platform (GCP). You will lead a team of data engineers to design, develop, and maintain our data pipelines, ensuring data quality, scalability, and availability for critical business insights.
Key Responsibilities:
1. Team Leadership:
a. Lead and mentor a team of data engineers, providing guidance, coaching, and performance management.
b. Foster a culture of innovation, collaboration, and continuous learning within the team.
2. Data Pipeline Development (Google Cloud Focus):
a. Design, develop, and maintain scalable data pipelines on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, and Dataprep.
b. Implement best practices for data extraction, transformation, and loading (ETL) processes on GCP.
3. Data Architecture and Optimization:
a. Define and enforce data architecture standards, ensuring data is structured and organized efficiently.
b. Optimize data storage, processing, and retrieval for maximum performance and cost-effectiveness on GCP.
4. Data Governance and Quality:
a. Establish data governance frameworks and policies to maintain data quality, consistency, and compliance with regulatory requirements.
b. Implement data monitoring and alerting systems to proactively address data quality issues.
5. Cross-functional Collaboration:
a. Collaborate with data scientists, analysts, and other cross-functional teams to understand data requirements and deliver data solutions that drive business insights.
b. Participate in discussions regarding data strategy and provide technical expertise.
6. Documentation and Best Practices:
a. Create and maintain documentation for data engineering processes, standards, and best practices.
b. Stay up-to-date with industry trends and emerging technologies, making recommendations for improvements as needed.
Qualifications
● Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
● 5+ years of experience in data engineering, with a strong emphasis on Google Cloud Platform.
● Proficiency in Google Cloud services, including BigQuery, Dataflow, Dataprep, and Cloud Storage.
● Experience with data modeling, ETL processes, and data integration.
● Strong programming skills in languages like Python or Java.
● Excellent problem-solving and communication skills.
● Leadership experience and the ability to manage and mentor a team.
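The ETL responsibilities above can be illustrated with a small, GCP-agnostic transform step of the kind that would run inside a Dataflow or BigQuery load job; the schema, data-quality rule, and FX rates below are hypothetical:

```python
def transform(rows):
    """Clean raw event rows before loading: drop records missing a user_id,
    normalize country codes, and derive a revenue_usd column."""
    fx = {"INR": 0.012, "USD": 1.0}           # assumed static FX rates
    out = []
    for r in rows:
        if not r.get("user_id"):
            continue                          # data-quality rule: require user_id
        out.append({
            "user_id": r["user_id"],
            "country": r.get("country", "unknown").upper(),
            "revenue_usd": round(r["amount"] * fx[r["currency"]], 2),
        })
    return out

raw = [
    {"user_id": "u1", "country": "in", "amount": 1000, "currency": "INR"},
    {"user_id": None, "country": "us", "amount": 5, "currency": "USD"},
]
print(transform(raw))
# [{'user_id': 'u1', 'country': 'IN', 'revenue_usd': 12.0}]
```

Keeping transforms as pure functions like this makes the data-quality rules unit-testable independently of the pipeline runner.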
Lead Data Engineer
Data Engineers develop modern data architecture approaches to meet key business objectives and provide end-to-end data solutions. You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems. On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product. It could also be a software delivery project where you're equally happy coding and tech-leading the team to implement the solution.
Job responsibilities
· You might spend a few weeks with a new client on a deep technical review or a complete organizational review, helping them to understand the potential that data brings to solve their most pressing problems
· You will partner with teammates to create complex data processing pipelines in order to solve our clients' most ambitious challenges
· You will collaborate with Data Scientists in order to design scalable implementations of their models
· You will pair to write clean and iterative code based on TDD
· Leverage various continuous delivery practices to deploy, support and operate data pipelines
· Advise and educate clients on how to use different distributed storage and computing technologies from the plethora of options available
· Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions
· Create data models and speak to the tradeoffs of different modeling approaches
· On other projects, you might be acting as the architect, leading the design of technical solutions, or perhaps overseeing a program inception to build a new product
· Seamlessly incorporate data quality into your day-to-day work as well as into the delivery process
· Assure effective collaboration between Thoughtworks' and the client's teams, encouraging open communication and advocating for shared outcomes
Job qualifications
Technical skills
· You are equally happy coding and leading a team to implement a solution
· You have a track record of innovation and expertise in Data Engineering
· You're passionate about craftsmanship and have applied your expertise across a range of industries and organizations
· You have a deep understanding of data modelling and experience with data engineering tools and platforms such as Kafka, Spark, and Hadoop
· You have built large-scale data pipelines and data-centric applications using any of the distributed storage platforms such as HDFS, S3, NoSQL databases (Hbase, Cassandra, etc.) and any of the distributed processing platforms like Hadoop, Spark, Hive, Oozie, and Airflow in a production setting
· Hands-on experience with MapR, Cloudera, Hortonworks, and/or cloud-based Hadoop distributions (AWS EMR, Azure HDInsight, Qubole, etc.)
· You are comfortable taking data-driven approaches and applying data security strategy to solve business problems
· You're genuinely excited about data infrastructure and operations with a familiarity working in cloud environments
· Working with data excites you: you have created Big data architecture, you can build and operate data pipelines, and maintain data storage, all within distributed systems
Professional skills
· Advocate your data engineering expertise to the broader tech community outside of Thoughtworks, speaking at conferences and acting as a mentor for more junior-level data engineers
· You're resilient and flexible in ambiguous situations and enjoy solving problems from technical and business perspectives
· An interest in coaching others, sharing your experience and knowledge with teammates
· You enjoy influencing others and always advocate for technical excellence while being open to change when needed
We are looking for a "Sr. Software Engineer (DevOps)" for a reputed client @ Bangalore (permanent role).
Experience: 4+ yrs
Responsibilities:
• As part of a team, you will design, develop, and maintain a scalable multi-cloud DevOps blueprint.
• Understand the overall virtualization platform architecture in cloud environments and design best-in-class solutions that fit the SaaS offering & legacy application modernization.
• Continuously improve the CI/CD pipeline, tools, processes, procedures, and systems relating to developer productivity.
• Collaborate continuously with the product development teams to implement CI/CD pipeline.
• Contribute to the subject matter on Developer Productivity, DevOps, Infrastructure Automation best practices.
Mandatory Skills:
• 1+ years of commercial server-side software development experience & 3+ years of commercial DevOps experience.
• Strong scripting skills (Java or Python) are a must.
• Experience with automation tools such as Ansible, Chef, Puppet etc.
• Hands-on experience with CI/CD tools such as GitLab, Jenkins, Nexus, Artifactory, Maven, Gradle
• Hands-on working experience in developing or deploying microservices is a must.
• Hands-on working experience with at least one of the popular cloud infrastructures such as AWS / Azure / GCP / Red Hat OpenStack is a must.
• Knowledge about microservices hosted in leading cloud environments
• Experience with containerizing applications (Docker preferred) is a must
• Hands-on working experience of automating deployment, scaling, and management of containerized applications (Kubernetes) is a must.
• Strong problem-solving, analytical skills and good understanding of the best practices for building, testing, deploying and monitoring software
Desirable Skills:
• Experience working with Secret management services such as HashiCorp Vault is desirable.
• Experience working with Identity and access management services such as Okta, Cognito is desirable.
• Experience with monitoring systems such as Prometheus, Grafana is desirable.
Educational Qualifications and Experience:
• B.E/B.Tech/MCA/M.Tech (Computer science/Information science/Information Technology is a Plus)
• 4 to 6 years of hands-on experience in server-side application development & DevOps
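Since the role centres on automating the deployment and scaling of containerized applications, one small illustration is generating a Kubernetes Deployment manifest programmatically. Kubernetes accepts JSON manifests as well as YAML, so the output can be piped to `kubectl apply -f -`; the service name and image below are invented:

```python
import json

def deployment_manifest(name, image, replicas=2, port=8080):
    """Build a minimal Kubernetes apps/v1 Deployment manifest as a dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            # selector labels must match the pod template labels
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": port}],
                }]},
            },
        },
    }

manifest = deployment_manifest("payments-api", "registry.example.com/payments:1.4.2")
print(json.dumps(manifest, indent=2))
```

In practice the same structure is usually templated via Helm values rather than hand-built, but generating it in code keeps the manifest easy to validate in unit tests.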
FINTECH CANDIDATES ONLY
About the job:
Emint is a fintech startup with the mission to 'Make the best investing product that Indian consumers love to use, with simplicity & intelligence at the core'. We are creating a platform that gives a holistic view of market dynamics, which helps our users make smart & disciplined investment decisions. Emint is founded by a stellar team of individuals who come with decades of experience investing in Indian & global markets. We are building a highly skilled & disciplined team of professionals and are looking for equally motivated individuals to be part of Emint. We are currently looking to hire a DevOps engineer to join our team in Bangalore.
Job Description :
Must Have:
• Hands-on experience with AWS DevOps
• Experience in Unix with Bash scripting is a must
• Experience working with Kubernetes and Docker
• Experience with GitLab, GitHub, or Bitbucket, and Artifactory
• Packaging, deployment
• CI/CD pipeline experience (Jenkins is preferable)
• CI/CD best practices
Good to Have:
• Startup Experience
• Knowledge of source code management guidelines
• Experience with deployment tools like Ansible/Puppet/Chef is preferable
• IAM knowledge
• Coding knowledge of Python adds value
• Test automation setup experience
Qualifications:
• Bachelor's degree or equivalent experience in Computer Science or related field
• Graduates from IIT / NIT / BITS / IIIT preferred
• Professionals with fintech (stock broking / banking) experience preferred
• Experience in building & scaling B2C apps preferred