50+ Google Cloud Platform (GCP) Jobs in Hyderabad | Google Cloud Platform (GCP) Job openings in Hyderabad


Role description:
You will build curated, enterprise-grade solutions for deploying GenAI applications at production scale for clients. The role requires solid, hands-on development and engineering skills across the GenAI application lifecycle: data ingestion, selecting the best-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale in the cloud or on-premises. Because this space evolves rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. Candidates with a strong ML background and engineering skills are highly preferred for this LLMOps role.
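A minimal, self-contained sketch of the RAG flow mentioned above: retrieve the most relevant context, then assemble a grounded prompt. This is illustrative only; the corpus, scoring, and function names are invented for the example, and a production system would use an embedding model and a vector store (e.g., via LangChain) rather than token overlap.

```python
# Toy RAG flow: rank documents by token overlap, then build a grounded prompt.
# All names and data here are illustrative, not a real deployment.

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query tokens found in the doc."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents ranked by the overlap score."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, contexts: list[str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"

corpus = [
    "BigQuery is a serverless data warehouse on GCP.",
    "Kubernetes orchestrates containerized workloads.",
    "Cloud Run deploys containers without managing servers.",
]
prompt = build_prompt("What is BigQuery on GCP?",
                      retrieve("What is BigQuery on GCP?", corpus))
```

Guardrails, evaluation, and observability would wrap around this core loop (input filtering before `build_prompt`, answer scoring and tracing after the LLM call).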
Required skills:
- 4-8 years of experience working on ML projects, including business requirement gathering, model development, training, deployment at scale, and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, Data Engineering, LangChain, Langtrace, Langfuse, RAGAS, AgentOps (optional)
- Should have worked with both proprietary and open-source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability
- Experience deploying GenAI applications in the cloud and on-premises at production scale
- Experience creating CI/CD pipelines
- Working knowledge of Kubernetes
- Experience with at least one cloud (AWS / GCP / Azure) to deploy AI services
- Experience creating workable prototypes using agentic AI frameworks such as CrewAI, TaskWeaver, and AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired: experience with open-source tools for ML development, deployment, observability, and integration
- Background in DevOps and MLOps is a plus
- Experience with collaborative version-control platforms such as GitHub/GitLab
- Team player with good communication and presentation skills


Full Stack Developer
Location: Hyderabad
Experience: 7+ Years
Type: BCS - Business Consulting Services
RESPONSIBILITIES:
* Strong programming skills in Node.js (must), React.js, and Android/Kotlin (must)
* Hands-on experience in UI development with a good sense of UX
* Hands-on experience in database design and management
* Hands-on experience creating and maintaining backend frameworks for mobile applications
* Hands-on development experience on cloud platforms such as GCP/Azure/AWS
* Ability to manage and provide technical guidance to the team
* Strong experience in designing APIs using RAML, Swagger, etc.
* Service definition development
* API standards, security, and policy definition and management
REQUIRED EXPERIENCE:
* Bachelor's and/or Master's degree in Computer Science or equivalent work experience
* Excellent analytical, problem solving, and communication skills.
* 7+ years of software engineering experience in a multi-national company
* 6+ years of development experience in Kotlin, Node.js, and React.js
* 3+ years of experience creating solutions in native public cloud (GCP, AWS, or Azure)
* Experience with Git or similar version control system, continuous integration
* Proficiency in automated unit test development practices and design methodologies
* Fluent English

As a Solution Architect, you will collaborate with our sales, presales and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You have good written and verbal communication skills, enabling you to convey complex technical concepts clearly and effectively. You are a customer-focused, self-motivated, and responsible team player. You must have experience managing RFPs/RFIs, delivering client demos and presentations, and converting opportunities into winning bids. You bring a strong work ethic and the enthusiasm to embrace new challenges. You can multitask and prioritize, with good time-management skills and a willingness to learn. You can work independently with little or no supervision, and you are process-oriented and methodical, with a quality-first approach.
The ability to convert a client's business challenges and priorities into a winning proposal/bid through excellence in technical solutioning will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or next-generation analytics platforms such as Microsoft Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate data services (e.g., Microsoft Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes.
- Strong understanding of conceptual, logical, and physical data modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
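The ETL/ELT responsibilities above can be sketched in miniature. This toy pipeline uses invented field names and in-memory SQLite standing in for a warehouse; it only illustrates the extract/transform/load split, not any specific Azure or Fabric service.

```python
# Minimal ETL sketch (illustrative only): extract rows from CSV text,
# transform (cast types, drop bad rows), and load into SQLite.
import csv
import io
import sqlite3

# Sample landing-zone data; a real pipeline would read from a data lake.
RAW = "order_id,amount\n1,250.0\n2,\n3,99.5\n"

def extract(text: str) -> list[dict]:
    """Extract: parse the raw CSV into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: drop rows with missing amounts and cast types."""
    return [(int(r["order_id"]), float(r["amount"])) for r in rows if r["amount"]]

def load(rows: list[tuple]) -> int:
    """Load: write cleaned rows to a warehouse table; return row count."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

loaded = load(transform(extract(RAW)))
```

In an ELT variant, the raw rows would be loaded first and the transformation expressed in SQL inside the warehouse.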
What you will Bring
- 10+ years of experience in data analytics and AI technologies, spanning consulting, implementation, and design
- Certifications in data engineering, analytics, cloud, or AI will be a distinct advantage
- Bachelor’s in engineering/ technology or an MCA from a reputed college is a must
- Prior experience of working as a solution architect during presales cycle will be an advantage
Soft Skills
- Communication Skills
- Presentation Skills
- Flexible and Hard-working
Technical Skills
- Knowledge of Presales Processes
- Basic understanding of business analytics and AI
- High IQ and EQ
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.

We're Hiring: Senior Unity Developer (Multiplayer | Mobile)
Hyderabad, Onsite
Full-Time
4+ Years Experience
Are you passionate about building next-level mobile games? We’re on the lookout for a Senior Unity Developer who thrives in multiplayer environments, loves working with cutting-edge tech (like Photon Fusion), and understands how to architect great player experiences with FSMs and AI bots.
This is your chance to work alongside a world-class creative team and shape the gameplay of tomorrow.
What You’ll Do
Collaborate with developers, designers, and artists to build engaging, high-performance mobile games
Design and implement player-facing gameplay systems and features
Build scalable Finite State Machines (FSM) for character, bot, and UI behavior
Lead architecture and ensure code quality, efficiency, and scalability
Optimize and debug gameplay using Unity profiling tools
Mentor junior devs and participate in regular code reviews
Stay ahead of mobile and multiplayer game trends, tools, and tech
Required:
4+ years of Unity development experience (Android/iOS)
Experience in multiplayer game development (Photon Fusion or similar)
Solid grasp of FSM architecture and modular game logic
Skilled in C#, Unity APIs, and optimization for mobile platforms
Experience with AI-driven bots, game logic, and event systems
Strong debugging and profiling skills (Unity Profiler, Crashlytics, etc.)
Familiar with Google Play Console / App Store Connect
Excellent communication & teamwork skills
Passion for games and understanding of game design fundamentals
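The FSM-driven bot behavior described above can be sketched as a transition table. Python is used here for brevity; a Unity implementation would express the same idea in C#. The states and events are hypothetical.

```python
# Illustrative finite state machine (FSM) for a game bot.
# Keeping transitions in a table makes bot behavior modular and testable.

class BotFSM:
    # state -> {event: next_state}; events not listed are ignored.
    TRANSITIONS = {
        "idle":   {"enemy_seen": "chase"},
        "chase":  {"in_range": "attack", "enemy_lost": "idle"},
        "attack": {"enemy_lost": "idle", "out_of_range": "chase"},
    }

    def __init__(self) -> None:
        self.state = "idle"

    def handle(self, event: str) -> str:
        # Events that are invalid in the current state leave it unchanged.
        self.state = self.TRANSITIONS[self.state].get(event, self.state)
        return self.state

bot = BotFSM()
```

The same table-driven pattern extends to UI screens and character controllers; each state's update logic stays isolated, which keeps profiling and debugging tractable.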

We are seeking a highly skilled Java full-stack developer with 5–8 years of experience to join our dynamic development team. The ideal candidate will have deep technical expertise across Java, Microservices, React/Redux, Kubernetes, DevOps tools, and GCP. You will work on designing and deploying full-stack applications that are robust, scalable, and aligned with business goals.
Key Responsibilities
- Design, develop, and deploy scalable full-stack applications using Java, React, and Redux
- Build microservices following SOLID principles
- Collaborate with cross-functional teams, including product owners, QA, BAs, and other engineers
- Write clean, maintainable, and efficient code
- Perform debugging, troubleshooting, and optimization
- Participate in code reviews and contribute to engineering best practices
- Stay updated on security, privacy, and compliance requirements
- Work in an Agile/Scrum environment using tools like JIRA and Confluence
Technical Skills Required
Frontend
- Strong proficiency in JavaScript and modern ES6 features
- Expertise in React.js with advanced knowledge of hooks (useCallback, useMemo, etc.)
- Solid understanding of Redux for state management
Backend
- Strong hands-on experience in Java
- Building and maintaining Microservices architectures
DevOps & Infrastructure
- Experience with CI/CD tools: Jenkins, Nexus, Maven, Ansible
- Terraform for infrastructure as code
- Containerization and orchestration using Docker and Kubernetes/GKE
- Experience with IAM, security roles, service accounts
Cloud
- Proficiency with the services of at least one major cloud provider (GCP preferred)
Database
- Hands-on experience with PostgreSQL, MySQL, BigQuery
Scripting
- Proficiency in Bash/Shell scripting and Python
Non-Technical Skills
- Strong communication and interpersonal skills
- Ability to work effectively in distributed teams across time zones
- Quick learner and adaptable to new technologies
- Team player with a collaborative mindset
- Ability to explain complex technical concepts to non-technical stakeholders
Nice to Have
- Experience with NetReveal / Detica
Why Join Us?
- 🚀 Challenging Projects: Be part of innovative solutions making a global impact
- 🌍 Global Exposure: Work with international teams and clients
- 📈 Career Growth: Clear pathways for professional advancement
- 🧘 Flexible Work Options: Hybrid and remote flexibility to support work-life balance
- 💼 Competitive Compensation: Industry-leading salary and benefits
Job Title: Lead Java Developer (Backend)
Experience Required: 8 to 15 Years
Open Positions: 5
Location: Any major metro city (Bengaluru, Pune, Chennai, Kolkata, Hyderabad)
Work Mode: Open to Remote / Hybrid / Onsite
Notice Period: Immediate Joiner / 30 Days or Less
About the Role:
- We are looking for experienced Lead Java Developers who bring not only strong backend development skills but also a product-oriented mindset and leadership capability.
- This is an opportunity to be part of high-impact digital transformation initiatives that go beyond writing code: you'll help shape future-ready platforms and drive meaningful change.
- This role is embedded within a forward-thinking digital engineering team that thrives on co-innovation, lean delivery, and end-to-end ownership of platforms and products.
Key Responsibilities:
- Design, develop, and implement scalable backend systems using Java and Spring Boot.
- Collaborate with product managers, designers, and engineers to build intuitive and reliable digital products.
- Advocate and implement engineering best practices: SOLID principles, OOP, clean code, CI/CD, TDD/BDD.
- Lead Agile-based development cycles with a focus on speed, quality, and customer outcomes.
- Guide and mentor team members, fostering technical excellence and ownership.
- Utilize cloud platforms and DevOps tools to ensure performance and reliability of applications.
What We’re Looking For:
- Proven experience in Java backend development (Spring Boot, Microservices).
- 8+ Years of hands-on engineering experience with at least 2+ years in a Lead role.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Good understanding of containerization and orchestration tools like Docker and Kubernetes.
- Exposure to DevOps and Infrastructure as Code practices.
- Strong problem-solving skills and the ability to design solutions from first principles.
- Prior experience in product-based or startup environments is a big plus.
Ideal Candidate Profile:
- A tech enthusiast with a passion for clean code and scalable architecture.
- Someone who thrives in collaborative, transparent, and feedback-driven environments.
- A leader who takes ownership beyond individual deliverables to drive overall team and project success.
Interview Process
- Initial Technical Screening (via platform partner)
- Technical Interview with Engineering Team
- Client-facing Final Round
Additional Info:
- Targeting profiles from product/startup backgrounds.
- Strong preference for candidates with under 1 month of notice period.
- Interviews will be fast-tracked for qualified profiles.

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM – 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers.
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
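The Raw → Silver → Gold layering above can be illustrated in plain Python. In practice each layer would be a Spark/Databricks job orchestrated by Airflow; the record fields here are invented for the example.

```python
# Medallion-architecture sketch: Raw (as ingested) -> Silver (cleaned, typed)
# -> Gold (business-level aggregates). Illustrative only.

raw = [
    {"user": " Alice ", "amount": "100"},
    {"user": "bob", "amount": "x"},      # malformed record
    {"user": "Alice", "amount": "50"},
]

def to_silver(records: list[dict]) -> list[dict]:
    """Silver layer: normalize strings, cast types, drop malformed rows."""
    out = []
    for r in records:
        try:
            out.append({"user": r["user"].strip().lower(),
                        "amount": float(r["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine these for review
    return out

def to_gold(records: list[dict]) -> dict:
    """Gold layer: business aggregate (total spend per user)."""
    totals: dict = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

gold = to_gold(to_silver(raw))
```

Keeping each layer as a pure function of the previous one is what makes the migration of existing workloads into such a framework testable layer by layer.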
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG

Dear,
We are excited to inform you about an exclusive opportunity at Xebia for a Senior Backend Engineer role.
📌 Job Details:
- Role: Senior Backend Engineer
- Shift: 1 PM – 10 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or up to 30 days
🔹 Job Responsibilities:
✅ Design and develop scalable, reliable, and maintainable backend solutions
✅ Work on event-driven microservices architecture
✅ Implement REST APIs and optimize backend performance
✅ Collaborate with cross-functional teams to drive innovation
✅ Mentor junior and mid-level engineers
🔹 Required Skills:
✔ Backend Development: Scala (preferred), Java, Kotlin
✔ Cloud: AWS or GCP
✔ Databases: MySQL, NoSQL (Cassandra)
✔ DevOps & CI/CD: Jenkins, Terraform, Infrastructure as Code
✔ Messaging & Caching: Kafka, RabbitMQ, Elasticsearch
✔ Agile Methodologies: Scrum, Kanban
⚠ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response! Also, feel free to refer anyone in your network who might be a good fit.
Best regards,
Vijay S
Assistant Manager - TAG
Overview:
We are seeking a talented and experienced GCP Data Engineer with strong expertise in Teradata, ETL, and Data Warehousing to join our team. As a key member of our Data Engineering team, you will play a critical role in developing and maintaining data pipelines, optimizing ETL processes, and managing large-scale data warehouses on the Google Cloud Platform (GCP).
Responsibilities:
- Design, implement, and maintain scalable ETL pipelines on GCP.
- Develop and manage data warehouse solutions using Teradata and cloud-based technologies (BigQuery, Cloud Storage, etc.).
- Build and optimize high-performance data pipelines for real-time and batch data processing.
- Integrate, transform, and load large datasets into GCP-based data lakes and data warehouses.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Write efficient, clean, and reusable code for ETL processes and data workflows.
- Ensure data quality, consistency, and integrity across all pipelines and storage solutions.
- Implement data governance practices and ensure security and compliance of data processes.
- Monitor and troubleshoot data pipeline performance and resolve issues proactively.
- Participate in the design and implementation of scalable data architectures using GCP services like BigQuery, Cloud Dataflow, and Cloud Pub/Sub.
- Optimize and automate data workflows for continuous improvement.
- Maintain up-to-date documentation of data pipeline architectures and processes.
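One way to approach the data-quality responsibilities above is a simple validation gate run before loading downstream. The checks and column names here are hypothetical; real pipelines would typically use a framework such as Great Expectations or built-in warehouse data-quality rules.

```python
# Illustrative data-quality gate: row-level checks (nulls, duplicates,
# value ranges) computed before a batch is promoted downstream.

def validate(rows: list[dict]) -> dict:
    """Return counts of quality violations plus an overall pass flag."""
    ids = [r.get("id") for r in rows]
    issues = {
        "null_ids": sum(1 for i in ids if i is None),
        "duplicate_ids": len(ids) - len(set(ids)),
        "negative_amounts": sum(1 for r in rows if (r.get("amount") or 0) < 0),
    }
    passed = all(v == 0 for v in issues.values())
    issues["passed"] = passed
    return issues

# Example batch with one duplicate id and one negative amount.
report = validate([{"id": 1, "amount": 10.0}, {"id": 1, "amount": -5.0}])
```

A failing report would halt the load and alert the pipeline owner rather than silently propagating bad data, which is the proactive monitoring posture the role calls for.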
Requirements:
Technical Skills:
- Google Cloud Platform (GCP): Extensive experience with BigQuery, Cloud Storage, Cloud Dataflow, and Cloud Composer.
- ETL Tools: Expertise in building ETL pipelines using tools such as Apache NiFi, Apache Beam, or custom Python-based scripts.
- Data Warehousing: Strong experience working with Teradata for data warehousing, including data modeling, schema design, and performance tuning.
- SQL: Advanced proficiency in SQL and relational databases, particularly in the context of Teradata and GCP environments.
- Programming: Proficient in Python, Java, or Scala for building and automating data processes.
- Data Architecture: Knowledge of best practices in designing scalable data architectures for both structured and unstructured data.
Experience:
- Proven experience as a Data Engineer, with a focus on building and managing ETL pipelines and data warehouse solutions.
- Hands-on experience in data modeling and working with complex, high-volume data in a cloud-based environment.
- Experience with data migration from on-premises to cloud environments (Teradata to GCP).
- Familiarity with Data Lake concepts and technologies.
- Experience with version control systems like Git and working in Agile environments.
- Knowledge of CI/CD and automation processes in data engineering.
Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication skills, both verbal and written, for interacting with technical and non-technical teams.
- Ability to work collaboratively in a fast-paced, cross-functional team environment.
- Strong attention to detail and ability to prioritize tasks.
Preferred Qualifications:
- Experience with other GCP tools such as Dataproc, Bigtable, Cloud Functions.
- Knowledge of Terraform or similar infrastructure-as-code tools for managing cloud resources.
- Familiarity with data governance frameworks and data privacy regulations.
- Certifications in Google Cloud or Teradata are a plus.
Benefits:
- Competitive salary and performance-based bonuses.
- Health, dental, and vision insurance.
- 401(k) with company matching.
- Paid time off and flexible work schedules.
- Opportunities for professional growth and development.
Job Title: Senior Automation Engineer (API & Cloud Testing)
Job Type: Full-Time
Job Location: Bangalore, Pune
Work Mode: Hybrid
Experience: 8+ years (Minimum 5 years in Automation)
Notice Period: 0-30 days
About the Role:
We are looking for an experienced Senior Automation Engineer to join our team. The ideal candidate should have extensive expertise in API testing, Node.js, Cypress, Postman/Newman, and cloud-based platforms (AWS/Azure/GCP). The role involves automating workflows in ArrowSphere, optimizing test automation pipelines, and ensuring software quality in an Agile environment. The selected candidate will work closely with teams in France, requiring strong communication skills.
Key Responsibilities:
Automate ArrowSphere Workflows: Develop and implement automation strategies for ArrowSphere Public API workflows to enhance efficiency.
Support QA Team: Guide and assist QA engineers in improving automation strategies.
Optimize Test Automation Pipeline: Design and maintain a high-performance test automation framework.
Minimize Test Flakiness: Identify root causes of flaky tests and implement solutions to improve software reliability.
Ensure Software Quality: Actively contribute to maintaining the software’s high standards and cloud service innovation.
Mandatory Skills:
API Testing: Strong knowledge of API testing methodologies.
Node.js: Experience in automation with Cypress, Postman, and Newman.
Cloud Platforms: Working knowledge of AWS, Azure, or GCP (certification is a plus).
Agile Methodologies: Hands-on experience working in an Agile environment.
Technical Communication: Ability to interact with international teams effectively.
Technical Skills:
Cypress: Expertise in front-end automation with Cypress, ensuring scalable and reliable test scripts.
Postman & Newman: Experience in API testing and test automation integration within CI/CD pipelines.
Jenkins: Ability to set up and maintain CI/CD pipelines for automation.
Programming: Proficiency in Node.js (PHP knowledge is a plus).
AWS Architecture: Understanding of AWS services for development and testing.
Git Version Control: Experience with Git workflows (branching, merging, pull requests).
Scripting & Automation: Knowledge of Bash/Python for scripting and automating tasks.
Problem-Solving: Strong debugging skills across front-end, back-end, and database.
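One common tactic for the flaky-test work above is retrying a test a bounded number of times with a delay, so a transient failure (network blip, race condition) doesn't fail the suite while the root cause is investigated. This sketch is framework-agnostic Python; Cypress and most CI runners offer equivalent built-in retry options, and the names here are illustrative.

```python
# Bounded retry decorator for flaky checks: re-run on failure, then give up.
import time
from functools import wraps

def retry(times: int = 3, delay: float = 0.0):
    """Retry a check up to `times` attempts, sleeping `delay` seconds between."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            last = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:  # broaden in real suites
                    last = exc
                    time.sleep(delay)  # back off before the next attempt
            raise last
        return wrapper
    return decorator

calls = {"n": 0}

@retry(times=3)
def flaky_check():
    calls["n"] += 1
    # Simulated transient failure: passes from the second attempt onward.
    assert calls["n"] >= 2, "transient failure on first attempt"
    return "ok"
```

Retries contain the blast radius; pairing them with a flakiness dashboard (which tests needed retries, and how often) is what actually drives the root-cause fixes.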
Preferred Qualifications:
Cloud Certification (AWS, Azure, or GCP) is an added advantage.
Experience working with international teams, particularly in Europe.
Job Title: ServiceNow ITOM Developer
Location: Hyderabad, India
Experience: 6 - 8 years
Job Summary:
We are seeking an experienced ServiceNow ITOM Developer with 6+ years of hands-on experience in ITOM development. The ideal candidate should have strong expertise in ServiceNow ITOM Suite, CMDB, and exposure to cloud infrastructure such as AWS, Azure, Google Cloud, or Oracle.
Key Responsibilities:
- Design, develop, and implement ServiceNow ITOM solutions, including Discovery, Service Mapping, Event Management, Cloud Management, and Orchestration.
- Configure and maintain CMDB (Configuration Management Database), ensuring data integrity and compliance with ITIL best practices.
- Develop and implement custom workflows, automation, and integrations with external systems.
- Work closely with stakeholders to gather requirements, design solutions, and troubleshoot issues related to ITOM functionalities.
- Optimize ServiceNow ITOM modules to enhance performance and efficiency.
- Implement and maintain discovery mechanisms for on-prem and cloud infrastructure (AWS, Azure, Google Cloud, Oracle).
- Ensure best practices, security, and compliance standards are followed while developing solutions.
- Provide technical expertise and guidance to junior developers and cross-functional teams.
Required Skills & Experience:
- 6+ years of hands-on experience in ServiceNow Development, specifically in ITOM.
- Strong expertise in ServiceNow ITOM Suite, including Discovery, Service Mapping, Orchestration, and Event Management.
- In-depth knowledge of CMDB architecture, CI relationships, data modeling, and reconciliation processes.
- Experience integrating ServiceNow ITOM with cloud platforms (AWS, Azure, Google Cloud, Oracle).
- Proficiency in JavaScript, REST/SOAP APIs, and scripting within the ServiceNow platform.
- Strong understanding of ITIL processes and best practices related to ITOM.
- Experience working with MID Servers, probes, sensors, and identification rules.
- Ability to troubleshoot and resolve complex issues related to ServiceNow ITOM configurations.
Nice-to-Have Skills:
- ServiceNow Certified Implementation Specialist – ITOM certification.
- Experience in CI/CD pipeline integration with ServiceNow.
- Knowledge of ServiceNow Performance Analytics and Reporting.
- Hands-on experience with Terraform or other Infrastructure as Code (IaC) tools.

About the Role:
We are seeking an experienced and driven Lead Backend Engineer to oversee and elevate our backend architecture. This role will focus deeply on backend systems, collaborating closely with the founder and core team to turn strategic goals into reality through backend excellence. The ideal candidate will combine strong technical expertise with leadership capabilities, driving backend development while ensuring system security, scalability, and performance.
Key Responsibilities:
1. Backend Development Leadership
- Ownership of Backend Systems: Lead the backend development process, aligning it with the company's broader goals. Gain a full understanding of the existing backend infrastructure, especially in the initial phase.
- Roadmap Development: Within the first three months, build a detailed roadmap that addresses backend "must-do" tasks (e.g., major bugs, security vulnerabilities, data leakage prevention) alongside longer-term improvements. Continuously update the roadmap based on strategic directions from board meetings.
2. Strategic Planning and Execution
- Backend Strategy Implementation: Translate high-level strategies into backend tasks, ensuring clarity on how each piece fits into the company's larger goals.
- Sprint and Task Management: Lead the backend sprint planning process, break down tasks into manageable components, and ensure accurate estimations for efficient execution.
3. Team Leadership and Development
- Mentoring and Growth: Lead backend developers, nurturing their growth while ensuring a culture of responsibility and continuous improvement.
- Process Optimization: Regularly assess backend processes, identifying areas to streamline development and ensure adherence to best practices.
4. Security and Quality Assurance
- Security Oversight: Ensure the backend systems are fortified against potential threats, setting the highest standards for security in every aspect of development.
- Quality Assurance: Maintain top-tier backend development standards, ensuring the system remains resilient, scalable, and efficient under load.
5. Innovation and Continuous Learning
- Real-time Strategy Input: Offer insights during strategic discussions on backend challenges, providing quick, effective solutions when needed.
- Automation and Efficiency: Implement backend automation practices, from CI/CD pipelines to other efficiency-boosting tools that improve the backend workflow.
6. Research and Communication
- Technology Exploration: Stay ahead of backend trends and technologies, providing research and recommendations to stakeholders. Break down complex backend issues into understandable, actionable points.
7. Workplace Expectations
- Ownership Mentality: Embody a strong sense of ownership over the backend systems, with a proactive attitude that eliminates the need for close follow-up.
- On-site Work: Work from the office is required to foster close collaboration with the team.
Tech Stack & Skills
Must-Have:
- Programming Languages: Node.js & JavaScript (TypeScript or plain JavaScript)
- Databases: Firestore, MongoDB, NoSQL
- Cloud Platforms: Google Cloud Platform (GCP), AWS
- Microservices: Google Cloud Functions
- Containerization: Docker (creation, hosting, maintenance, etc.)
- Deployment & Orchestration: Google Cloud Run
- Messaging & Task Management: Pub/Sub, Google Cloud Tasks
- Security: GCP/AWS Security (IAMs)
Good-to-Have:
- Programming Languages: Python
Qualifications:
- Proven experience as a Lead Backend Engineer or similar role, focusing on backend systems.
- Expertise in the backend technologies specified.
- Strong understanding of CI/CD pipelines and backend security best practices.
- Excellent problem-solving skills and an ability to think critically about backend challenges.
- Strong leadership qualities with the ability to mentor and manage backend developers.
- A passion for continuous learning and applying new backend technologies.
- A high degree of ownership over backend systems, with the ability to work independently.
We are seeking a skilled and motivated Software Engineer with over 3 years of experience in designing and developing web-based applications using Node.js.
Key Responsibilities
- Design, develop, and maintain web-based applications using Node.js.
- Build scalable, high-performance RESTful APIs using Express.js or Restify frameworks.
- Develop and maintain robust SQL database systems, leveraging Sequelize ORM.
- Ensure responsiveness of applications across various devices and platforms.
- Collaborate with cross-functional teams during the product development lifecycle, including prototyping, hardening, and testing phases.
- Work with real-time communication technologies and ensure seamless integration.
- Learn and adapt to alternative technologies as needed to meet project requirements.
Required Skills & Experience
- 3+ years of experience in web application development using Node.js.
- Proficiency with frameworks such as Express.js or Restify.
- Strong expertise in SQL databases and experience with Sequelize ORM.
- In-depth understanding of JavaScript, browser technologies, and real-time communication.
- Hands-on experience in developing responsive web applications.
- Experience with React Native (a plus).
- Proficiency in Java.
- Familiarity with product development lifecycle, including prototyping, testing, and deployment.
Additional Skills & Experience
- Experience with NoSQL databases such as MongoDB or Cassandra.
- Knowledge of internationalization (i18n) and latest UI/UX design trends.
- Familiarity with JavaScript libraries/frameworks like ReactJS or VueJS.
- Experience integrating payment gateways for various countries.
- Strong communication skills and ability to facilitate group discussions effectively.
- Eagerness to contribute to product functionality and user experience designs.
Education Requirements
- Bachelor's or Master's degree in Computer Science or a related field.

We are seeking a highly skilled and experienced Offshore Data Engineer . The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
bachelor's or master's degree in computer science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
Proven experience with GCP services, including BigQuery, as well as Databricks, Airflow, Spark, and dbt.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
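Testing and validation of data pipelines, mentioned above, often reduces to schema and null checks on each batch before it is loaded. A minimal stdlib-Python sketch (the schema, field names, and sample records are illustrative, not from the posting):

```python
# Minimal data-quality check for a pipeline batch: verify required
# fields exist, are non-null, and have the expected types.
REQUIRED_SCHEMA = {"order_id": int, "amount": float, "country": str}

def validate_batch(rows):
    """Return (valid_rows, errors) for a list of dict records."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        problems = [
            f"{field}: expected {typ.__name__}"
            for field, typ in REQUIRED_SCHEMA.items()
            if not isinstance(row.get(field), typ)
        ]
        if problems:
            errors.append((i, problems))
        else:
            valid.append(row)
    return valid, errors

batch = [
    {"order_id": 1, "amount": 9.99, "country": "IN"},
    {"order_id": "2", "amount": None, "country": "IN"},  # bad types
]
valid, errors = validate_batch(batch)
```

In a real pipeline the same checks would typically run inside an Airflow task or a dbt test rather than inline.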
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
Kindly share your details only if you have not applied recently and are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Key Responsibilities:
- Lead Data Engineering Team: Provide leadership and mentorship to junior data engineers and ensure best practices in data architecture and pipeline design.
- Data Pipeline Development: Design, implement, and maintain end-to-end ETL (Extract, Transform, Load) processes to support analytics, reporting, and data science activities.
- Cloud Architecture (GCP): Architect and optimize data infrastructure on Google Cloud Platform (GCP), ensuring scalability, reliability, and performance of data systems.
- CI/CD Pipelines: Implement and maintain CI/CD pipelines using Jenkins and other tools to ensure the seamless deployment and automation of data workflows.
- Data Warehousing: Design and implement data warehousing solutions, ensuring optimal performance and efficient data storage using technologies like Teradata, Oracle, and SQL Server.
- Workflow Orchestration: Use Apache Airflow to orchestrate complex data workflows and scheduling of data pipeline jobs.
- Automation with Terraform: Implement Infrastructure as Code (IaC) using Terraform to provision and manage cloud resources.
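The end-to-end ETL responsibility above can be sketched, independent of any orchestrator, as three composable stages. This is a toy pure-Python illustration (all names and the sample data are invented); in production each stage would target BigQuery, a warehouse table, or an Airflow operator:

```python
import csv
import io

def extract(raw_csv):
    """Extract: parse raw CSV text into dict records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Transform: cast types and derive a revenue column."""
    return [
        {"sku": r["sku"], "revenue": int(r["qty"]) * float(r["price"])}
        for r in rows
    ]

def load(rows, warehouse):
    """Load: append records to an in-memory 'warehouse' table."""
    warehouse.extend(rows)
    return len(rows)

raw = "sku,qty,price\nA1,2,10.0\nB2,1,5.5\n"
table = []
loaded = load(transform(extract(raw)), table)
```

Keeping the three stages as separate functions is what makes pipelines testable in isolation, which orchestration tools like Airflow then schedule and retry per stage.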
Share CV to
Thirega@ vysystems dot com - WhatsApp - 91Five0033Five2Three
About the Role -
We’re seeking a seasoned Senior Java Engineer to drive the development of our B2B product suite. In this pivotal role, you will build robust, scalable, and intuitive applications that empower customers to seamlessly handle money movement transactions, including international payments. You will ensure technical excellence by advocating best practices, prioritizing security, enhancing development processes, and championing a quality-first mindset.
What You'll Do -
• Design and Development: Create robust, efficient, and scalable backend services using Java and Spring Boot.
• API Development: Design, build, and maintain APIs for web and mobile applications.
• Performance and Security: Ensure application performance, scalability, and security best practices.
• Cloud Integration: Collaborate with cross-functional teams to integrate cloud services into our backend infrastructure.
• Code Quality: Write high-quality, maintainable code that adheres to industry standards.
• Mentorship: Support junior and mid-level team members, conduct code reviews, and foster a culture of continuous improvement.
What You’ll Need -
• 5+ years of professional experience as a Backend Engineer.
• Experience showing strong problem-solving skills and a passion for creating user-centric solutions.
• Core Java proficiency. A strong command of the Java language, including object-oriented programming, design patterns, exception handling, and memory management.
• Spring Framework (including Spring Boot)- In-depth knowledge of Spring, especially Spring Boot for building efficient and scalable backend applications.
• Understanding of Spring components like controllers, services, repositories, and security.
• RESTful API Development: Proficiency in designing and implementing RESTful APIs.
Bonus Points -
• Mastery over Java’s core APIs, such as collections, streams, and concurrency frameworks.
• Experience within a B2B fintech environment would be highly desirable
• Experience with database query performance optimization and Kafka messaging systems
We welcome people from all backgrounds who seek the opportunity to help build a future where we connect the dots for international property payments. If you have the curiosity, passion, and collaborative spirit, work with us, and let’s move the world of PropTech forward, together.
Redpin, Currencies Direct and TorFX are proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, colour, religion, national origin, disability, protected veteran status, age, or any other characteristic protected by law.
Key Responsibilities:
- Azure Cloud Sales & Solutioning: Lead Microsoft Azure cloud sales efforts across global regions, delivering solutions for applications, databases, and SAP servers based on customer requirements.
- Customer Engagement: Act as a trusted advisor for customers, leading them through their cloud transformation by understanding their requirements and recommending suitable cloud solutions.
- Lead Generation & Cost Optimization: Generate leads independently, provide cost-optimized Azure solutions, and continuously work to maximize value for clients.
- Sales Certifications: Hold basic Microsoft sales certifications (Foundation & Business Professional).
- Project Management: Oversee and manage Azure cloud projects, including setting up timelines, guiding technical teams, and communicating progress to customers. Ensure the successful completion of project objectives.
- Cloud Infrastructure Expertise: Maintain a deep understanding of Azure cloud infrastructure and services, including migrations, disaster recovery (DR), and cloud budgeting.
- Billing Management: Manage Azure billing processes, including subscription-based invoicing, purchase orders, renewals, license billing, and tracking expiration dates.
- Microsoft License Sales: Expert in selling Microsoft licenses such as SQL, Windows, and Office 365.
- Client Collaboration: Schedule meetings with internal teams and clients to align on project requirements and ensure effective communication.
- Customer Management: Track leads, follow up on calls, and ensure customer satisfaction by resolving issues and optimizing cloud resources. Provide regular updates on Microsoft technologies and programs.
- Field Sales: Participate in presales meetings and client visits to gather insights and propose cloud solutions.
- Internal Collaboration: Work closely with various internal departments to achieve project results and meet client expectations.
Qualifications:
- 1-3+ years of experience selling or consulting with corporate, public sector, or enterprise customers on Microsoft Azure cloud.
- Proficient in Azure cost optimization, cloud infrastructure, and sales of cloud solutions to end customers.
- Experience in generating leads and tracking sales progress.
- Project management experience with strong organizational skills.
- Ability to work collaboratively with internal teams and customers.
- Strong communication and problem-solving skills.
- SHIFT: DAY SHIFT
- WORKING DAYS: MON-SAT
- LOCATION: HYDERABAD
- WORK MODEL: WORK FROM THE OFFICE
REQUIRED QUALIFICATIONS:
- A Bachelor's degree (Graduation) in Computer Science or equivalent
BENEFITS FROM THE COMPANY:
- High chance of Career Growth.
- Flexible working hours and the best infrastructure.
- You will be surrounded by passionate team members.

NASDAQ listed, Service Provider IT Company
Job Summary:
As a Cloud Architect, you will play a pivotal role in designing, implementing, and maintaining our multi-cloud infrastructure. You will work closely with various teams to ensure our cloud solutions are scalable, secure, and efficient across different cloud providers. Your expertise in multi-cloud strategies, database management, and microservices architecture will be essential to our success.
Key Responsibilities:
- Design and implement scalable, secure, and high-performance cloud architectures across multiple cloud platforms (AWS, Azure, Google Cloud Platform).
- Lead and manage cloud migration projects, ensuring seamless transitions between on-premises and cloud environments.
- Develop and maintain cloud-native solutions leveraging services from various cloud providers.
- Architect and deploy microservices using REST and GraphQL to support our application development needs.
- Collaborate with DevOps and development teams to ensure best practices in continuous integration and deployment (CI/CD).
- Provide guidance on database architecture, including relational and NoSQL databases, ensuring optimal performance and security.
- Implement robust security practices and policies to protect cloud environments and data.
- Design and implement data management strategies, including data governance, data integration, and data security.
- Stay up-to-date with the latest industry trends and emerging technologies to drive continuous improvement and innovation.
- Troubleshoot and resolve cloud infrastructure issues, ensuring high availability and reliability.
- Optimize cost and performance across different cloud environments.
Qualifications/ Experience & Skills Required:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: 10 - 15 Years
- Proven experience as a Cloud Architect or in a similar role, with a strong focus on multi-cloud environments.
- Expertise in cloud migration projects, both lift-and-shift and greenfield implementations.
- Strong knowledge of cloud-native solutions and microservices architecture.
- Proficiency in using GraphQL for designing and implementing APIs.
- Solid understanding of database technologies, including SQL, NoSQL, and cloud-based database solutions.
- Experience with DevOps practices and tools, including CI/CD pipelines.
- Excellent problem-solving skills and ability to troubleshoot complex issues.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.
- Deep understanding of cloud security practices and data protection regulations (e.g., GDPR, HIPAA).
- Experience with data management, including data governance, data integration, and data security.
Preferred Skills:
- Certifications in multiple cloud platforms (e.g., AWS Certified Solutions Architect, Google Certified Professional Cloud Architect, Microsoft Certified: Azure Solutions Architect).
- Experience with containerization technologies (Docker, Kubernetes).
- Familiarity with cloud cost management and optimization tools.
Publicis Sapient Overview:
As a Senior Associate in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
Job Summary:
As a Senior Associate L2 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, data wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. You are also required to have hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms.
Role & Responsibilities:
Your role is focused on Design, Development and delivery of solutions involving:
• Data Integration, Processing & Governance
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Implement scalable architectural models for data processing and storage
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time mode
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 5+ years of IT experience with 3+ years in data-related technologies
2. Minimum 2.5 years of experience in Big Data technologies and working exposure in at least one cloud platform on related data services (AWS / Azure / GCP)
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
6. Well-versed working knowledge of data platform related services on at least one cloud platform, IAM, and data security
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4.Performance tuning and optimization of data pipelines
5.CI/CD – Infra provisioning on cloud, auto build & deployment pipelines, code quality
6.Cloud data specialty and other related Big data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Publicis Sapient Overview:
As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
Job Summary:
As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python; experience in data ingestion, integration, data wrangling, computation, and analytics pipelines; and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.
Role & Responsibilities:
Job Title: Senior Associate L1 – Data Engineering
Your role is focused on Design, Development and delivery of solutions involving:
• Data Ingestion, Integration and Transformation
• Data Storage and Computation Frameworks, Performance Optimizations
• Analytics & Visualizations
• Infrastructure & Cloud Computing
• Data Management Platforms
• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
• Build functionality for data analytics, search and aggregation
Experience Guidelines:
Mandatory Experience and Competencies:
# Competency
1. Overall 3.5+ years of IT experience with 1.5+ years in data-related technologies
2. Minimum 1.5 years of experience in Big Data technologies
3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage
4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
Preferred Experience and Knowledge (Good to Have):
# Competency
1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres) with hands-on experience
2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.
3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures
4. Performance tuning and optimization of data pipelines
5. CI/CD – infra provisioning on cloud, auto build & deployment pipelines, code quality
6. Working knowledge of data platform related services on at least one cloud platform, IAM, and data security
7.Cloud data specialty and other related Big data technology certifications
Personal Attributes:
• Strong written and verbal communication skills
• Articulation skills
• Good team player
• Self-starter who requires minimal oversight
• Ability to prioritize and manage multiple tasks
• Process orientation and the ability to define and set up processes
Data Engineering : Senior Engineer / Manager
As a Senior Engineer / Manager in Data Engineering, you will translate client requirements into technical design and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement package solutions, and independently drive design discussions to ensure the overall health of the solution.
Must Have skills :
1. GCP
2. Spark streaming : Live data streaming experience is desired.
3. Any one coding language: Java / Python / Scala
Skills & Experience :
- Overall experience of minimum 5+ years, with at least 4 years of relevant experience in Big Data technologies
- Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow, and other components required in building end-to-end data pipelines; working knowledge of real-time data pipelines is an added advantage
- Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred
- Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.
- Well-versed and working knowledge with data platform related services on GCP
- Bachelor's degree and 6 to 12 years of work experience, or any combination of education, training, and/or experience that demonstrates the ability to perform the duties of the position
Your Impact :
- Data Ingestion, Integration and Transformation
- Data Storage and Computation Frameworks, Performance Optimizations
- Analytics & Visualizations
- Infrastructure & Cloud Computing
- Data Management Platforms
- Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time
- Build functionality for data analytics, search and aggregation
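Ingestion from multiple heterogeneous sources, listed above, usually means normalizing each source's records into one common schema before storage. A minimal sketch assuming two invented sources (a JSON API and a CSV feed; all field names are illustrative):

```python
# Normalize records from heterogeneous sources into one common schema.
def from_api(payload):
    """Adapter for a JSON API payload."""
    return {"user_id": payload["id"], "event": payload["type"]}

def from_csv_row(row):
    """Adapter for a 'user_id,event' CSV line."""
    uid, event = row.split(",")
    return {"user_id": int(uid), "event": event.strip()}

def ingest(api_payloads, csv_rows):
    """Merge both sources into a single normalized batch."""
    batch = [from_api(p) for p in api_payloads]
    batch += [from_csv_row(r) for r in csv_rows]
    return batch

events = ingest([{"id": 7, "type": "click"}], ["8, view"])
```

One adapter function per source keeps the downstream storage and analytics code source-agnostic, which is the same pattern batch and streaming pipelines apply at scale.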

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience.
- 5+ years of experience in a DevOps role, preferably for a SaaS or software company.
- Expertise in cloud computing platforms (e.g., AWS, Azure, GCP).
- Proficiency in scripting languages (e.g., Python, Bash, Ruby).
- Extensive experience with CI/CD tools (e.g., Jenkins, GitLab CI, Travis CI).
- Extensive experience with NGINX and similar web servers.
- Strong knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Ability to work on-call as needed and respond to emergencies in a timely manner.
- Experience with high-transaction e-commerce platforms.
Preferred Qualifications:
- Certifications in cloud computing or DevOps are a plus (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
- Experience in a high-availability, 24x7x365 environment.
- Strong collaboration, communication, and interpersonal skills.
- Ability to work independently and as part of a team.
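Scripting for the operational work described above frequently involves retrying flaky remote calls (deployments, health checks) with exponential backoff. A minimal stdlib-Python helper (function names, attempt count, and delays are illustrative):

```python
import time

def retry(op, attempts=4, base_delay=0.01):
    """Call op(); on failure wait base_delay * 2**n, then retry."""
    for n in range(attempts):
        try:
            return op()
        except Exception:
            if n == attempts - 1:
                raise  # exhausted all attempts
            time.sleep(base_delay * (2 ** n))

# Simulated flaky operation: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

result = retry(flaky)
```

Production tooling would add jitter and only retry specific exception types, but the backoff structure is the same.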

Job Description
Technical lead who will be responsible for development, managing team(s), and monitoring tasks/sprints. They will also work with business analysts (BAs) to gather new requirements and change requests, help resolve application issues, and assist developers when they are stuck.
Responsibilities
· Design and develop application based on the architecture provided by the solution architects.
· Help team members and co-developers achieve their tasks.
· Maintain and monitor new work items and support issues, and assign them to the respective developers.
· Communicate with BA persons and Solution architects for the new requirements and change requests.
· Resolve any support tickets with the help of your team within service timelines.
· Manage sprint to achieve the targets.
Technical Skills
· Microsoft .NET MVC
· .NET Core 3.1 or greater
· C#
· Web API
· Async Programming, Threading, and tasks
· Test Driven Development
· Strong expertise in SQL (table design, programming, optimization)
· Azure Functions
· Azure Storage
· MongoDB, NoSQL
Qualifications/Skills Desired:
· Any Bachelor’s degree relevant to Computer Science. MBA or equivalent is a plus
· Minimum of 8-10 years of IT experience, including managing teams, of which 4-5 years should be as a technical/team lead.
· Strong verbal and written communication skills, with the ability to adapt to many different personalities; conflict resolution skills required
· Must have excellent organizational and time management skills with strong attention to detail
· Confidentiality with privacy-sensitive customer and employee documents
· Strong work ethic - demonstrate good attitude and judgment, discretion, and maintain high level of confidentiality
· Previous experience of customer interactions


Position: Technical Architect
Location: Hyderabad
Experience: 6+ years
Job Summary:
We are looking for an experienced Technical Architect with a strong background in Python, Node.js, and React to lead the design and development of complex and scalable software solutions. The ideal candidate will possess exceptional technical skills, a deep understanding of software architecture principles, and a proven track record of successfully delivering high-quality projects. You should be capable of leading a cross-functional team that's responsible for the full software development life cycle, from conception to deployment with Agile methodologies.
Responsibilities:
● Lead the design, development, and deployment of software solutions, ensuring architectural integrity and high performance.
● Collaborate with cross-functional teams, including developers, designers, and product managers, to define technical requirements and create effective solutions.
● Provide technical guidance and mentorship to development teams, ensuring best practices and coding standards are followed.
● Evaluate and recommend appropriate technologies, frameworks, and tools to achieve project goals.
● Drive continuous improvement by staying updated with industry trends, emerging technologies, and best practices.
● Conduct code reviews, identify areas of improvement, and promote a culture of excellence in software development.
● Participate in architectural discussions, making strategic decisions and aligning technical solutions with business objectives.
● Troubleshoot and resolve complex technical issues, ensuring optimal performance and reliability of software applications.
● Collaborate with stakeholders to gather and analyze requirements, translating them into technical specifications.
● Define and enforce architectural patterns, ensuring scalability, security, and maintainability of systems.
● Lead efforts to refactor and optimize existing codebase, enhancing performance and maintainability.
Qualifications:
● Bachelor's degree in Computer Science, Software Engineering, or a related field. Master's degree is a plus.
● Minimum of 8 years of experience in software development with a focus on Python, Node.js, and React.
● Proven experience as a Technical Architect, leading the design and development of complex software systems.
● Strong expertise in software architecture principles, design patterns, and best practices.
● Extensive hands-on experience with Python, Node.js, and React, including designing and implementing scalable applications.
● Solid understanding of microservices architecture, RESTful APIs, and cloud technologies (AWS, GCP, or Azure).
● Extensive knowledge of JavaScript, web stacks, libraries, and frameworks.
● Ability to create automation test cases and unit test cases (optional)
● Proficiency in database design, optimization, and data modeling.
● Experience with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).
● Excellent problem-solving skills and the ability to troubleshoot complex technical issues.
● Strong communication skills, both written and verbal, with the ability to effectively interact with cross-functional teams.
● Prior experience in mentoring and coaching development teams.
● Strong leadership qualities with a passion for technology innovation.
● Experience using Linux-based development environments with GitHub and CI/CD
● Familiarity with the Atlassian stack (JIRA/Confluence)

Golang Developer
Location: Chennai/ Hyderabad/Pune/Noida/Bangalore
Experience: 4+ years
Notice Period: Immediate/ 15 days
Job Description:
- Must have at least 3 years of experience working with Golang.
- Strong Cloud experience is required for day-to-day work.
- Experience with the Go programming language is necessary.
- Good communication skills are a plus.
- Skills: AWS, GCP, Azure, Golang
Electrum is looking for an experienced and proficient DevOps Engineer. This role will provide you with an opportunity to explore what’s possible in a collaborative and innovative work environment. If your goal is to work with a team of talented professionals that is keenly focused on solving complex business problems and supporting product innovation with technology, you might be our new DevOps Engineer. With this position, you will be involved in building out systems for our rapidly expanding team, enabling the whole engineering group to operate more effectively and iterate at top speed in an open, collaborative environment. The ideal candidate will have a solid background in software engineering and a vivid experience in deploying product updates, identifying production issues, and implementing integrations. The ideal candidate has proven capabilities and experience in risk-taking, is willing to take up challenges, and is a strong believer in efficiency and innovation with exceptional communication and documentation skills.
YOU WILL:
- Plan for future infrastructure as well as maintain & optimize the existing infrastructure.
- Conceptualize, architect, and build:
- 1. Automated deployment pipelines in a CI/CD environment like Jenkins;
- 2. Infrastructure using Docker, Kubernetes, and other serverless platforms;
- 3. Secured network utilizing VPCs with inputs from the security team.
- Work with developers & QA team to institute a policy of continuous integration with automated testing.
- Architect, build, and manage dashboards to provide visibility into delivery and production application functional and performance status.
- Work with developers to institute systems, policies, and workflows which allow for a rollback of deployments.
- Triage releases of applications/hotfixes to the production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Maintain 24/7 on-call rotation to respond and support troubleshooting of issues in production.
- Assist developers and on-call engineers from other teams with postmortems, follow-up, and review of issues affecting production availability.
- Scale the Electrum platform to handle millions of requests concurrently.
- Reduce Mean Time To Recovery (MTTR), enable High Availability and Disaster Recovery
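Reducing MTTR starts with measuring it: the mean of (recovery time minus detection time) across incidents. A minimal stdlib sketch of the metric (incident timestamps are invented examples):

```python
from datetime import datetime

def mttr_minutes(incidents):
    """incidents: list of (detected_at, recovered_at) datetime pairs.
    Returns the mean time to recovery in minutes."""
    durations = [(r - d).total_seconds() / 60 for d, r in incidents]
    return sum(durations) / len(durations)

incidents = [
    (datetime(2024, 1, 1, 10, 0), datetime(2024, 1, 1, 10, 30)),  # 30 min
    (datetime(2024, 1, 2, 9, 0), datetime(2024, 1, 2, 9, 10)),    # 10 min
]
avg = mttr_minutes(incidents)
```

In practice the timestamps would come from an incident tracker or APM tool (e.g., DataDog events) rather than being hard-coded.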
PREREQUISITES:
- Bachelor’s degree in engineering, computer science, or related field, or equivalent work experience.
- Minimum of six years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other services provided by AWS.
- At least 2 years of experience in building and owning serverless infrastructure.
- At least 2 years of scripting experience in Python (preferred) and Shell, including web application deployment systems and continuous integration tools (e.g., Ansible).
- Experience building a multi-region highly available auto-scaling infrastructure that optimizes performance and cost.
- Experience in automating the provisioning of AWS infrastructure as well as automation of routine maintenance tasks.
- Must have prior experience automating deployments to production and lower environments.
- Worked on providing solutions for major automation with scripts or infrastructure.
- Experience with APM tools such as DataDog and log management tools.
- Experience in designing and implementing Essential Functions System Architecture Process; establishing and enforcing Network Security Policy (AWS VPC, Security Group) & ACLs.
- Experience establishing and enforcing:
- 1. System monitoring tools and standards
- 2. Risk Assessment policies and standards
- 3. Escalation policies and standards
- Excellent DevOps engineering, team management, and collaboration skills.
- Advanced knowledge of programming languages such as Python and writing code and scripts.
- Experience or knowledge in - Application Performance Monitoring (APM), and prior experience as an open-source contributor will be preferred.
Main tasks
- Supervision of the CI/CD process for automated builds and deployments of web services, web applications, and desktop tools in the cloud and container environment
- Responsibility of the operations part of a DevOps organization especially for development in the environment of container technology and orchestration, e.g. with Kubernetes
- Installation, operation, and monitoring of web applications in cloud data centers, both for development and testing and for the operation of our own production cloud
- Implementation of installations of the solution especially in the container context
- Introduction, maintenance and improvement of installation solutions for development in the desktop and server environment as well as in the cloud and with on-premise Kubernetes
- Maintenance of the system installation documentation and implementation of trainings
- Execution of internal software tests and support of involved teams and stakeholders
- Hands-on experience with Azure DevOps.
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
- Experience in software
- Installation and administration of Linux and Windows systems including network and firewalling aspects
- Experience with build and deployment automation using tools like Jenkins, Gradle, Argo, AnangoDB, or similar, as well as system scripting (Bash, PowerShell, etc.)
- Interest in operation and monitoring of applications in virtualized and containerized environments in cloud and on-premise
- Server environments, especially application, web-and database servers
- Knowledge in VMware/K3D/Rancer is an advantage
- Good spoken and written knowledge of English


Looking for a technical lead in .NET with good experience in the .NET domain and a cloud platform, along with data structures and algorithms.
Looking only for immediate joiners in the Hyderabad region.
Key Skills Required for Lead DevOps Engineer
- Containerization Technologies: Docker, Kubernetes, OpenShift
- Cloud Technologies: AWS/Azure, GCP
- CI/CD Pipeline Tools: Jenkins, Azure DevOps
- Configuration Management Tools: Ansible, Chef
- SCM Tools: Git, GitHub, Bitbucket
- Monitoring Tools: New Relic, Nagios, Prometheus
- Cloud Infra Automation: Terraform
- Scripting Languages: Python, Shell, Groovy
· Ability to decide the architecture and tools for the project as per availability
· Sound knowledge of deployment strategies and the ability to define timelines
· Team handling skills are a must
· Debugging skills are an advantage
· Good to have knowledge of databases like MySQL, PostgreSQL
· It is advantageous to be familiar with Kafka and RabbitMQ
· Good to have knowledge of web servers to deploy web applications
· Good to have knowledge of code quality checking tools like SonarQube and vulnerability scanning
· An advantage to have experience in DevSecOps
Note: Tools mentioned in bold are a must and others are added advantage
POSITION SUMMARY:
We are looking for a passionate, high-energy individual to help build and manage the infrastructure network that powers the Product Development Labs for F5 Inc. The F5 Infra Engineer plays a critical role in our Product Development team by providing valuable services and tools for the F5 Hyderabad Product Development Lab. The Infra team supports both production systems and customized/flexible testing environments used by Test and Product Development teams. As an Infra Engineer, you'll have the opportunity to work with cutting-edge technology alongside talented individuals. The ideal candidate will have experience in private and public cloud (AWS, Azure, GCP, OpenStack), storage, backup, VMware, KVM, Xen, and Hyper-V hypervisor server administration, networking, and automation in a data center operations environment at global enterprise scale with Kubernetes and OpenShift container platforms.
EXPERIENCE
7- 9+ Years – Software Engineer III
PRIMARY RESPONSIBILITIES:
- Drive the design, project build, infrastructure setup, monitoring, measurement, and improvement of the quality of services provided, including network and virtual instance services from OpenStack, VMware VIO, public and private clouds, and DevOps environments.
- Work closely with customers to understand requirements and deliver them on time.
- Work closely with F5 architects and vendors to understand emerging technologies and the F5 product roadmap, and how they would benefit the Infra team and its users.
- Work closely with the team and complete deliverables on time.
- Consult with testers, application, and service owners to design scalable, supportable network infrastructure to meet usage requirements.
- Assume ownership of large/complex systems projects; mentor Lab Network Engineers in best practices for ongoing maintenance and scaling of large/complex systems.
- Drive automation efforts for the configuration and maintainability of the public/private cloud.
- Lead product selection for replacement or new technologies.
- Address user tickets in a timely manner for the covered services.
- Deploy, manage, and support production and pre-production environments for our core systems and services.
- Migrate and consolidate infrastructure.
- Design and implement major service and infrastructure components.
- Research, investigate, and define new areas of technology to enhance existing services or new service directions.
- Evaluate the performance of services and infrastructure; tune and re-evaluate the design and implementation of current source code and system configuration.
- Create and maintain scripts and tools to automate the configuration, usability, and troubleshooting of the supported applications and services.
- Take ownership of activities and new initiatives.
- Provide global infra support from India for product development teams.
- Provide on-call support on a rotational basis across global time zones.
- Manage vendors for the latest hardware and software evaluations to keep systems up to date.
KNOWLEDGE, SKILLS AND ABILITIES:
- In-depth, multi-disciplined knowledge of storage, compute, network, DevOps, and the latest cutting-edge technologies.
- Multi-cloud: AWS, Azure, GCP, OpenStack; DevOps operations.
- IaaS (Infrastructure as a Service), Metal as a Service, and platform services.
- Storage: Dell EMC, NetApp, Hitachi, Qumulo, and other storage technologies.
- Hypervisors: VMware, Hyper-V, KVM, Xen, and AHV.
- DevOps: Kubernetes, OpenShift, Docker, and other container and orchestration platforms.
- Automation: scripting experience in Python/Shell/Golang, full-stack development, and application deployment.
- Tools: Jenkins, Splunk, Kibana, Terraform, Bitbucket, Git, CI/CD configuration.
- Data center operations: racking, stacking, cable matrix, solution design, and solutions architecture.
- Networking skills: Cisco/Arista switches and routers; experience with cable matrix design and pathing (fiber/copper).
- Experience in SAN/NAS storage (EMC/Qumulo/NetApp and others).
- Experience with Red Hat Ceph storage.
- Working knowledge of Linux, Windows, and hypervisor operating systems and virtual machine technologies.
- SME (subject matter expert) for all cutting-edge technologies.
- Certified data center architect professional and Storage Expert level professional experience.
- A solid understanding of high-availability systems, redundant networking, and multipathing solutions.
- Proven problem resolution related to network infrastructure; judgment, negotiating, and decision-making skills, along with excellent written and oral communication skills.
- Working experience in object, block, and file storage technologies.
- Experience in backup technologies and backup administration.
- Dell/HP/Cisco UCS server administration is an additional advantage.
- Ability to quickly learn and adopt new technologies.
- Strong experience with and exposure to open-source platforms.
- Working experience with monitoring tools such as Zabbix, Nagios, Datadog, etc.
- Working experience with bare-metal services and OS administration.
- Working experience with cloud connectivity such as AWS IPsec, Azure ExpressRoute, GCP VPN tunnels, etc.
- Working experience with software-defined networking (VMware NSX, SDN, Open vSwitch, etc.).
- Working experience with systems engineering and Linux/Unix administration.
- Database administration experience with PostgreSQL, MySQL, NoSQL.
- Automation/configuration management using Puppet, Chef, or an equivalent.
- DevOps operations: Kubernetes, containers, Docker, and Git repositories.
- Experience in build system processes, code inspection, and delivery methodologies.
- Knowledge of creating operational dashboards and execution lanes.
- Experience and knowledge of DNS, DHCP, LDAP, AD, domain controller services, and PXE services.
- SRE experience: responsibility for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.
- Vendor support: OEM upgrades, coordinating technical support, and troubleshooting experience.
- Experience handling on-call support and hierarchy processes.
- Knowledge of scale-out and scale-in architecture.
- Working experience with ITSM/process management tools like ServiceNow, Jira, Jira Align.
- Knowledge of Agile and Scrum principles.
- Working experience with ServiceNow.
- Knowledge sharing, transition experience, and self-learning behavior.
mavQ is seeking a motivated Lead FullStack Developer to join our team. You will be an integral member of the Professional Services team, working on the recently acquired mavQ Electronic Research Administration products, built on a REST-based Java platform running in Tomcat. The work is about 70% implementation and new development and 30% maintenance.
Skills Required:
- At least 7 years of experience with Frontend and Backend Technologies.
- Experience in Java, Spring, API Integration, Angular, Frontend Development.
- Ability to work with caching systems such as Redis.
- Good understanding of cloud computing & Distributed systems.
- Experience in people management.
- Capability of working within a budget of hours and completing projects by a deadline to appropriate quality standards
- Communicate clearly and ask clarifying questions; dig deeper.
- Ability to work from visual and functional specifications
- Work with a positive attitude even when circumstances may be unfavorable
- Understand RESTful APIs, MVC concepts, and how to effectively use version control systems
- Ability to work effectively in a team of developers and representatives of other functional groups, such as design
- Good experience in CI/CD, Kubernetes, test suites, Docker.
- Good understanding of the command line.
- Good with client interaction.
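The caching skill named above (Redis-style caching) can be sketched minimally. The class below is a pure-Python in-memory TTL cache used only as a stand-in for the set-with-expiry/get pattern a system such as Redis provides; all names here are illustrative, not part of any mavQ codebase.

```python
import time

class TTLCache:
    """Minimal in-memory TTL cache: a pure-Python stand-in used only to
    illustrate the Redis-style pattern (set with expiry; get returns
    None once the entry has expired)."""

    def __init__(self):
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value, ttl_seconds):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # lazily evict expired entries
            return None
        return value

cache = TTLCache()
cache.set("user:42", {"name": "Asha"}, ttl_seconds=60)
```

In production the same set/get-with-expiry calls would go to a shared Redis instance rather than process memory, so multiple service replicas see one cache.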
Roles & Responsibilities:
- Effective problem-solving skills; assist the team with debugging & guidance when needed.
- Responsible for managing a team of 8-10 developers.
- Lead the product’s technical domain with full authority.
- Assist the product & project management teams with setting timelines & priorities for feature development.
- Drive good coding standards & development practices in the team.
- Own the code review process for the team & ensure high quality work.
- Assist with system & architecture design & research into new technologies.
- Be involved with client communications when needed from a product sales perspective.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others
***
Job Description
- Implement IAM policies and configure VPCs to create a scalable and secure network for the application workloads
- Will be client point of contact for High Priority technical issues and new requirements
- Should act as Tech Lead and guide the junior members of team and mentor them
- Work with client application developers to build, deploy and run both monolithic and microservices based applications on AWS Cloud
- Analyze workload requirements and work with IT stakeholders to define proper sizing for cloud workloads on AWS
- Build, Deploy and Manage production workloads including applications on EC2 instance, APIs on Lambda Functions and more
- Work with IT stakeholders to monitor system performance and proactively improve the environment for scale and security
Qualifications
- Prefer to have at least 5+ years of IT experience implementing enterprise applications
- Should be AWS Solution Architect Associate Certified
- Must have at least 3+ years of experience working as a Cloud Engineer focused on AWS services such as EC2, CloudFront, VPC, CloudWatch, RDS, DynamoDB, Systems Manager, Route 53, WAF, API Gateway, Elastic Beanstalk, ECS, ECR, Lambda, SQS, SNS, S3, Elasticsearch, DocumentDB, IAM, etc.
- Must have a strong understanding of EC2 instances, types and deploying applications to the cloud
- Must have a strong understanding of IAM policies, VPC creation, and other security/networking principles
- Must have thorough experience in on-prem to AWS cloud workload migration
- Should be comfortable using AWS and other migration tools
- Should have experience working on AWS performance, cost, and security optimisation
- Should have experience implementing automated patching and hardening of systems
- Should be involved in P1 tickets and also guide the team wherever needed
- Creating Backups and Managing Disaster Recovery
- Experience in using Infra as a code automation using scripts & tools like CloudFormation and Terraform
- Any exposure towards creating CI/CD pipelines on AWS using CodeBuild, CodeDeploy, etc. is an advantage
- Experience with Docker, Bitbucket, ELK and deploying applications on AWS
- Good understanding of Containerisation technologies like Docker, Kubernetes etc.
- Should have experience using and configuring cloud monitoring tools and ITSM ticketing tools
- Good exposure to logging & monitoring tools like Dynatrace, Prometheus, Grafana, ELK/EFK
- Provision dev/test/prod infrastructure as code using IaC (Infrastructure as Code)
- Good knowledge on Terraform
- In-depth knowledge of security and IAM / Role Based Access Controls in Azure, management of Azure Application/Network Security Groups, Azure Policy, and Azure Management Groups and Subscriptions.
- Experience with Azure and GCP compute, storage and networking (we can also look for GCP )
- Experience in working with ADLS Gen2, Databricks and Synapse Workspace
- Experience supporting cloud development pipelines using Git, CI/CD tooling, Terraform and other Infrastructure as Code tooling as appropriate
- Configuration Management (e.g. Jenkins, Ansible, Git, etc...)
- General automation including Azure CLI, or Python, PowerShell and Bash scripting
- Experience with Continuous Integration/Continuous Delivery models
- Knowledge of and experience in resolving configuration issues
- Understanding of software and infrastructure architecture
- Experience in PaaS, Terraform, and AKS
- Monitoring, alerting, and logging tools, and build/release processes; understanding of computing technologies across Windows and Linux
Experienced with Azure DevOps, CI/CD, and Jenkins.
Experience needed in Kubernetes (AKS), Ansible, Terraform, Docker.
Good understanding of Azure networking, Azure Application Gateway, and other Azure components.
Experienced Azure DevOps Engineer ready for a Senior role or already at a Senior level.
Demonstrable experience with the following technologies:
Microsoft Azure Platform as a Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions, and other serverless services.
Understanding of Microsoft identity and access management products, including Azure AD and Azure AD B2C.
Microsoft Azure Operational and Monitoring tools, including Azure Monitor, App Insights and Log Analytics.
Knowledge of PowerShell, GitHub, ARM templates, version controls/hotfix strategy and deployment automation.
Ability and desire to quickly pick up new technologies, languages, and tools
Excellent communication skills and a good team player.
A passion for code quality and best practices is an absolute must.
Must show evidence of your passion for technology and continuous learning

This company provides on-demand cloud computing platforms.

- 15+ years of Hands-on technical application architecture experience and Application build/ modernization experience
- 15+ years of experience as a technical specialist in Customer-facing roles.
- Ability to travel to client locations as needed (25-50%)
- Extensive experience architecting, designing and programming applications in an AWS Cloud environment
- Experience with designing and building applications using AWS services such as EC2, AWS Elastic Beanstalk, AWS OpsWorks
- Experience architecting highly available systems that utilize load balancing, horizontal scalability and high availability
- Hands-on programming skills in any of the following: Python, Java, Node.js, Ruby, .NET or Scala
- Agile software development expert
- Experience with continuous integration tools (e.g. Jenkins)
- Hands-on familiarity with CloudFormation
- Experience with configuration management platforms (e.g. Chef, Puppet, Salt, or Ansible)
- Strong scripting skills (e.g. Powershell, Python, Bash, Ruby, Perl, etc.)
- Strong practical application development experience on Linux and Windows-based systems
- Extracurricular software development passion (e.g. active open-source contributor)
Must have:
- Proficient experience of a minimum of 4 years in DevOps, with at least one end-to-end DevOps project implementation.
- Strong expertise in DevOps concepts like Continuous Integration (CI), Continuous Delivery (CD), Infrastructure as Code, and cloud deployments.
- Minimum 2.5–3 years of configuration, development, and deployment experience with the underlying technologies, including Docker/Kubernetes and Prometheus.
- Should have implemented an end-to-end DevOps pipeline using Jenkins or a similar framework.
- Experience with microservices architecture.
- Sound knowledge of branching and merging strategies.
- Experience with cloud computing technologies like Oracle Cloud (preferred)/GCP/AWS/OpenStack; strong experience in AWS/Azure/GCP/OpenStack deployment processes and dockerization.
- Good experience with release management tools like JIRA or similar.

Good to have:
- Knowledge of infra automation tools: Terraform/Chef/Ansible (preferred).
- Experience with test automation tools like Selenium/Cucumber/Postman.
- Good communication skills to present DevOps solutions to the client and drive the implementation.
- Experience in creating and managing custom operational and monitoring scripts.
- Good knowledge of source control tools like Subversion, Git, Bitbucket, ClearCase.
- Experience in system architecture design.
Job Description
Who are we looking for?
A senior-level Java/J2EE lead to manage a critical project for one of the biggest clients in the banking domain. The individual should be passionate about technology and experienced in developing and managing cutting-edge technology applications.
We are looking for people from a trading background.
Technical Skills:
- An excellent tech lead or application architect with strong experience in both monolithic Java legacy applications and modern cloud-native applications
- Strong hands-on experience in Spring and Core Java, specifically multi-threading, concurrency, and memory management, and a fair understanding of network communication & protocols.
- Experience working in software development on low-latency and high-performing systems
- Experienced in guiding & mentoring offshore team members and validating application deliverables on a regular basis
- Ability to work in a collaborative manner with peers across different time zones.
- Passionate about good design and code quality and have strong engineering practices
- Experience working on GCP will be preferred.
Process Skills:
- Experience in analyzing requirements and developing software per the project-defined software process
- Develop and review design and code
- Develop and document architecture frameworks, technical standards, and application roadmaps
- Guide development teams to comply with the architecture and development standards and ensure quality application is designed, developed, and delivered
- Must have excellent communication skills.
Behavioral Skills:
- Resolve technical issues of projects and Explore alternate designs
- Effectively collaborates and communicates with the stakeholders and ensure client satisfaction
- Mentor, Train and coach members of project groups to ensure effective knowledge management activity.
Certification:
- Sun Java and GCP certified
- Experience building large scale, large volume services & distributed apps., taking them through production and post-production life cycles
- Experience in Programming Language: Java 8, Javascript
- Experience in Microservice Development or Architecture
- Experience with Web Application Frameworks: Spring or Springboot or Micronaut
- Designing: High Level/Low-Level Design
- Development Experience: Agile/ Scrum, TDD(Test Driven Development)or BDD (Behaviour Driven Development) Plus Unit Testing
- Infrastructure Experience: DevOps, CI/CD Pipeline, Docker/ Kubernetes/Jenkins, and Cloud platforms like – AWS, AZURE, GCP, etc
- Experience on one or more Database: RDBMS or NoSQL
- Experience on one or more Messaging platforms: JMS/RabbitMQ/Kafka/Tibco/Camel
- Security (Authentication, scalability, performance monitoring)
Implementing various development, testing, automation tools, and IT infrastructure
Planning the team structure, activities, and involvement in project management activities.
Managing stakeholders and external interfaces
Setting up tools and required infrastructure
Defining and setting development, test, release, update, and support processes for DevOps operation
Have the technical skill to review, verify, and validate the software code developed in the project.
Troubleshooting and fixing code bugs
Monitoring processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and minimizing waste
Encouraging and building automated processes wherever possible
Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management
Incident management and root cause analysis
Coordination and communication within the team and with customers
Selecting and deploying appropriate CI/CD tools
Strive for continuous improvement and build continuous integration, continuous development, and constant deployment pipeline (CI/CD Pipeline)
Mentoring and guiding the team members
Monitoring and measuring customer experience and KPIs
Managing periodic reporting on the progress to the management and the customer
• Support software build and release efforts:
• Create, set up, and maintain builds
• Review build results and resolve build problems
• Create and Maintain build servers
• Plan, manage, and control product releases
• Validate, archive, and escrow product releases
• Maintain and administer configuration management tools, including source control, defect management, project management, and other systems.
• Develop scripts and programs to automate processes and integrate tools.
• Resolve help desk requests from worldwide product development staff.
• Participate in team and process improvement projects.
• Interact with product development teams to plan and implement tool and build improvements.
• Perform other duties as assigned.
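As a small illustration of the kind of build-automation script the responsibilities above describe (the log format and function name here are hypothetical, not tied to any specific build tool):

```python
import re

def summarize_build_log(lines):
    """Count errors and warnings in build output so a release script can
    decide whether a build is clean. Lines starting with 'ERROR'/'WARNING'
    are a hypothetical format used only for this sketch."""
    summary = {"errors": 0, "warnings": 0}
    for line in lines:
        if re.match(r"^ERROR\b", line):
            summary["errors"] += 1
        elif re.match(r"^WARNING\b", line):
            summary["warnings"] += 1
    return summary

log = [
    "Compiling module core",
    "WARNING: deprecated API used in core/io.cs",
    "ERROR: unresolved reference in core/net.cs",
]
result = summarize_build_log(log)
```

A script like this would typically run as a post-build step (e.g. a pipeline task) and fail the build when the error count is non-zero.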
While the job description describes what is anticipated as the requirements of the position, the job requirements are subject to change based upon any changing needs and requirements of the business.
Required Skills
• TFS 2017 vNext Builds or Azure DevOps build processes
• Must have PowerShell 3.0+ scripting knowledge
• Exposure to build tools like MSBuild, NAnt, Xcode.
• Exposure to creating and maintaining vCenter/VMware vSphere 6.5
• Hands-on experience with Windows Server 2012 and above, and basic knowledge of macOS
• Good to have Shell or Batch scripting (optional)
Required Experience
Candidates for this position should hold the following qualifications to be considered as a suitable applicant. Please note that except where specified as “preferred,” or as a “plus,” all points listed below are considered minimum requirements.
• Bachelor's degree in a related discipline is strongly preferred
• 3 or more years of experience with software configuration management tools, concepts, and processes.
• Exposure to source control systems such as TFS, Git, or Subversion (optional)
• Familiarity with object-oriented concepts and programming in C# and PowerShell scripting.
• Experience working on AzureDevOps Builds or vNext Builds or Jenkins Builds
• Experience working with developers to resolve development issues related to source control systems.
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
- Role Overview
- Senior Engineer with a strong background and experience in cloud related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team. Can actively hands-on configure and build cloud architectures and guide others.
- Key Knowledge
- 3-5+ years of experience in AWS/GCP or Azure technologies
- Is likely certified on one or more of the major cloud platforms
- Strong experience from hands-on work with technologies such as Terraform, K8S, Docker and orchestration of containers.
- Ability to guide and lead internal agile teams on cloud technology
- Background from the financial services industry or similar critical operational experience
Experience: 3+ years of experience in Cloud Architecture
Cloud Architect / Lead


Technical Proficiency :
Must have :
- Strong development experience in Python in the environment of Unix/Linux/Ubuntu
- Strong practical knowledge of Python and its libraries.
- Current working experience with cloud deployment of AWS/Azure/GCP, Microservice architecture, and Docker in Python.
- Good knowledge of CI/CD and DevOps practices
- Good experience with Python using Django/Scrapy/Flask frameworks.
- Good experience with Jupyter, Docker, Elasticsearch, etc.
- Solid understanding of software development principles and best practices.
- Strong analytical thinking and problem-solving skills.
- Proven ability to drive large-scale projects with a deep understanding of Agile SDLC, high collaboration, and leadership.
Good to have :
- Expected to have migration experience from one version to the other, as this project is about migration to the latest version.
- Preferred if had an OpenEdx platform experience or any LMS platform.


We are a tech venture which provides Product Engineering, QA Automation, Infrastructure, Data, and Market Research services.
Datametica is looking for talented BigQuery engineers
Total Experience – 2+ yrs.
Notice Period – 0–30 days
Work Location – Pune, Hyderabad
Job Description:
- Sound understanding of Google Cloud Platform; should have worked on BigQuery, Workflow, or Composer
- Experience in migrating to GCP and integration projects in large-scale environments; ETL technical design, development, and support
- Good SQL skills and Unix scripting; programming experience with Python, Java, or Spark would be desirable
- Experience in SOA and services-based data solutions would be advantageous
About the Company:
www.datametica.com
Datametica is among the world's leading Cloud and Big Data analytics companies.
Datametica was founded in 2013 and has grown at an accelerated pace within a short span of 8 years. We provide a broad and capable set of services that encompass a vision of success, driven by innovation and value addition, helping organizations make strategic decisions that influence business growth.
Datametica is the global leader in migrating legacy data warehouses to the cloud. Datametica moves data warehouses to the cloud faster, at lower cost, and with fewer errors, even running in parallel with full data validation for months.
Datametica's specialized team of Data Scientists has implemented award-winning analytical models for use cases involving both unstructured and structured data.
Datametica has earned the highest level of partnership with Google, AWS, and Microsoft, which enables Datametica to deliver successful projects for clients across industry verticals at a global level, with teams deployed in the USA, EU, and APAC.
Recognition:
We are gratified to be recognized as a Top 10 Big Data Global Company by CIO story.
If it excites you, please apply.
Key Skills Required:
· You will be part of the DevOps engineering team, configuring project environments and troubleshooting integration issues in different systems, and will also be involved in building new features for the next generation of cloud recovery services and managed services.
· You will directly guide the technical strategy for our clients and build out a new DevOps capability within the company to improve our business relevance for customers.
· You will coordinate with the Cloud and Data teams on their requirements, verify the configurations required for each production server, and come up with scalable solutions.
· You will be responsible for reviewing the infrastructure and configuration of microservices and the packaging and deployment of applications.
To be the right fit, you'll need:
· Expert in Cloud Services like AWS.
· Experience in Terraform Scripting.
· Experience in container technology like Docker and orchestration like Kubernetes.
· Good knowledge of frameworks such as Jenkins, Bamboo, and CI/CD pipelines
· Experience with various version control systems like Git, build tools (Maven, Ant, Gradle), and cloud automation tools (Chef, Puppet, Ansible)

Job Responsibilities
- Design, build & test ETL processes using Python & SQL for the corporate data warehouse
- Inform, influence, support, and execute our product decisions
- Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
- Evaluate and prototype new technologies in the area of data processing
- Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
- High energy level, strong team player and good work ethic
- Data analysis, understanding of business requirements and translation into logical pipelines & processes
- Identification, analysis & resolution of production & development bugs
- Support the release process including completing & reviewing documentation
- Configure data mappings & transformations to orchestrate data integration & validation
- Provide subject matter expertise
- Document solutions, tools & processes
- Create & support test plans with hands-on testing
- Peer reviews of work developed by other data engineers within the team
- Establish good working relationships & communication channels with relevant departments
Skills and Qualifications we look for
- University degree 2.1 or higher (or equivalent) in a relevant subject. Master’s degree in any data subject will be a strong advantage.
- 4-6 years of experience in data engineering.
- Strong coding ability and software development experience in Python.
- Strong hands-on experience with SQL and Data Processing.
- Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
- Good working experience in at least one ETL tool (Airflow preferred).
- Strong analytical and problem-solving skills.
- Good-to-have skills: Apache PySpark, CircleCI, Terraform
- Motivated, self-directed, able to work with ambiguity and interested in emerging technologies, agile and collaborative processes.
- Understanding & experience of agile / scrum delivery methodology
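As a minimal sketch of the Python-and-SQL ETL work described above (table names and validation rules are illustrative, and an in-memory SQLite database stands in for the actual warehouse), an extract-transform-load step might look like:

```python
import sqlite3

def run_etl(conn: sqlite3.Connection, rows) -> int:
    """Load raw rows into staging, then transform and load into the
    warehouse table, dropping records with missing or negative amounts.
    Returns the number of rows loaded into the warehouse."""
    conn.execute("CREATE TABLE IF NOT EXISTS staging (name TEXT, amount REAL)")
    conn.execute("CREATE TABLE IF NOT EXISTS warehouse (name TEXT, amount REAL)")
    # Extract: land the raw records unchanged in staging.
    conn.executemany("INSERT INTO staging VALUES (?, ?)", rows)
    # Transform + load: normalise names, filter out invalid amounts.
    conn.execute("""
        INSERT INTO warehouse
        SELECT UPPER(TRIM(name)), amount
        FROM staging
        WHERE amount IS NOT NULL AND amount >= 0
    """)
    conn.commit()
    return conn.execute("SELECT COUNT(*) FROM warehouse").fetchone()[0]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    loaded = run_etl(conn, [(" alice ", 10.0), ("bob", None), ("eve", -5.0)])
    print(loaded)  # only the valid records survive the transform
```

In the GCP stack named above, the same shape appears with Cloud Storage as the landing zone, BigQuery SQL for the transform, and Cloud Composer (Airflow) orchestrating the steps.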
A strong technologist at Hitwicket cares about code modularity, scalability and reusability and thrives in a complex and ambiguous environment.
1. Design and develop game features from prototype to full implementation
2. Have strong Unity 3D programming skills in C# and JavaScript to craft cutting-edge, immersive mobile gaming experiences
3. Understand cross-platform development and client-server applications to give players a seamless and fun mobile gaming experience
4. Implement game functionality by translating design ideas, concepts, and requirements into functional and engaging features in the game
5. Get involved in all areas of game development including graphics, game logic, and user interface
6. Take ownership of the game and work closely with the Product and Design teams
7. Help maintain code quality by writing robust code to be used by millions of users
8. Develop crazy new experiences for our players and think outside the box to deliver something new
9. Be proactive, support and contribute new ideas to game design.
10. Work experience of 4 - 10 years is preferable
11. Experience of team management is a plus
12. A broad skill set is a strength! We are a small team, so cross discipline skills are highly valued.
What we offer you?
- Casual dress every single day
- 5 days a week
- Well stocked pantry
- Work with cool people and delight millions of gamers across the globe
- ESOPs and other incentives
About Us:
What is Hitwicket?
Hitwicket is a strategy-based cricket game played by a diverse community of over 3 million players across the world! Our mission is to build the world’s best cricket game & be the first mobile Esports IP from India!
We’re a Series A funded technology startup based in Hyderabad, co-founded by VIT alumni. We are backed by Prime Venture Partners, one of India’s oldest and most successful venture funds - https://primevp.in/
Hitwicket Superstars won the First prize in Prime Minister’s AatmaNirbhar Bharat App Innovation Challenge, a nation-wide contest to identify the top homegrown startups who are building for the Global market; Made in India, for India & the World!
What is next?
With the phenomenal success of our cricket game, we are now entering the world of Football, NFTs & Blockchain gaming! We are assembling a team to join us on our mission to make something as massive as PUBG or Clash of Clans from India.
How are we unique?
- 1st Place (Gaming) - Prime Minister’s Atma Nirbhar App Challenge
- Selected among the 'Top Gaming Startups in Asia' for the inaugural batch of Google Games Accelerator program 2018 in Singapore
- Selected among the 'Global Top 10 Startups in Sports Innovation' by HYPE, UK; the competition was held at the University Olympic Games in Taiwan
- "Best App" Award presented by the IT Minister at the HYSEA Annual Product Summit
Our work culture is driven by speed, innovation and passion, not by hierarchy. Our work philosophy is centered around building a company like a 'Sports Team' where each member has an important role to play in the success of the company as a whole.
It doesn't matter if you are not a cricket fan or a gamer; what matters to us are your problem-solving skills and your creativity.
Join us on this EPIC journey and make memories for a lifetime!
How to know more about us?
Download us: Android - https://play.google.com/store/apps/details?id=cricketgames.hitwicket.strategy | iOS - https://apps.apple.com/in/app/hitwicket-superstars-2020/id1498437026 | Website - https://hitwicket.com/
Media Coverage: The Hindu Business Line - https://www.thehindubusinessline.com/info-tech/hitwicket-raises-3-million-from-prime-venture-partners/article66091944.ece | Deccan Chronicle - https://www.deccanchronicle.com/business/companies/031122/hitwicket-launches-cricket-strategy-game-hitwicket-superstars.html | Telangana Today - https://telanganatoday.com/hyderabad-hitwicket-raises-rs-24-crore-launches-hitwicket-superstars | INC42 - https://inc42.com/buzz/hitwicket-bags-funding-to-build-strategy-driven-cricket-gaming-studio/ | VCCircle - https://www.vccircle.com/leapfroglaunches-climate-investment-strategy-appoints-new-partner
Watch us:
Hitwicket Superstars Launch (IGDC 2022) - https://youtu.be/Q3arlzDf2KY | YourStory - https://yourstory.com/2022/10/keerti-singh-kashyap-reddy--india-cricket-web3-hitwicket | Hitwicket Story - https://play.google.com/console/about/weareplay/ | Ms Keerti's Interview on NDTV - https://youtu.be/K16vvQt6sAM
How to get connected with us?
Hitwicket Discord Global Community - https://discord.gg/aTnZswv9 | YouTube - https://youtube.com/channel/UCQ_0a_kBzap0nJySx668Ekg | LinkedIn - https://www.linkedin.com/company/metasports-media/ | Twitter - https://twitter.com/HitwicketGame | Instagram - https://instagram.com/hitwicketsuperstars | Facebook - https://m.facebook.com/HitwicketSuperstarsCricketGame | Reddit - https://www.reddit.com/r/hitwicketsuperstars | Medium - https://superstars-35181.medium.com/
Position Summary
DevOps is a department within Horizontal Digital, comprising three practices:
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is for a Cloud Engineering role with some experience in infrastructure migrations. It is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; you will also work on projects building out the Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of our infrastructure work is for Sitecore.
Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-premises, on IaaS or PaaS, or in containers.
So most of our DevOps work is currently planning, architecting, and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of the technical, commercial, and service elements related to cloud migration and infrastructure deployments.
- The person selected for this position will ensure high customer satisfaction while delivering infrastructure and migration projects.
- Candidates must expect to work across multiple projects in parallel and must have a fully flexible approach to working hours.
- Candidates should keep themselves updated on the rapid technological advancements taking place in the industry.
- Candidates should also have know-how of Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.
Requirements:
- Bachelor’s degree in computer science or equivalent qualification.
- Total work experience of 6 to 8 Years.
- Total migration experience of 4 to 6 Years.
- Multiple Cloud Background (Azure/AWS/GCP)
- Implementation knowledge of VMs and VNets.
- Know-how of cloud readiness and assessment.
- Good understanding of the 6 Rs of migration (rehost, replatform, repurchase, refactor, retire, retain).
- Detailed understanding of the cloud offerings
- Ability to Assess and perform discovery independently for any cloud migration.
- Working experience with containers and Kubernetes.
- Good knowledge of Azure Site Recovery, Azure Migrate, and CloudEndure.
- Understanding of vSphere and Hyper-V virtualization.
- Working experience with Active Directory.
- Working experience with AWS CloudFormation/Terraform templates.
- Working experience with VPN, ExpressRoute, peering, Network Security Groups, route tables, NAT Gateway, etc.
- Experience working with CI/CD tools such as Octopus Deploy, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, and GitHub Actions.
- High availability and disaster recovery implementations, taking RTO and RPO requirements into consideration.
- Candidates with AWS/Azure/GCP Certifications will be preferred.
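The RTO and RPO considerations mentioned in the requirements above reduce to simple arithmetic when planning a disaster recovery design. As a minimal sketch (the targets below are illustrative, not tied to any client SLA): worst-case data loss equals one full interval between backups, so a schedule meets an RPO when that interval fits within it, and a recovery plan meets an RTO when measured end-to-end restore time fits within it.

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss is one full interval between backups,
    so the schedule meets the RPO when interval <= RPO."""
    return backup_interval <= rpo

def meets_rto(restore_minutes: float, rto_minutes: float) -> bool:
    """Recovery meets the RTO when the measured end-to-end restore
    time (detection + failover + validation) fits within it."""
    return restore_minutes <= rto_minutes

if __name__ == "__main__":
    # Illustrative targets only.
    print(meets_rpo(timedelta(minutes=15), timedelta(hours=1)))  # True
    print(meets_rto(restore_minutes=90, rto_minutes=60))         # False
```

Tools like Azure Site Recovery report achieved RPO per replicated item, which can be checked against targets like these during DR drills.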
Job Responsibilities:
- Be a part of a team developing a rapidly growing product used by thousands of small businesses.
- Responsible for building large-scale applications that are high-performance, scalable, and resilient in a microservices environment
- Evaluate new technologies and tools such as new frameworks, methodologies, best practices and other areas that will improve overall efficiencies and product quality
Skill Set:
- 3+ years of working experience developing API-centric core Java/J2EE applications using Spring Boot, JPA, REST APIs, XML, and JSON
- Experience in frontend framework - any one of Backbone, Angular, Vue or React
- Working experience in any one of the Cloud Platforms - Google Cloud (preferable) or AWS
- Experience with large-scale NoSQL databases such as MongoDB.
- Hands on experience in Eclipse based development and using Git, SVN, Junit
- Past experience in startups or product development is a big plus.