50+ AWS (Amazon Web Services) Jobs in India

- 2+ years of hands-on experience in Java development.
- Strong knowledge of Boto3/Boto AWS libraries (Python).
- Solid experience with AWS services: EC2, ELB/ALB, and CloudWatch (see the sketch after this list).
- Familiarity with SRE practices and maintenance processes.
- Strong experience in debugging, troubleshooting, and unit testing.
- Proficiency with Git and CI/CD tools.
- Understanding of distributed systems and cloud-native architecture.
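As a rough illustration of the Boto3 and CloudWatch items above, the sketch below lists running EC2 instances and publishes their count as a custom CloudWatch metric. The region, namespace, and metric names are placeholder assumptions, not details from this posting.

```python
"""Minimal Boto3 sketch (illustrative only): count running EC2 instances and
publish the count as a custom CloudWatch metric. Region, namespace, and metric
names are placeholder assumptions."""
import boto3

def publish_running_instance_count(region: str = "ap-south-1") -> int:
    ec2 = boto3.client("ec2", region_name=region)
    cloudwatch = boto3.client("cloudwatch", region_name=region)

    # Count instances currently in the "running" state.
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    count = sum(len(r["Instances"]) for page in pages for r in page["Reservations"])

    # Publish the count as a custom metric for dashboards or alarms.
    cloudwatch.put_metric_data(
        Namespace="Custom/Fleet",
        MetricData=[{"MetricName": "RunningInstances", "Value": count, "Unit": "Count"}],
    )
    return count

if __name__ == "__main__":
    print(publish_running_instance_count())
```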
A backend developer is an engineer who handles the work of databases, servers,
systems engineering, and client integration. Depending on the project, customers may
need a mobile stack, a web stack, or a native application stack.
You will be responsible for:
Build reusable code and libraries for future use.
Own & build new modules/features end-to-end independently.
Collaborate with other team members and stakeholders.
Required Skills :
Thorough understanding of Node.js and TypeScript.
Excellence in at least one framework such as StrongLoop LoopBack, Express.js, or Sails.js.
Basic architectural understanding of modern-day web applications.
Diligence in following coding standards.
Must be proficient with Git and Git workflows.
Experience with external integrations is a plus.
Working knowledge of AWS, GCP, or Azure.
Expertise with Linux-based systems.
Experience with CI/CD tools like Jenkins is a plus.
Experience with testing and automation frameworks.
Extensive understanding of relational database systems (RDBMS).


Job description
Required Skills & Qualifications (Minimum Experience: 3 Years)
- Proven experience as a Full Stack Developer or similar role.
- Proficiency in front-end technologies such as HTML, CSS, JavaScript, and frameworks like React, Angular, or Vue.js.
- Strong backend development experience with Node.js, Python, Ruby, Java, or .NET.
- Familiarity with database technologies such as SQL, MongoDB, or PostgreSQL.
- Experience with RESTful APIs and/or GraphQL.
- Knowledge of cloud platforms (AWS, Azure, Google Cloud) is a plus.
- Familiarity with version control tools, such as Git.
- Excellent problem-solving skills and attention to detail.
- Ability to work independently as well as collaboratively in a team environment.
- Strong communication and interpersonal skills for effective client interaction.
- Proven ability to manage projects, prioritize tasks, and meet deadlines.
Job Type: Full-time
Pay: ₹40,000.00 - ₹60,000.00 per month
Location Type:
- In-person
Schedule:
- Fixed shift
Experience:
- Full-stack development: 3 years (Required)

Job Summary:
We are looking for a motivated and detail-oriented Data Engineer with 1–2 years of experience to join our data engineering team. The ideal candidate should have solid foundational skills in SQL and Python, along with exposure to building or maintaining data pipelines. You’ll play a key role in helping to ingest, process, and transform data to support various business and analytical needs.
Key Responsibilities:
- Assist in the design, development, and maintenance of scalable and efficient data pipelines.
- Write clean, maintainable, and performance-optimized SQL queries.
- Develop data transformation scripts and automation using Python.
- Support data ingestion processes from various internal and external sources.
- Monitor data pipeline performance and help troubleshoot issues.
- Collaborate with data analysts, data scientists, and other engineers to ensure data quality and consistency.
- Work with cloud-based data solutions and tools (e.g., AWS, Azure, GCP – as applicable).
- Document technical processes and pipeline architecture.
Core Skills Required:
- Proficiency in SQL (data querying, joins, aggregations, performance tuning).
- Experience with Python, especially in the context of data manipulation (e.g., pandas, NumPy); see the sketch at the end of this listing.
- Exposure to ETL/ELT pipelines and data workflow orchestration tools (e.g., Airflow, Prefect, Luigi – preferred).
- Understanding of relational databases and data warehouse concepts.
- Familiarity with version control systems like Git.
Preferred Qualifications:
- Experience with cloud data services (AWS S3, Redshift, Azure Data Lake, etc.)
- Familiarity with data modeling and data integration concepts.
- Basic knowledge of CI/CD practices for data pipelines.
- Bachelor’s degree in Computer Science, Engineering, or related field.
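To ground the SQL-plus-Python pipeline skills described above, here is a minimal, self-contained extract-transform-load sketch using sqlite3 and pandas. The table and column names are invented for illustration and do not come from the posting.

```python
"""Tiny ETL sketch (illustrative only): extract with SQL, transform with pandas,
load back into a warehouse-style table. Table and column names are invented."""
import sqlite3
import pandas as pd

def run_pipeline(conn: sqlite3.Connection) -> pd.DataFrame:
    # Extract: pull raw order rows with a SQL query.
    raw = pd.read_sql_query("SELECT order_id, amount, order_date FROM raw_orders", conn)

    # Transform: aggregate daily revenue.
    raw["order_date"] = pd.to_datetime(raw["order_date"])
    daily = (
        raw.groupby(raw["order_date"].dt.date)["amount"]
        .sum()
        .reset_index(name="daily_revenue")
    )

    # Load: write the curated table back for analytics consumers.
    daily.to_sql("daily_revenue", conn, if_exists="replace", index=False)
    return daily

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount REAL, order_date TEXT)")
    conn.executemany(
        "INSERT INTO raw_orders VALUES (?, ?, ?)",
        [(1, 120.0, "2024-01-01"), (2, 80.0, "2024-01-01"), (3, 50.0, "2024-01-02")],
    )
    print(run_pipeline(conn))
```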

Role description:
You will be building curated, enterprise-grade solutions for GenAI application deployment at production scale for clients. The role requires solid, hands-on development and engineering skills across GenAI application development, including data ingestion, selecting the right-fit LLMs, simple and advanced RAG, guardrails, prompt engineering for optimisation, traceability, security, LLM evaluation, observability, and deployment at scale on cloud or on premise. As this space evolves rapidly, candidates must also demonstrate knowledge of agentic AI frameworks. Candidates with a strong ML background and engineering skills are highly preferred for this LLMOps role.
Required skills:
- 4-8 years of experience working on ML projects, including business requirement gathering, model development, training, deployment at scale, and monitoring model performance for production use cases
- Strong knowledge of Python, NLP, data engineering, LangChain, Langtrace, Langfuse, RAGAS, and AgentOps (optional)
- Should have worked with both proprietary and open-source large language models
- Experience with LLM fine-tuning and creating distilled models from hosted LLMs
- Building data pipelines for model training
- Experience with model performance tuning, RAG, guardrails, prompt engineering, evaluation, and observability (a minimal RAG sketch follows this list)
- Experience in GenAI application deployment on cloud and on-premise at scale for production
- Experience in creating CI/CD pipelines
- Working knowledge of Kubernetes
- Experience with at least one cloud (AWS, GCP, or Azure) for deploying AI services
- Experience in creating workable prototypes using agentic AI frameworks such as CrewAI, TaskWeaver, and AutoGen
- Experience in lightweight UI development using Streamlit or Chainlit (optional)
- Desired: experience with open-source tools for ML development, deployment, observability, and integration
- A background in DevOps and MLOps will be a plus
- Experience with collaborative version-control platforms such as GitHub or GitLab
- Team player with good communication and presentation skills
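The sketch below shows the retrieval-and-prompt-assembly step of a RAG pipeline in plain Python. The `toy_embed` vectorizer is a deliberately naive stand-in; a production deployment would call a real embedding model and a hosted or open-source LLM instead.

```python
"""Minimal sketch of the retrieval step in a RAG pipeline (illustrative only).
`toy_embed` is a naive bag-of-words vectorizer standing in for a real embedding
model; a production pipeline would call an embedding service and an LLM."""
from collections import Counter
import math

def toy_embed(text: str) -> Counter:
    # Placeholder embedding: lower-cased word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top_k.
    q = toy_embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, toy_embed(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    corpus = [
        "Guardrails constrain model output.",
        "RAG grounds answers in retrieved context.",
    ]
    print(build_prompt("What does RAG do?", corpus))
```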
Job Title: IT Head – Fintech Industry
Department: Information Technology
Location: Andheri East
Reports to: COO
Job Type: Full-Time
Job Overview:
The IT Head in a fintech company is responsible for overseeing the entire information technology infrastructure, including the development, implementation, and maintenance of IT systems, networks, and software solutions. The role involves leading the IT team, managing technology projects, ensuring data security, and ensuring the smooth functioning of all technology operations. As the company scales, the IT Head will play a key role in enabling digital innovation, optimizing IT processes, and ensuring compliance with relevant regulations in the fintech sector.
Key Responsibilities:
1. IT Strategy and Leadership
- Develop and execute the company’s IT strategy to align with the organization’s overall business goals and objectives, ensuring the integration of new technologies and systems.
- Lead, mentor, and manage a team of IT professionals, setting clear goals, priorities, and performance expectations.
- Stay up-to-date with industry trends and emerging technologies, providing guidance and recommending innovations to improve efficiency and security.
- Oversee the design, implementation, and maintenance of IT systems that support fintech products, customer experience, and business operations.
2. IT Infrastructure Management
- Oversee the management and optimization of the company’s IT infrastructure, including servers, networks, databases, and cloud services.
- Ensure the scalability and reliability of IT systems to support the company’s growth and increasing demand for digital services.
- Manage system updates, hardware procurement, and vendor relationships to ensure that infrastructure is cost-effective, secure, and high-performing.
3. Cybersecurity and Data Protection
- Lead efforts to ensure the company’s IT infrastructure is secure, implementing robust cybersecurity measures to protect sensitive customer data, financial transactions, and intellectual property.
- Develop and enforce data protection policies and procedures to ensure compliance with data privacy regulations (e.g., GDPR, CCPA, RBI, etc.).
- Conduct regular security audits and vulnerability assessments, working with the security team to address potential risks proactively.
4. Software Development and Integration
- Oversee the development and deployment of software applications and tools that support fintech operations, including payment gateways, loan management systems, and customer engagement platforms.
- Collaborate with product teams to identify technological needs, integrate new features, and optimize existing products for improved performance and user experience.
- Ensure the seamless integration of third-party platforms, APIs, and fintech partners into the company’s core systems.
5. IT Operations and Support
- Ensure the efficient day-to-day operation of IT services, including helpdesk support, system maintenance, and troubleshooting.
- Establish service level agreements (SLAs) for IT services, ensuring that internal teams and customers receive timely support and issue resolution.
- Manage incident response, ensuring quick resolution of system failures, security breaches, or service interruptions.
6. Budgeting and Cost Control
- Manage the IT department’s budget, ensuring cost-effective spending on technology, software, hardware, and IT services.
- Analyze and recommend investments in new technologies and infrastructure that can improve business performance while optimizing costs.
- Ensure the efficient use of IT resources and the appropriate allocation of budget to support business priorities.
7. Compliance and Regulatory Requirements
- Ensure IT practices comply with relevant industry regulations and standards, such as financial services regulations, data privacy laws, and cybersecurity guidelines.
- Work with legal and compliance teams to ensure that all systems and data handling procedures meet industry-specific regulatory requirements (e.g., PCI DSS, ISO 27001).
- Provide input and guidance on IT-related regulatory audits and assessments, ensuring the organization is always in compliance.
8. Innovation and Digital Transformation
- Drive innovation by identifying opportunities for digital transformation within the organization, using technology to streamline operations and enhance the customer experience.
- Collaborate with other departments (marketing, customer service, product development) to introduce new fintech products and services powered by cutting-edge technology.
- Oversee the implementation of AI, machine learning, and other advanced technologies to enhance business performance, operational efficiency, and customer satisfaction.
9. Vendor and Stakeholder Management
- Manage relationships with external technology vendors, service providers, and consultants to ensure the company gets the best value for its investments.
- Negotiate contracts, terms of service, and service level agreements (SLAs) with vendors and technology partners.
- Ensure strong communication with business stakeholders, understanding their IT needs and delivering technology solutions that align with company objectives.
Qualifications and Skills:
Education:
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (Master’s degree or relevant certifications like ITIL, PMP, or CISSP are a plus).
Experience:
- 8-12 years of experience in IT management, with at least 4 years in a leadership role, preferably within the fintech, banking, or technology industry.
- Strong understanding of IT infrastructure, cloud computing, database management, and cybersecurity best practices.
- Proven experience in managing IT teams and large-scale IT projects, especially in fast-paced, growth-driven environments.
- Knowledge of fintech products and services, including digital payments, blockchain, and online lending platforms.
Skills:
- Expertise in IT infrastructure management, cloud services (AWS, Azure, Google Cloud), and enterprise software.
- Strong understanding of cybersecurity protocols, data protection laws, and IT governance frameworks.
- Experience with software development and integration, particularly for fintech platforms.
- Strong project management and budgeting skills, with a track record of delivering IT projects on time and within budget.
- Excellent communication and leadership skills, with the ability to manage cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
- Ability to manage multiple priorities in a fast-paced, high-pressure environment.

Role Summary:
AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 2+ years of prior experience in data engineering, with a strong background in AWS (Amazon Web Services) technologies. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.
Responsibilities:
· Design, develop, and maintain scalable data pipelines and ETL processes leveraging AWS services such as S3, Glue, EMR, Lambda, and Redshift (see the sketch at the end of this listing).
· Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
· Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
· Implement data governance and security best practices to ensure compliance and data integrity.
· Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
· Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.
Qualifications:
· Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
· 2+ years of prior experience in data engineering, with a focus on designing and building data pipelines.
· Proficiency in AWS services, particularly S3, Glue, EMR, Lambda, and Redshift.
· Strong programming skills in languages such as Python, Java, or Scala.
· Experience with SQL and NoSQL databases, data warehousing concepts, and big data technologies.
· Familiarity with containerization technologies (e.g., Docker, Kubernetes) and orchestration tools (e.g., Apache Airflow) is a plus.
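To make the pipeline responsibilities above concrete, here is a minimal PySpark sketch of the kind of batch job that might run on EMR or Glue. The bucket paths and column names are placeholders, not details from this posting.

```python
"""Minimal PySpark batch-ETL sketch (illustrative only): read raw CSV files from
S3, apply a light transformation, and write partitioned Parquet back to S3.
Bucket paths and column names are placeholder assumptions."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def run(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: read raw CSVs (schema handling kept simple for the sketch).
    raw = spark.read.option("header", True).csv(input_path)

    # Transform: type the amount column and derive a partition date.
    curated = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date(F.col("order_date")))
    )

    # Load: write Parquet partitioned by order_date for efficient querying.
    curated.write.mode("overwrite").partitionBy("order_date").parquet(output_path)
    spark.stop()

if __name__ == "__main__":
    # Placeholder S3 locations for illustration only.
    run("s3://example-bucket/raw/orders/", "s3://example-bucket/curated/orders/")
```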

Job Description:
We are seeking a highly analytical and detail-oriented Data Analyst to join our team. The ideal candidate will have strong problem-solving skills, proficiency in SQL and AWS QuickSight, and a passion for extracting meaningful insights from data. You will be responsible for analyzing complex datasets, building reports and dashboards, and providing data-driven recommendations to support business decisions.
Key Responsibilities:
- Extract, transform, and analyze data from multiple sources to generate actionable insights.
- Develop interactive dashboards and reports in AWS QuickSight to visualize trends and key metrics.
- Write optimized SQL queries to retrieve and manipulate data efficiently.
- Collaborate with stakeholders to understand business requirements and provide analytical solutions.
- Identify patterns, trends, and statistical correlations in data to support strategic decision-making (see the sketch at the end of this listing).
- Ensure data integrity, accuracy, and consistency across reports.
- Continuously explore new tools, techniques, and methodologies to enhance analytical capabilities.
Qualifications & Skills:
- Strong proficiency in SQL for querying and data manipulation.
- Hands-on experience with AWS QuickSight for data visualization and reporting.
- Strong analytical thinking and problem-solving skills with the ability to interpret complex data.
- Experience working with large datasets and relational databases.
- Passion for slicing and dicing data to uncover key insights.
- Exceptional communication skills to effectively understand business requirements and present insights.
- A growth mindset and a strong appetite for continuous learning and improvement.
Preferred Qualifications:
- Experience with Python is a plus.
- Familiarity with cloud-based data environments (AWS, etc.).
- Familiarity with leveraging existing LLMs/AI tools to enhance productivity, automate repetitive tasks, and improve analysis efficiency.
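As a small illustration of the trend-analysis work above, the sketch below computes monthly revenue and month-over-month growth with pandas before the results would be visualized in a tool such as QuickSight. The sample data is invented.

```python
"""Illustrative trend analysis (not from the posting): compute monthly revenue
and month-over-month growth with pandas. Sample data is invented."""
import pandas as pd

def monthly_growth(orders: pd.DataFrame) -> pd.DataFrame:
    orders = orders.copy()
    orders["order_date"] = pd.to_datetime(orders["order_date"])
    monthly = (
        orders.set_index("order_date")
        .resample("MS")["revenue"]   # "MS" = calendar month start buckets
        .sum()
        .to_frame()
    )
    # Month-over-month growth as a percentage.
    monthly["mom_growth_pct"] = monthly["revenue"].pct_change() * 100
    return monthly

if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "order_date": ["2024-01-05", "2024-01-20", "2024-02-10", "2024-03-15"],
            "revenue": [100.0, 150.0, 300.0, 330.0],
        }
    )
    print(monthly_growth(sample).round(1))
```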

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it: in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform works “magic” on pre-construction workflows, cutting them from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python (see the sketch after this list)
- Create reliable, scalable backend systems that handle complex data
- Work on our web frontend using React JS
- Knowledge of Redux, React JS, HTML, and CSS is a must
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
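The Python-API bullet above might, for example, look like the minimal service sketch below. FastAPI, the endpoint shape, and the keyword "risk" logic are illustrative assumptions, not the company's actual stack.

```python
"""Illustrative Python API sketch only; FastAPI and the endpoint shape are
assumptions, not Palcode.ai's actual stack. The "risk score" is a stub."""
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class BidDocument(BaseModel):
    title: str
    text: str

@app.post("/analyze")
def analyze_bid(doc: BidDocument) -> dict:
    # Stub logic: flag a few risk keywords; a real service would call an AI model.
    keywords = ["penalty", "liquidated damages", "termination"]
    hits = [k for k in keywords if k in doc.text.lower()]
    return {"title": doc.title, "risk_flags": hits, "risk_score": len(hits)}

# Run locally with: uvicorn module_name:app --reload
```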
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You have 1 year of hands-on React JS development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of AWS, Razorpay, and Microsoft Startup programs, having access to rich talent to discuss and brainstorm ideas.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!

Work Mode: Hybrid
Need B.Tech, BE, M.Tech, ME candidates - Mandatory
Must-Have Skills:
● Educational Qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python programming language and SQL.
● Experience in Databricks and in setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket will vary according to the candidate's experience:
- Experience from 4 yrs to 6 yrs - Salary up to 22 LPA
- Experience from 5 yrs to 8 yrs - Salary up to 30 LPA
- Experience more than 8 yrs - Salary up to 40 LPA



Title - Principal Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
Principal Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Accountability for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 8-10 years of experience
- Sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
- Experience in backend development and Apache Airflow (or an equivalent framework); see the sketch at the end of this listing
- Ability to build APIs and optimize SQL queries with performance considerations.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.
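Since the qualifications above call out Apache Airflow, here is a bare-bones DAG sketch. Task and DAG names are invented, and exact parameter names (for example `schedule` vs. `schedule_interval`) vary slightly between Airflow releases.

```python
"""Bare-bones Apache Airflow DAG sketch (illustrative only). Task and DAG names
are invented; parameter names such as `schedule` vary slightly across
Airflow 2.x releases."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> str:
    # Placeholder for pulling data from a source system.
    return "raw data"

def load(**context) -> None:
    # Placeholder for writing transformed data to a target store.
    raw = context["ti"].xcom_pull(task_ids="extract")
    print(f"loading: {raw}")

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```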


Title - Sr Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
External Job Title :
Sr Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Accountability for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React.
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 4-6 years of experience
- Sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
- Experience in backend development and Apache Airflow (or an equivalent framework).
- Ability to build APIs and optimize SQL queries with performance considerations.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.
AuxoAI is seeking a Senior Platform Engineer (Lead) with strong AWS administration expertise to architect and manage cloud infrastructure while contributing to data engineering initiatives. The ideal candidate will have a deep understanding of cloud platforms, infrastructure as code, and data platform integrations. This role requires both technical leadership and hands-on implementation to ensure scalable, secure, and efficient infrastructure across the organization.
Key Responsibilities:
- Define, implement, and manage scalable platform architecture aligned with business and technical requirements.
- Lead AWS infrastructure design and operations including IAM, VPC, networking, security, and cost optimization.
- Design and optimize cloud-based storage, compute resources, and orchestration workflows to support data platforms.
- Collaborate with Data Engineers and DevOps teams to streamline deployment of data pipelines and infrastructure components.
- Automate infrastructure provisioning and management using Terraform, CloudFormation, or similar Infrastructure as Code (IaC) tools.
- Integrate platform capabilities with internal tools, analytics platforms, and business applications.
- Establish cloud engineering best practices including infrastructure security, reliability, and observability.
- Provide technical mentorship to engineering team members and lead knowledge-sharing initiatives.
- Monitor system performance, troubleshoot production issues, and implement solutions for reliability and scalability.
- Drive best practices in cloud engineering, security, and infrastructure as code (IaC).
Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field; or equivalent work experience.
- 6+ years of hands-on experience in platform engineering, DevOps, or cloud infrastructure roles.
- Expertise in AWS core services (IAM, EC2, S3, VPC, CloudWatch, etc.) and managing secure, scalable environments.
- Proficiency in Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools.
- Strong understanding of data platforms, pipelines, and workflow orchestration in cloud-native environments.
- Experience integrating infrastructure with CI/CD tools and workflows (e.g., GitHub Actions, Jenkins, GitLab CI).
- Familiarity with cloud security best practices, access management, and cost optimization strategies.
- Strong problem-solving and troubleshooting skills across cloud and data systems.
- Prior experience in a leadership or mentoring role is a plus.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
Roles and Responsibilities:
- AWS Cloud Management: Design, deploy, and manage AWS cloud infrastructure. Optimize and maintain cloud resources for performance and cost efficiency. Monitor and ensure the security of cloud-based systems.
- Automated Provisioning: Develop and implement automated provisioning processes for infrastructure deployment. Utilize tools like Terraform and Packer to automate and streamline the provisioning of resources.
- Infrastructure as Code (IaC): Champion the use of Infrastructure as Code principles. Collaborate with development and operations teams to define and maintain IaC scripts for infrastructure deployment and configuration.
- Collaboration and Communication: Work closely with cross-functional teams to understand project requirements and provide DevOps expertise. Communicate effectively with team members and stakeholders regarding infrastructure changes, updates, and improvements.
- Continuous Integration/Continuous Deployment (CI/CD): Implement and maintain CI/CD pipelines to automate software delivery processes. Ensure reliable and efficient deployment of applications through the development lifecycle.
- Performance Monitoring and Optimization: Implement monitoring solutions to track system performance, troubleshoot issues, and optimize resource utilization. Proactively identify opportunities for system and process improvements.
Mandatory Skills:
- Proven experience as a DevOps Engineer or similar role, with a focus on AWS.
- Strong proficiency in automated provisioning and cloud management.
- Experience with Infrastructure as Code tools, particularly Terraform and Packer.
- Solid understanding of CI/CD pipelines and version control systems.
- Strong scripting skills (e.g., Python, Bash) for automation tasks (see the sketch at the end of this listing).
- Excellent problem-solving and troubleshooting skills.
- Good interpersonal and communication skills for effective collaboration.
Secondary Skills:
- AWS certifications (e.g., AWS Certified DevOps Engineer, AWS Certified Solutions Architect).
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Knowledge of microservices architecture and serverless computing.
- Familiarity with monitoring and logging tools (e.g., CloudWatch, ELK stack).
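As a small example of the Python-for-automation skill listed above, the sketch below reports EC2 instances that are missing a required cost-allocation tag. The tag key and region are assumptions made for the example, not requirements from the posting.

```python
"""Illustrative automation sketch: report EC2 instances missing a required tag.
The tag key and region are assumptions for the example, not from the posting."""
import boto3

REQUIRED_TAG = "CostCenter"  # assumed governance tag

def instances_missing_tag(region: str = "ap-south-1") -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in instances_missing_tag():
        print(f"{instance_id} is missing the {REQUIRED_TAG} tag")
```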


About the Company – Gruve
Gruve is an innovative software services startup dedicated to empowering enterprise customers in managing their Data Life Cycle. We specialize in Cybersecurity, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence.
As a well-funded early-stage startup, we offer a dynamic environment, backed by strong customer and partner networks. Our mission is to help customers make smarter decisions through data-driven business strategies.
Why Gruve
At Gruve, we foster a culture of:
- Innovation, collaboration, and continuous learning
- Diversity and inclusivity, where everyone is encouraged to thrive
- Impact-focused work — your ideas will shape the products we build
We’re an equal opportunity employer and encourage applicants from all backgrounds. We appreciate all applications, but only shortlisted candidates will be contacted.
Position Summary
We are seeking a highly skilled Software Engineer to lead the development of an Infrastructure Asset Management Platform. This platform will assist infrastructure teams in efficiently managing and tracking assets for regulatory audit purposes.
You will play a key role in building a comprehensive automation solution to maintain a real-time inventory of critical infrastructure assets.
Key Responsibilities
- Design and develop an Infrastructure Asset Management Platform for tracking a wide range of assets across multiple environments.
- Build and maintain automation to track:
- Physical Assets: Servers, power strips, racks, DC rooms & buildings, security cameras, network infrastructure.
- Virtual Assets: Load balancers (LTM), communication equipment, IPs, virtual networks, VMs, containers.
- Cloud Assets: Public cloud services, process registry, database resources.
- Collaborate with infrastructure teams to understand asset-tracking requirements and convert them into technical implementations.
- Optimize performance and scalability to handle large-scale asset data in real-time.
- Document system architecture, implementation, and usage.
- Generate reports for compliance and auditing.
- Ensure integration with existing systems for streamlined asset management.
Basic Qualifications
- Bachelor’s or Master’s degree in Computer Science or a related field
- 3–6 years of experience in software development
- Strong proficiency in Golang and Python
- Hands-on experience with public cloud infrastructure (AWS, GCP, Azure)
- Deep understanding of automation solutions and parallel computing principles
Preferred Qualifications
- Excellent problem-solving skills and attention to detail
- Strong communication and teamwork skills


Full Stack Developer
Location: Hyderabad
Experience: 7+ Years
Type: BCS - Business Consulting Services
RESPONSIBILITIES:
* Strong programming skills in Node JS [Must], React JS, Android, and Kotlin [Must]
* Hands-on experience in UI development with a good sense of UX
* Hands-on experience in database design and management
* Hands-on experience creating and maintaining backend frameworks for mobile applications
* Hands-on development experience on cloud-based platforms like GCP/Azure/AWS
* Ability to manage and provide technical guidance to the team
* Strong experience in designing APIs using RAML, Swagger, etc.
* Service definition development
* API standards, security, and policy definition and management
REQUIRED EXPERIENCE:
* Bachelor’s and/or master's degree in computer science or equivalent work experience
* Excellent analytical, problem solving, and communication skills.
* 7+ years of software engineering experience in a multi-national company
* 6+ years of development experience in Kotlin, Node and React JS
* 3+ Year(s) experience creating solutions in native public cloud (GCP, AWS or Azure)
* Experience with Git or similar version control system, continuous integration
* Proficiency in automated unit test development practices and design methodologies
* Fluent English
CLOUDSUFI is seeking an Information Security Lead to oversee the organization's information security framework, ensuring the confidentiality, integrity, and availability of all data. This role involves developing and implementing security policies, managing risk assessments, and addressing compliance requirements. The Infosec Lead will also lead incident response efforts, conduct regular security audits, and collaborate with cross-functional teams to mitigate vulnerabilities. Strong expertise in cybersecurity tools, frameworks, and best practices is essential for this role.
Roles & Responsibilities
➢ Work independently with vendors and collaborate with colleagues.
➢ Experience negotiating remediation timelines and/or remediating found issues independently.
➢ Ability to implement vendor platforms within CI/CD pipelines.
➢ Experience managing/responding to incidents, collecting evidence, and making decisions.
➢ Work with vendors and internal teams to deploy criteria within WAF and finetune configurations based on application needs.
➢ Multitasking and maintaining a high level of concentration on assigned projects.
➢ Strong working knowledge of AWS security in general and familiarity with AWS native security tools.
➢ Promote security within the organization despite roadblocks, demonstrating resilience and persistence.
➢ Define and integrate DevSecOps security requirements in projects.
➢ Articulate security requirements during architecture meetings while collaborating with application and DevOps teams.
➢ Hands-on experience with various security tools and techniques, including:
➢ Trivy, Prowler, Port53, Snyk for container and application security.
➢ Kali Discovery and vulnerability scanning for penetration testing and threat assessment.
➢ Network and website penetration testing (PT) to identify and remediate security vulnerabilities.
➢ SAST and DAST tools for static and dynamic application security testing.
➢ API security testing
➢ Web/Mobile App SAST and DAST
Certification
➢ AWS Security / CISSP / CISM (Certified Information Security Manager)
Job Summary
We are seeking a skilled Infrastructure Engineer with 3 to 5 years of experience in Kubernetes to join our team. The ideal candidate will be responsible for managing, scaling, and securing our cloud infrastructure, ensuring high availability and performance. You will work closely with DevOps, SREs, and development teams to optimize our containerized environments and automate deployments.
Key Responsibilities:
- Deploy, manage, and optimize Kubernetes clusters in cloud and/or on-prem environments.
- Automate infrastructure provisioning and management using Terraform, Helm, and CI/CD pipelines.
- Monitor system performance and troubleshoot issues related to containers, networking, and storage.
- Ensure high availability, security, and scalability of Kubernetes workloads.
- Manage logging, monitoring, and alerting using tools like Prometheus, Grafana, and ELK stack.
- Optimize resource utilization and cost efficiency within Kubernetes clusters.
- Implement RBAC, network policies, and security best practices for Kubernetes environments.
- Work with CI/CD pipelines (Jenkins, ArgoCD, GitHub Actions, etc.) to streamline deployments.
- Collaborate with development teams to containerize applications and enhance performance.
- Maintain disaster recovery and backup strategies for Kubernetes workloads.
Required Skills & Qualifications:
- 3 to 5 years of experience in infrastructure and cloud technologies.
- Strong hands-on experience with Kubernetes (K8s), Helm, and container orchestration.
- Experience with cloud platforms (AWS, GCP, Azure) and managed Kubernetes services (EKS, GKE, AKS).
- Proficiency in Terraform, Ansible, or other Infrastructure as Code (IaC) tools.
- Knowledge of Linux system administration, networking, and security.
- Experience with Docker, container security, and runtime optimizations.
- Hands-on experience with monitoring, logging, and observability tools.
- Scripting skills in Bash, Python, or Go for automation (see the sketch below).
- Good understanding of CI/CD pipelines and deployment automation.
- Strong troubleshooting skills and experience handling production incidents.
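To ground the scripting requirement above, here is a small sketch using the official Kubernetes Python client to surface pods that are not in a Running or Succeeded state. It assumes a local kubeconfig and is illustrative only.

```python
"""Illustrative troubleshooting helper: list pods that are not Running/Succeeded.
Assumes the official `kubernetes` Python client and a local kubeconfig."""
from kubernetes import client, config

HEALTHY_PHASES = {"Running", "Succeeded"}

def unhealthy_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()  # use the active kubeconfig context
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in HEALTHY_PHASES:
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```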


Role Overview:
We are seeking a highly skilled and experienced Lead Web App Developer - Backend to join our dynamic team in Bengaluru. The ideal candidate will have a strong background in backend development, microservices architecture, and cloud technologies, with a proven ability to deliver robust, scalable solutions. This role involves designing, implementing, and maintaining complex distributed systems, primarily in the Mortgage Finance domain.
Key Responsibilities:
- Cloud-Based Web Applications Development:
  - Lead backend development efforts for cloud-based web applications.
  - Work on diverse projects within the Mortgage Finance domain.
- Microservices Design & Development:
  - Design and implement microservices-based architectures.
  - Ensure scalability, availability, and reliability of distributed systems.
- Programming & API Development:
  - Write efficient, reusable, and maintainable code in Python, Node.js, and Java.
  - Develop and optimize RESTful APIs (a minimal Lambda-backed sketch appears at the end of this listing).
- Infrastructure Management:
  - Leverage AWS platform infrastructure to build secure and scalable solutions.
  - Utilize tools like Docker for containerization and deployment.
- Database Management:
  - Work with RDBMS (MySQL) and NoSQL databases to design efficient schemas and optimize queries.
- Team Collaboration:
  - Collaborate with cross-functional teams to ensure seamless integration and delivery of projects.
  - Mentor junior developers and contribute to the overall skill development of the team.
Core Requirements:
- Experience: 10+ years in backend development, with at least 3 years of experience in designing and delivering large-scale products on microservices architecture.
- Technical Skills:
- Programming Languages: Python, Node.js, Java.
- Frameworks & Tools: AWS (Lambda, RDS, etc.), Docker.
- Database Expertise: Proficiency in RDBMS (MySQL) and NoSQL databases.
- API Development: Hands-on experience in developing REST APIs.
- System Design: Strong understanding of distributed systems, scalability, and availability.
Additional Skills (Preferred):
- Experience with modern frontend frameworks like React.js or AngularJS.
- Strong design and architecture capabilities.
What We Offer:
- Opportunity to work on cutting-edge technologies in a collaborative environment.
- Competitive salary and benefits package.
- Flexible hybrid working model.
- Chance to contribute to impactful projects in the Mortgage Finance domain.
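For a concrete flavor of the Lambda-backed REST work referenced in this listing, here is a minimal API Gateway proxy-style handler in Python. The route and payload shape are invented for illustration.

```python
"""Minimal AWS Lambda handler sketch (illustrative only) returning an API
Gateway proxy-style response. The route and payload shape are invented."""
import json

def handler(event, context):
    # API Gateway proxy integrations pass path parameters in the event.
    loan_id = (event.get("pathParameters") or {}).get("loanId", "unknown")

    body = {"loanId": loan_id, "status": "IN_REVIEW"}  # stubbed lookup
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }

if __name__ == "__main__":
    # Local smoke test with a fake event.
    print(handler({"pathParameters": {"loanId": "123"}}, None))
```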
We’re looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows.
- Collaborate with teams to gather requirements and build scalable solutions.
- Ensure data governance, security, and optimal performance of systems.
- Mentor junior engineers and drive end-to-end project delivery.
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects.
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms.
- Expertise in big data tools (e.g., Apache Spark, Kafka).
- Excellent communication skills and leadership abilities.
Preferred: Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices.
We are in search of a proficient Java Principal Engineer with a minimum of 10 years' experience in designing and developing Java applications. The ideal candidate will demonstrate a deep understanding of Java technologies, including Java EE, Spring Framework, and Hibernate. Proficiency in database technologies such as MySQL, Oracle, or PostgreSQL is essential, along with a proven track record of delivering high-quality, scalable, and efficient Java solutions.
We are looking for you!
You are a team player and a get-it-done person: intellectually curious, customer-focused, self-motivated, and responsible, able to work under pressure with a positive attitude. You have the zeal to think differently, understand that a career is a journey, and make the right choices. You must have experience in creating visually compelling designs that effectively communicate our message and engage our target audience. The ideal candidate is creative, proactive, a go-getter, and motivated to look for ways to add value to their work.
As an ideal candidate for the Java Lead position, you bring a wealth of experience and expertise in Java development, combined with strong leadership qualities. Your proven track record showcases your ability to lead and mentor teams to deliver high-quality, enterprise-grade applications.
Your technical proficiency and commitment to excellence make you a valuable asset in driving innovation and success within our development projects. You possess a team-oriented mindset and a "get-it-done" attitude, inspiring your team members to excel and collaborate effectively.
You have a proven ability to lead mid to large size teams, emphasizing a quality-first approach and ensuring that projects are delivered on time and within scope. As a Java Lead, you are responsible for overseeing project planning, implementing best practices, and driving technical solutions that align with business objectives.
You collaborate closely with development managers, architects, and cross-functional teams to design scalable and robust Java applications.
Your proactive nature and methodical approach enable you to identify areas for improvement, mentor team members, and foster a culture of continuous learning and growth.
Your leadership style, technical acumen, and dedication to delivering excellence make you an ideal candidate to lead our Java development initiatives and contribute significantly to the success and innovation of our organization.
What You Will Do:
- Design and development of RESTful Web Services.
- Hands-on database experience (Oracle / PostgreSQL / MySQL / SQL Server).
- Hands-on experience developing web applications leveraging the Spring Framework.
- Hands-on experience developing microservices leveraging Spring Boot.
- Experience with cloud platforms (e.g., AWS, Azure) and containerization technologies.
- Continuous Integration tools (Jenkins & GitLab) and CI/CD tools.
- Strong believer in and follower of agile methodologies, with an emphasis on quality- and standards-based development.
- Architect, design, and implement complex software systems using relevant technologies (e.g., Java, Python, Node.js).
What we need?
- B.Tech in Computer Science or equivalent
- Minimum 10+ years of relevant experience in Java/J2EE technologies
- Experience in building backend APIs using the Spring Boot framework, Spring DI, and Spring AOP
- Real-time messaging integration using Kafka or a similar framework
- Experience in at least one database: Oracle, SQL Server, or PostgreSQL
- Previous experience managing and leading high-performing software engineering teams.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.

As a Solution Architect, you will collaborate with our sales, presales and COE teams to provide technical expertise and support throughout the new business acquisition process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.
You thrive in high-pressure environments, maintaining a positive outlook and understanding that career growth is a journey that requires making strategic choices. You possess good communication skills, both written and verbal, enabling you to convey complex technical concepts clearly and effectively. You are a team player and a customer-focused, self-motivated, responsible individual who can work under pressure with a positive attitude. You must have experience in managing and handling RFPs/RFIs, client demos and presentations, and converting opportunities into winning bids. You possess a strong work ethic, a positive attitude, and enthusiasm for embracing new challenges. You can multi-task and prioritize (good time management skills) and are willing to learn. You should be able to work independently with little or no supervision. You should be process-oriented and methodical, and demonstrate a quality-first approach.
The ability to convert clients' business challenges and priorities into winning proposals/bids through excellence in technical solutioning will be the key performance indicator for this role.
What you’ll do
- Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions.
- Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
- Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
- Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.
- Design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new-generation analytics platforms like Microsoft Fabric.
- Translate business needs into technical solutions by designing secure, scalable, and performant data architectures on cloud platforms.
- Select and recommend appropriate data services (e.g., Fabric, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Power BI) to meet specific data storage, processing, and analytics needs.
- Develop and recommend data models that optimize data access and querying. Design and implement data pipelines for efficient data extraction, transformation, and loading (ETL/ELT) processes.
- Ability to understand Conceptual/Logical/Physical Data Modelling.
- Choose and implement appropriate data storage, processing, and analytics services based on specific data needs (e.g., data lakes, data warehouses, data pipelines).
- Understand and recommend data governance practices, including data lineage tracking, access control, and data quality monitoring.
What you will Bring
- 10+ years of working in data analytics and AI technologies from consulting, implementation and design perspectives
- Certifications in data engineering, analytics, cloud, AI will be a certain advantage
- Bachelor’s in engineering/ technology or an MCA from a reputed college is a must
- Prior experience of working as a solution architect during presales cycle will be an advantage
Soft Skills
- Communication Skills
- Presentation Skills
- Flexible and Hard-working
Technical Skills
- Knowledge of Presales Processes
- Basic understanding of business analytics and AI
- High IQ and EQ
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
As a Lead Java Developer, you will take charge of driving the development and delivery of high-quality Java-based applications. Your role will involve leading a team of developers, providing technical guidance, and overseeing the entire software development life cycle. With your deep understanding of Java programming and related frameworks, you will design and implement scalable and efficient solutions that meet the project requirements. Your strong problem-solving skills and attention to detail will ensure the code quality and performance of the applications. Additionally, you will stay updated with the latest industry trends and best practices to improve the development processes continuously and contribute to the success of the team.
What You Will Do:
- Design and development of RESTful Web Services.
- Hands-on database experience (Oracle / PostgreSQL / MySQL / SQL Server).
- Hands-on experience developing web applications leveraging the Spring Framework.
- Hands-on experience developing microservices leveraging Spring Boot.
- Experience with cloud platforms (e.g., AWS, Azure) and containerization technologies.
- Continuous Integration tools (Jenkins & GitLab) and CI/CD tools.
- Strong believer in and follower of agile methodologies, with an emphasis on quality- and standards-based development.
- Architect, design, and implement complex software systems using relevant technologies (e.g., Java, Python, Node.js).
What we need?
- B.Tech in Computer Science or equivalent
- Minimum 8+ years of relevant experience in Java/J2EE technologies
- Experience in building backend APIs using the Spring Boot framework, Spring DI, and Spring AOP
- Real-time messaging integration using Kafka or a similar framework
- Experience in at least one database: Oracle, SQL Server, or PostgreSQL
- Previous experience managing and leading high-performing software engineering teams.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
POSITION: Sr. DevOps Engineer
Job Type: Work From Office (5 days)
Location: Sector 16A, Film City, Noida / Mumbai
Relevant Experience: Minimum 4+ years
Salary: Competitive
Education: B.Tech
About the Company: Devnagri is an AI company dedicated to personalizing business communication and making it hyper-local to attract non-English speakers. We address the significant gap in internet content availability for most of the world’s population who do not speak English. For more details, visit www.devnagri.com
We seek a highly skilled and experienced Senior DevOps Engineer to join our dynamic team. As a key member of our technology department, you will play a crucial role in designing and implementing scalable, efficient and robust infrastructure solutions with a strong focus on DevOps automation and best practices.
Roles and Responsibilities
- Design, plan, and implement scalable, reliable, secure, and robust infrastructure architectures
- Manage and optimize cloud-based infrastructure components
- Architect and implement containerization technologies such as Docker and Kubernetes
- Implement the CI/CD pipelines to automate the build, test, and deployment processes
- Design and implement effective monitoring and logging solutions for applications and infrastructure. Establish metrics and alerts for proactive issue identification and resolution
- Work closely with cross-functional teams to troubleshoot and resolve issues.
- Implement and enforce security best practices across infrastructure components
- Establish and enforce configuration standards across various environments.
- Implement and manage infrastructure using Infrastructure as Code principles
- Leverage tools like Terraform for provisioning and managing resources.
- Stay abreast of industry trends and emerging technologies.
- Evaluate and recommend new tools and technologies to enhance infrastructure and operations
Must have Skills:
Cloud (AWS & GCP), Redis, MongoDB, MySQL, Docker, Bash scripting, Jenkins, Prometheus, Grafana, ELK Stack, Apache, Linux
Good to have Skills:
Kubernetes, Collaboration and Communication, Problem Solving, IAM, WAF, SAST/DAST
Interview Process:
Screening and shortlisting >> 3 technical rounds >> 1 managerial round >> HR closure
Please apply with a short success story about your journey in DevOps and tech.
Cheers
For more details, visit our website- https://www.devnagri.com
Unstop (Formerly Dare2Compete) is looking for Frontend and Full Stack Developers. Developer responsibilities include building our application from concept to completion from the bottom up, fashioning everything from the home page to site layout and function.
Requirements:-
- Write well-designed, testable, efficient code by using the best software development practices
- Integrate data from various back-end services and databases
- Gather and refine specifications and requirements based on technical needs
- Be responsible for maintaining, expanding, and scaling our products
- Stay plugged into emerging technologies/industry trends and apply them into operations and activities
- End-to-end management and coding of all our products and services
- To make products modular, flexible, scalable and robust
Tech Skill:-
- Angular 10 or later
- PHP Laravel
- NodeJS
- MYSQL 8
- NoSQL DB
- Amazon AWS services – EC2, WAF, EBS, SNS, SES, Lambda, Fargate, etc.
- The whole ecosystem of AWS
Qualifications:-
- Freshers and Candidates with a maximum of 10 years of experience in the technologies that we work with
- Proven working experience in programming – Full Stack
- Top-notch programming and analytical skills
- Must know and have experience in Angular (version 2 onwards)
- A solid understanding of how web applications work including security, session management, and best development practices
- Adequate knowledge of relational database systems, Object-Oriented Programming and web application development
- Ability to work and thrive in a fast-paced environment, learn rapidly and master diverse web technologies and techniques
- B.Tech in Computer Science or a related field or equivalent


Job Description:
We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with experience in deploying and managing applications on AWS.
Proficiency in Django Rest Framework (DRF) and a solid understanding of machine learning concepts and their practical applications are essential.
Key Responsibilities:
Develop and maintain web applications using Django and Flask frameworks.
Design and implement RESTful APIs using Django Rest Framework (DRF).
Deploy, manage, and optimize applications on AWS.
Develop and maintain APIs for AI/ML models and integrate them into existing systems.
Create and deploy scalable AI and ML models using Python.
Ensure the scalability, performance, and reliability of applications.
Write clean, maintainable, and efficient code following best practices.
Perform code reviews and provide constructive feedback to peers.
Troubleshoot and debug applications, identifying and fixing issues in a timely manner.
Stay up-to-date with the latest industry trends and technologies to ensure our applications remain current and competitive.
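For illustration, here is a minimal, hypothetical sketch of the kind of Django Rest Framework (DRF) API work described in the responsibilities above; the model, fields, and route name are placeholder assumptions, not part of this posting.

```python
# Minimal, hypothetical DRF sketch of a read-only API; names are placeholders.
from django.db import models
from rest_framework import routers, serializers, viewsets


class Prediction(models.Model):
    label = models.CharField(max_length=64)
    score = models.FloatField()
    created_at = models.DateTimeField(auto_now_add=True)


class PredictionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Prediction
        fields = ["id", "label", "score", "created_at"]


class PredictionViewSet(viewsets.ReadOnlyModelViewSet):
    queryset = Prediction.objects.order_by("-created_at")
    serializer_class = PredictionSerializer


router = routers.DefaultRouter()
router.register(r"predictions", PredictionViewSet)
# In urls.py: urlpatterns = [path("api/", include(router.urls))]
```

Registered with a DefaultRouter as above, this would expose list and detail endpoints under /api/predictions/.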
Required Skills and Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field.
3+ years of professional experience as a Python Developer.
Proficient in Python with a strong understanding of its ecosystem.
Extensive experience with Django and Flask frameworks.
Hands-on experience with AWS services, including but not limited to EC2, S3, RDS, Lambda, and CloudFormation.
Strong knowledge of Django Rest Framework (DRF) for building APIs.
Experience with machine learning libraries and frameworks, such as scikit-learn, TensorFlow, or PyTorch.
Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
Familiarity with front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
Excellent problem-solving skills and the ability to work independently and as part of a team.
Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.
We are seeking a talented DevOps Engineer to join our dynamic IT team. As a DevOps Engineer, you will play a crucial role in streamlining our software development and deployment processes. You will collaborate closely with development and operations teams to automate and improve our infrastructure and application delivery.
Requirements :
⦁ 4+ years of hands-on experience with DevOps
⦁ Proficiency in AWS services (EC2, S3, RDS, etc.).
⦁ Proficiency in Kubernetes and should have knowledge of ArgoCD.
⦁ Should have experience with DevOps tools (Jenkins, Docker, GitHub CI, Terraform, Kubernetes, etc.).
⦁ Strong scripting skills (Bash, Python).
⦁ Understanding of cloud computing concepts.
⦁ Excellent problem-solving and troubleshooting abilities.
⦁ Experience with configuration management tools (Ansible, Puppet, Chef).
⦁ Experience with monitoring and logging tools (ELK Stack, Prometheus, Grafana).
Responsibilities:
⦁ Build and maintain the CI/CD pipeline using DevOps tools.
⦁ Automate infrastructure provisioning and management on AWS.
⦁ Administer and troubleshoot Linux-based systems.
⦁ Monitor system performance and identify potential issues.
⦁ Collaborate with development teams to ensure smooth application deployment.
⦁ Implement security best practices and maintain system integrity.
⦁ Troubleshoot and resolve technical issues efficiently.
⦁ Stay updated with the latest DevOps trends and technologies.
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.


Job Profile : Python Developer
Job Location : Ahmedabad, Gujarat - On site
Job Type : Full time
Experience - 1-3 Years
Key Responsibilities:
Design, develop, and maintain Python-based applications and services.
Collaborate with cross-functional teams to define, design, and ship new features.
Write clean, maintainable, and efficient code following best practices.
Optimize applications for maximum speed and scalability.
Troubleshoot, debug, and upgrade existing systems.
Integrate user-facing elements with server-side logic.
Implement security and data protection measures.
Work with databases (SQL/NoSQL) and integrate data storage solutions.
Participate in code reviews to ensure code quality and share knowledge with the team.
Stay up-to-date with emerging technologies and industry trends.
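To make the API-building responsibilities above concrete, here is a minimal, hypothetical FastAPI sketch (FastAPI is one of the frameworks named in the requirements below); the Item resource and in-memory store are placeholders for illustration only.

```python
# Hypothetical FastAPI sketch; the resource and in-memory store are placeholders.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

ITEMS = {}  # in-memory store standing in for a real database integration


class Item(BaseModel):
    name: str
    price: float


@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item):
    if item_id in ITEMS:
        raise HTTPException(status_code=409, detail="Item already exists")
    ITEMS[item_id] = {"name": item.name, "price": item.price}
    return ITEMS[item_id]


@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in ITEMS:
        raise HTTPException(status_code=404, detail="Item not found")
    return ITEMS[item_id]
```

Run locally with `uvicorn app:app --reload`, assuming the file is named app.py.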
Requirements:
1-3 years of professional experience in Python development.
Strong knowledge of Python frameworks such as Django, Flask, or FastAPI.
Experience with RESTful APIs and web services.
Proficiency in working with databases (e.g., PostgreSQL, MySQL, MongoDB).
Familiarity with front-end technologies (e.g., HTML, CSS, JavaScript) is a plus.
Experience with version control systems (e.g., Git).
Knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) is a plus.
Understanding of containerization tools like Docker and orchestration tools like Kubernetes is good to have
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
Good to Have:
Experience with data analysis and visualization libraries (e.g., Pandas, NumPy, Matplotlib).
Knowledge of asynchronous programming and event-driven architecture.
Familiarity with CI/CD pipelines and DevOps practices.
Experience with microservices architecture.
Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch) is a plus.
Hands-on experience in RAG and LLM model integration would be a plus.


Job Description:
We are looking for a Python Lead who has the following experience and expertise -
- Proficiency in developing RESTful APIs using the Flask, Django, or FastAPI framework
- Hands-on experience using ORMs for database query mapping (a short sketch follows this list)
- Writing unit test cases for code coverage and API testing
- Experience using Postman to validate APIs; experienced with Git workflows and code management, including ticket management systems like JIRA
- Have at least 2 years of experience in any cloud platform
- Hands-on leadership experience
- Experience of direct communication with the stakeholders
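A brief, hypothetical illustration of the "ORMs for database query mapping" point above, written in SQLAlchemy 2.x style; the User model and the in-memory SQLite URL are assumptions made for the sketch only.

```python
# Hypothetical SQLAlchemy 2.x ORM sketch; table and field names are placeholders.
from sqlalchemy import Integer, String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    pass


class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    email: Mapped[str] = mapped_column(String(255), unique=True)


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(email="dev@example.com"))
    session.commit()
    user = session.scalar(select(User).where(User.email == "dev@example.com"))
    print(user.id, user.email)
```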
Skills and Experience:
- Good academics
- Strong teamwork and communications
- Advanced troubleshooting skills
- Ready and immediately available candidates will be preferred.
Job Title- Senior Full Stack Web Developer
Job location- Bangalore/Hybrid
Availability- Immediate Joiners
Experience Range- 5-8yrs
Desired skills - Java, AWS, SQL/NoSQL, JavaScript, Node.js (good to have)
We are looking for a Senior Full Stack Web Developer (Java) with 8-10 years of experience.
- Working on different aspects of the core product and associated tools (server-side or user interfaces, depending on the team you'll join)
- Expertise as a full stack software engineer on large-scale, complex software systems, with 8+ years of experience in technologies such as Java, relational and non-relational databases, Node.js, and AWS Cloud
- Assisting with in-life maintenance, testing, debugging and documentation of deployed services
- Coding & designing new features
- Creating the supporting functional and technical specifications
- Deep understanding of system architecture and distributed systems
- Stay updated with the latest services, tools, and trends, and implement innovative solutions that contribute to the company's growth

Job Title- Senior Full Stack Developer
Job location- Bangalore/Hybrid
Availability- Immediate Joiners
Experience Range- 8-10 yrs
Desired skills - Node.js, Vue.JS / AngularJS / React, AWS, Javascript, Typescript
Requirements
● Total 8+ years of IT experience and 5+ years of experience in full-stack development working with JavaScript, Typescript
● Experience with modern web frameworks such as Vue.JS / AngularJS / React
● Extensive experience with back-end technologies - Node.js, AWS, Kubernetes (K8s), PostgreSQL, Redis
● Demonstrated proficiency in designing, developing, and deploying microservices-based applications. Ability to architect and implement scalable, loosely coupled, and maintainable microservices.
● Having experience in implementing CI/CD pipelines for automated testing, building, and deploying applications.
● Ability to lead end-to-end projects, working with other team members across the world
● Deep understanding of system architecture and distributed systems
● Enjoy working in a fast-paced environment
● Able to work collaboratively within different teams and with differing levels of seniority
What you will bring:
● Work closely with cross-functional teams such as Development, Operations, and Product Management to ensure seamless integration of new features and services with a focus on reliability, scalability, and performance
● Experience with back-end technologies
● Good knowledge and understanding of client-side architecture
● Capable of managing time well and working efficiently and independently
● Ability to collaborate with multi-functional teams
● Excellent communication skills
Nice to Have
● Bachelor's or Master's degree in CS or related field/experience
Job Title- Java Developer
Exp Range- 5-8 yrs
Location- Bangalore/ Hybrid
Desired skill- Java 8, Microservices (Must), AWS, Kafka, Kubernetes
What you will bring
● Strong core Java, concurrency and server-side experience
● 5+ Years of experience with hands-on coding.
● Strong Java8 and Microservices. (Must)
● Should have good understanding on AWS/GCP
● Kafka, AWS stack/Kubernetes
● An understanding of Object Oriented Design and standard design patterns.
● Experience of multi-threaded, 3-tier architectures/Distributed architectures, web services and caching.
● A familiarity with SQL databases
● Ability and willingness to work in a global, fast-paced environment.
● Flexible with the ability to adapt working style to meet objectives.
● Excellent communication and analytical skills
● Ability to effectively communicate with team members
● Experience in the following technologies would be beneficial but not essential, SpringBoot, AWS, Kubernetes, Terraform, Redis
Job Title- Senior Java Developer
Exp Range- 8-10 yrs
Location- Bangalore/ Hybrid
Desired skill- Java 8, Microservices (Must), AWS, Kafka, Kubernetes
What you will bring:
● Strong core Java, concurrency and server-side experience
● 8 + Years of experience with hands-on coding.
● Strong Java8 and Microservices. (Must)
● Should have good understanding on AWS/GCP
● Kafka, AWS stack/Kubernetes
● An understanding of Object Oriented Design and standard design patterns.
● Experience of multi-threaded, 3-tier architectures/Distributed architectures, web services and caching.
● A familiarity with SQL databases
● Ability and willingness to work in a global, fast-paced environment.
● Flexible with the ability to adapt working style to meet objectives.
● Excellent communication and analytical skills
● Ability to effectively communicate with team members
● Experience in the following technologies would be beneficial but not essential, SpringBoot, AWS, Kubernetes, Terraform, Redis

Job Title- Frontend Developer
Job location- Bangalore/Hybrid
Availability- Immediate Joiners
Experience Range- 5-8yrs
Desired skills - Vue.JS / AngularJS / React, AWS, Javascript, Typescript
Requirements:
● Total 5+ years of IT experience and 5+ years of experience in front end development working with JavaScript, Typescript
● Experience with modern web frameworks such as Vue.JS / AngularJS / React
● Extensive experience with cloud technologies AWS
● Demonstrated proficiency in designing, developing, and deploying microservices-based applications. Ability to architect and implement scalable, loosely coupled, and maintainable microservices.
● Having experience in implementing CI/CD pipelines for automated testing, building, and deploying applications.
● Ability to lead end-to-end projects, working with other team members across the world
● Deep understanding of system architecture and distributed systems
● Enjoy working in a fast-paced environment
● Able to work collaboratively within different teams and with differing levels of seniority
What you will bring:
● Work closely with cross-functional teams such as Development, Operations, and Product Management to ensure seamless integration of new features and services with a focus on reliability, scalability, and performance
● Experience with back-end technologies
● Good knowledge and understanding of client-side architecture
● Capable of managing time well and working efficiently and independently
● Ability to collaborate with multi-functional teams
● Excellent communication skills
Nice to Have
● Bachelor's or Master's degree in CS or related field/experience
Job Title- DevOps Engineer
Job location- Pune/Remote
Availability- Immediate Joiners
Experience Range- 6-8 yrs
Desired skills - Docker, Kubernetes, Terraform, SQL.
As part of this role, the candidate will learn, maintain, and upgrade all aspects of the Business Support Systems (BSS), including billing, product catalog, ordering, shipping, and more. This role will focus on maintaining and upgrading the infrastructure running the applications, and will also contribute to application code development.
Specific Requirements
- Excellent communication skills (written and oral)
- Experience working with Linux-based systems (RedHat, Rocky, Docker, etc.)
- Experience with Apache & WebLogic platforms
- Understanding of various database architectures (SQL / Oracle, Mongo, etc.)
- Working knowledge of GIT, SVN, JIRA, AWS, Puppet, SOAPUI
- Develop and maintain escalation processes for monitoring and alerting
- Create and maintain runbooks for supported applications, adhering to best practices.
- Ability to troubleshoot complex issues and implement solutions accordingly
- Experience maintaining and supporting a large scale production environment
- Work with internal and external teams to ensure application and process compliance
- Experience partnering and collaborating with both on and off-shore teams and team members
- Experience working in an Agile environment.
- Work closely with the development team to plan and execute maintenance operations.
- Suggest and implement any infrastructure or architecture changes to improve application performance and stability. Identify any infrastructure or architecture areas for cost saving.
- Participate in a 24/7/365 on call rotation, responding to events as needed to troubleshoot and resolve.
- Participate in internal and external compliance audits (SOX, PCI, etc.)
- Identify and execute improvements to minimize the amount of on call events received by the team.
- Create and maintain application build and deployment scripts.
- Ability to analyze data from application databases.
- Experience in process automation, build automation, and continuous integration using Jenkins, Code Deploy, Git Deploy, etc.
Additional Skills:
- Knowledge of billing platforms
- Experienced in working on EJB.
- Understanding of JMS or other message processing platforms.
- Experience with SSL certificate renewals and installation

About the job
Location: Bangalore, India
Job Type: Full-Time | On-Site
Job Description
We are looking for a highly skilled and motivated Python Backend Developer to join our growing team in Bangalore. The ideal candidate will have a strong background in backend development with Python, deep expertise in relational databases like MySQL, and hands-on experience with AWS cloud infrastructure.
Key Responsibilities
- Design, develop, and maintain scalable backend systems using Python.
- Architect and optimize relational databases (MySQL), including complex queries and indexing.
- Manage and deploy applications on AWS cloud services (EC2, S3, RDS, DynamoDB, API Gateway, Lambda).
- Automate cloud infrastructure using CloudFormation or Terraform.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Mentor junior developers and contribute to a culture of technical excellence.
- Proactively identify issues and provide solutions to challenging backend problems.
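As an illustration of the AWS-backed backend work described above (API Gateway, Lambda, DynamoDB), here is a minimal, hypothetical Lambda handler; the table name and payload shape are placeholders, not part of the role.

```python
# Hypothetical API Gateway -> Lambda -> DynamoDB handler; names are placeholders.
import json
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-orders")  # placeholder table name


def lambda_handler(event, context):
    """Persist the POSTed JSON body as a new item and return its generated id."""
    payload = json.loads(event.get("body") or "{}")
    item = {"order_id": str(uuid.uuid4()), **payload}
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"order_id": item["order_id"]}),
    }
```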
Mandatory Requirements
- Minimum 3 years of professional experience in Python backend development.
- Expert-level knowledge in MySQL database creation, optimization, and query writing.
- Strong experience with AWS services, particularly EC2, S3, RDS, DynamoDB, API Gateway, and Lambda.
- Hands-on experience with infrastructure as code using CloudFormation or Terraform.
- Proven problem-solving skills and the ability to work independently.
- Demonstrated leadership abilities and team collaboration skills.
- Excellent verbal and written communication.
Position Title: Senior Software Engineer, Cloud Delivery
Location: India-Remote.
About Deltek:
Deltek is a multinational product-based software development company headquartered in Herndon, Virginia (USA) with over 4,200 employees operating across 13 countries. We are a software and information solutions provider that specializes in serving project-based businesses. The company focuses on delivering enterprise software and information solutions to industries such as government contracting, architecture and engineering, professional services, and more. Deltek's solutions are designed to address the unique challenges and requirements of organizations that operate on a project-centric model. We have more than 30,000 customers in 80 countries, and we specialize in Project Management Software, ERP-based Project Accounting Software, and Federal Government Contracting.
• Deltek was recognized by Forbes as one of America's Best Mid-Sized Employers
• We hold the 8th spot on Glassdoor’s 2024 list of Best Places to Work globally
• More than 30,000 customers in 80 countries
• 98% of the top 50 federal contractors use our products
• More than 10,000 small businesses are powered by Deltek Solutions
• Top 5 of the Global Accounting and Consulting firms use our products
Learn more at: https://www.deltek.com/en
Position Responsibilities:
Deltek is looking for software engineers who are passionate about coding and technology to work on our wide variety of products and platforms. Our SaaS applications are used globally by thousands of customers and millions of users. We want to build out our team with more engineers who are also passionate about building great applications and care about giving our customers a great user experience. Working at Deltek is an opportunity to take on the unique challenges of a successful, large scale SaaS business application that existed before "web applications" as we know them today even existed. We challenge ourselves to solve customer problems on a flexible & customizable platform, while we support scaling out from the smallest mom-and-pop shop to the world's largest multinational corporations, and constantly build and evolve our easy-to-use customer-first product. It is a "never-stop-learning" environment, where you will be working with a strong technical team and integrate cutting edge technology with tried and tested core platforms. Our technology stack is diverse and constantly evolving, but currently contains:
• JavaScript
• Node.js, including Express.js
• C# / .NET
• Python
• Docker & Kubernetes
• PostgreSQL
• Oracle Cloud Infrastructure (OCI) technologies
• Amazon Web Services (AWS) technologies
• Terraform
Key Responsibilities:
• Coding skills across the entire stack – UI, Services, Data, and Infrastructure.
• A technology toolbelt that includes leading edge languages and tools with knowledge on when and why to use them.
• Having experience with OCI is an asset, but any cloud infrastructure platform is a strong starting point (e.g., AWS, Microsoft Azure, Google Cloud)
• You adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting.
• You are pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users.
• Desire and ability to take ownership of an application as it runs in production, at scale, for millions of users; quickly respond to issues and fix problems as they are found.
• Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand.
As a team, we believe in doing all of this while having fun! A core part of our team culture involves supporting an environment where everyone can do their best work. We expect all our team members to support this culture.
Qualifications:
• Education: Bachelor’s degree in computer science or a related field, or equivalent experience.
• Experience:
o Minimum of 7 years of overall experience in software development and/or infrastructure engineering or similar role.
o 3+ years of experience applying an automation-first approach to problem-solving using configuration management tools and scripting.
o Strong experience and expertise with at least one cloud platform, with a preference for OCI (or AWS, Azure, GCP).
o Experience with one programming language, with a preference for Node.js or other similar languages.
o Experience with one SQL database, with a preference for PostgreSQL or other similar databases.
o Hands-on with TDD, CI/CD pipelines, and networking.
• Technical Skills:
o Infrastructure-as-Code mentality with tools like Terraform and/or Ansible.
o Demonstrated experience in building high-quality products or services.
o Experience delivering successful web or cloud projects.
• Soft Skills:
o Strong communication skills: ability to explain work, ask great questions, listen to peers and customers, influence without authority, and give and receive feedback.
o Passion for building software that solves real problems for real people.
o Commitment to writing well-designed, easy-to-test, and maintainable code.
SHIFT: 12pm IST – 8pm IST
At Verto, we’re passionate about helping businesses in Emerging markets reach the world. What first started life as a FX solution for trading Nigerian Naira has now become a market-leading platform, changing the way thousands of businesses transfer money in and out of Emerging Markets.
We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Millions of companies a day have to juggle long settlement periods, high transaction fees and issues accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in realising the opportunity and need to solve for emerging markets. We’re backed by world-class investors including Y-Combinator, Quona and MEVP, power payments for some of the most disruptive start-ups in the world and have a list of accolades from leading publications including being voted ‘Fintech Start Up of the Year’ at Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.
We’re seeking a driven and results-oriented Senior Data Engineer who is excited to help build out a best-in-class Data Platform. In this role, you will be expected to achieve key milestones such as improving on our existing Data Warehouse, implementing a CI/CD framework, and enabling core technologies such as dbt and git. You will play a pivotal role in enabling long-term scalability and efficiency when it comes to all things data, and leveraging your expertise in Data Engineering to drive measurable impact.
In this role you will:
- Conceptualize, maintain and improve the data architecture
- Evaluating design and operational cost-benefit tradeoffs within systems
- Design, build, and launch collections of data models that support multiple use cases across different products or domains
- Solve our most challenging data integration problems, optimising ELT pipelines, frameworks, sourcing from structured and unstructured data sources
- Implementing CI/CD frameworks
- Create and contribute to frameworks that improve the accuracy, efficiency and general data integrity
- Design and execute ‘best-in-class’ schema design
- Implementing other potential data tools
- Define and manage refresh schedules, load-balancing and SLA for all data sets in allocated areas of ownership
- Collaborate with engineers, product managers, and data analysts to understand data needs, identify and resolve issues and help set best practices for efficient data capture
- Determine and implement the data governance model and processes within ownership realm (GDPR, PII, etc.)
You’ll be responsible for:
- Taking ownership of the data engineering process - from project scoping, design, communication, execution and conclusion
- Support and strengthen data infrastructure together with data team and engineering
- Support organisation in understanding the importance of data and advocate for best-in-class infrastructure
- Mentoring, educating team members on best-in-class DE practices
- Prioritising workload effectively
- Support quarterly and half-year planning from Data Engineering perspective
Note: This is a fast-growing company, the ideal candidate will be comfortable leading other data engineers in the future. However, this is currently a small data team, you may be asked to contribute to projects outside of the typical Data Engineering role. This will most probably involve analytics engineering responsibilities such as maintenance and improvement of ‘core’ tables (transactions, companies, product/platform management).
Skills and Qualifications
- University degree; ideally in data engineering, software engineering, computer science-engineering, numerate or similar
- 7+ years of data engineering experience or equivalent
- Expert experience building data warehouses and ETL pipelines
- Expert experience of SQL, python, git, dbt (incl. query efficiency and optimization)
- Expert experience of Cloud Data Platforms (AWS, Snowflake and/or Databricks) → Qualification preferred, not mandatory
- Significant experience of Automation and Integrations tools (FiveTran, Airflow, Astronomer or similar)
- Significant experience with IaC (Infrastructure as Code) tools (Terraform, Docker, Kubernetes or similar)
- Significant experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI or similar)
Preferred Experience:
- Experience with real time data pipelines (AWS Kinesis, Kafka, Spark)
- Experience with observability tools (Metaplane, MonteCarlo, Datadog or similar)
- Experience within FinTech/Finance/FX preferred
At Verto, we’re passionate about helping businesses in Emerging markets reach the world. What first started life as a FX solution for trading Nigerian Naira has now become a market-leading platform, changing the way thousands of businesses transfer money in and out of Emerging Markets.
We believe that where you do business shouldn’t determine how successful you are, or your ability to scale. Millions of companies a day have to juggle long settlement periods, high transaction fees and issues accessing liquidity in order to trade with African businesses. We’re on a mission to change this by creating equal access to easy payment and liquidity solutions that are already a given in developed markets.
We’re not alone in realising the opportunity and need to solve for emerging markets. We’re backed by world-class investors including Y-Combinator, Quona and MEVP, power payments for some of the most disruptive start-ups in the world and have a list of accolades from leading publications including being voted ‘Fintech Start Up of the Year’ at Fintech Awards London 2022.
Each year we process billions of dollars of payments and provide companies with solutions which help them to save money, automate processes and grow, but we’re only just getting started.
We’re seeking a driven and results-oriented Data Engineer who is excited to help build out a best-in-class Data Platform. In this role, you will be expected to achieve key milestones such as improving on our existing Data Warehouse, implementing a CI/CD framework, and enabling core technologies such as dbt and git. You will play a pivotal role in enabling long-term scalability and efficiency when it comes to all things data, and leveraging your expertise in Data Engineering to drive measurable impact.
In this role you will:
- Conceptualize, maintain and improve the data architecture
- Evaluating design and operational cost-benefit tradeoffs within systems
- Design, build, and launch collections of data models that support multiple use cases across different products or domains
- Solve our most challenging data integration problems, optimising ELT pipelines, frameworks, sourcing from structured and unstructured data sources
- Implementing CI/CD frameworks
- Create and contribute to frameworks that improve the accuracy, efficiency and general data integrity
- Design and execute ‘best-in-class’ schema design
- Implementing other potential data tools
- Define and manage refresh schedules, load-balancing and SLA for all data sets in allocated areas of ownership
- Collaborate with engineers, product managers, and data analysts to understand data needs, identify and resolve issues and help set best practices for efficient data capture
- Determine and implement the data governance model and processes within ownership realm (GDPR, PII, etc.)
You’ll be responsible for:
- Taking ownership of the data engineering process - from project scoping, design, communication, execution and conclusion
- Support and strengthen data infrastructure together with data team and engineering
- Support organisation in understanding the importance of data and advocate for best-in-class infrastructure
- Mentoring, educating team members on best-in-class DE practices
- Prioritising workload effectively
- Support quarterly and half-year planning from Data Engineering perspective
Note: This is currently a small data team, you may be asked to contribute to projects outside of the typical Data Engineering role. This will most probably involve analytics engineering responsibilities such as maintenance and improvement of ‘core’ tables (transactions, companies, product/platform management).
Skills and Qualifications
- University degree; ideally in data engineering, software engineering, computer science-engineering, numerate or similar
- 4+ years of data engineering experience or equivalent
- Expert experience building data warehouses and ETL pipelines
- Expert experience of SQL, python, git, dbt (incl. query efficiency and optimization)
- Expert experience of Cloud Data Platforms (AWS, Snowflake and/or Databricks) → Qualification preferred, not mandatory
- Significant experience of Automation and Integrations tools (FiveTran, Airflow, Astronomer or similar)
- Significant experience with IaC (Infrastructure as Code) tools (Terraform, Docker, Kubernetes or similar)
- Significant experience with CI/CD tools (Jenkins, GitHub Actions, CircleCI or similar)
Preferred Experience:
- Experience with real time data pipelines (AWS Kinesis, Kafka, Spark)
- Experience with observability tools (Metaplane, MonteCarlo, Datadog or similar)
- Experience within FinTech/Finance/FX preferred

Job Description: Data Engineer
Position Overview:
Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.
Key Responsibilities
· Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis.
· Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
· Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
· Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
· Ensure data quality and consistency by implementing validation and governance practices.
· Work on data security best practices in compliance with organizational policies and regulations.
· Automate repetitive data engineering tasks using Python scripts and frameworks.
· Leverage CI/CD pipelines for deployment of data workflows on AWS.
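A small, hypothetical example of the kind of Python-plus-S3 transform step the responsibilities above describe; the bucket, keys, and column names are placeholders, and the Parquet write assumes pyarrow or fastparquet is installed.

```python
# Hypothetical S3-to-S3 transform step; bucket/key/column names are placeholders.
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")


def transform_orders(bucket: str, raw_key: str, curated_key: str) -> None:
    """Read a raw CSV from S3, clean it, and write it back as Parquet."""
    body = s3.get_object(Bucket=bucket, Key=raw_key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Example transformations: deduplicate and normalise a timestamp column.
    df = df.drop_duplicates()
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

    buf = io.BytesIO()
    df.to_parquet(buf, index=False)  # requires pyarrow or fastparquet
    s3.put_object(Bucket=bucket, Key=curated_key, Body=buf.getvalue())


# transform_orders("example-data-lake", "raw/orders.csv", "curated/orders.parquet")
```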
Required Skills and Qualifications
· Professional Experience: 5+ years of experience in data engineering or a related field.
· Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
· AWS Expertise: Hands-on experience with core AWS services for data engineering, such as:
· AWS Glue for ETL/ELT.
· S3 for storage.
· Redshift or Athena for data warehousing and querying.
· Lambda for serverless compute.
· Kinesis or SNS/SQS for data streaming.
· IAM Roles for security.
· Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
· Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
· DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
· Version Control: Proficient with Git-based workflows.
· Problem Solving: Excellent analytical and debugging skills.
Optional Skills
· Knowledge of data modeling and data warehouse design principles.
· Experience with data visualization tools (e.g., Tableau, Power BI).
· Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
· Exposure to other programming languages like Scala or Java.
Education
· Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Why Join Us?
· Opportunity to work on cutting-edge AWS technologies.
· Collaborative and innovative work environment.
Role: Senior Software Engineer - Backend
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Senior Backend Engineer with a minimum of 3 years of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that power our applications. You will work closely with cross-functional teams to ensure seamless integration between frontend and backend components, leveraging your expertise to architect scalable, secure, and high-performance solutions. As a senior team member, you will mentor junior developers and lead technical initiatives to drive innovation and excellence.
Annual Compensation: 12-18 LPA
Responsibilities:
- Lead the design, development, and maintenance of scalable and efficient backend systems and APIs.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases and NoSQL databases.
- Mentor and guide junior developers, fostering a culture of knowledge sharing and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and provide technical leadership in architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 3 years of proven experience as a Backend Engineer, with a strong portfolio of product-building projects.
- Strong proficiency in backend development using Java, Python, and JavaScript, with experience in building scalable and high-performance applications.
- Experience with popular backend frameworks and libraries for Java (e.g., Spring Boot) and Python (e.g., Django, Flask).
- Strong expertise in SQL and NoSQL databases (e.g., MySQL, MongoDB) with a focus on data modeling and scalability.
- Practical experience with caching mechanisms (e.g., Redis) to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
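To illustrate the caching requirement above, here is a minimal, hypothetical cache-aside pattern using redis-py; the key format, TTL, and the stand-in database accessor are assumptions made for the sketch.

```python
# Hypothetical cache-aside sketch with Redis; key names and TTLs are placeholders.
import json

import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def get_user_profile(user_id: int) -> dict:
    key = f"user:{user_id}:profile"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit

    profile = fetch_profile_from_db(user_id)    # assumed DB accessor
    cache.setex(key, 300, json.dumps(profile))  # cache for 5 minutes
    return profile


def fetch_profile_from_db(user_id: int) -> dict:
    # Stand-in for a real database query.
    return {"id": user_id, "name": "Example User"}
```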

At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required.
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus
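The automation-testing profile above centres on Python and REST API testing; a minimal, hypothetical pytest sketch follows (the base URL and response shape are assumptions, not from the posting). It would typically be run with `pytest -q`.

```python
# Hypothetical pytest REST API test; endpoint and response shape are placeholders.
import pytest
import requests

BASE_URL = "https://api.example.internal"  # assumed service under test


@pytest.fixture(scope="session")
def session():
    with requests.Session() as s:
        s.headers.update({"Accept": "application/json"})
        yield s


def test_health_endpoint_returns_ok(session):
    resp = session.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"
```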
About AiSensy
AiSensy is a WhatsApp-based Marketing & Engagement platform helping businesses like Skullcandy, Vivo, Rentomojo, Physicswallah, Cosco grow their revenues via WhatsApp.
- Enabling 100,000+ Businesses with WhatsApp Engagement & Marketing
- 400+ crore WhatsApp messages exchanged between businesses and users via AiSensy per year
- Working with top brands like Delhi Transport Corporation, Vivo, Physicswallah & more
- High Impact as Businesses drive 25-80% Revenues using AiSensy Platform
- Mission-Driven and Growth Stage Startup backed by Marsshot.vc, Bluelotus.vc & 50+ Angel Investors
Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀
What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation:
- Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins.
- Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management:
- Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code.
- Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security:
- Work extensively with AWS (Preferred) or other cloud platforms to build and maintain cloud infrastructure.
- Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting:
- Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime.
- Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation:
- Use Python or Bash scripting to automate repetitive DevOps tasks.
- Build internal tools for system health monitoring, logging, and debugging.
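As a hedged example of the "internal tools for system health monitoring" item above, here is a small, hypothetical Python health-check script; the endpoint URLs are placeholders.

```python
# Hypothetical health-check automation; endpoint URLs are placeholders.
import sys

import requests

ENDPOINTS = [
    "https://example.internal/api/health",
    "https://example.internal/worker/health",
]


def main() -> int:
    failures = 0
    for url in ENDPOINTS:
        try:
            resp = requests.get(url, timeout=5)
            ok = resp.ok
            status = "OK" if ok else f"HTTP {resp.status_code}"
        except requests.RequestException as exc:
            ok = False
            status = f"ERROR ({exc})"
        if not ok:
            failures += 1
        print(f"{url}: {status}")
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())  # non-zero exit lets a scheduler or alerting hook react
```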
What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (Preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation
Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 Technologies
➕ Hands-on knowledge of Database servers
➕ Experience on AWS Sagemaker, AWS Bedrock

Experience: 5-8 Years
Work Mode: Remote
Job Type: Fulltime
Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elastic Search, & AWS.
Role Overview:
We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elastic Search, AWS S3, Snowflake, and NFS.
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture
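To illustrate the Airflow-based pipeline work above, here is a minimal, hypothetical DAG in Airflow 2.x style; the dag_id and task bodies are placeholders rather than a real pipeline.

```python
# Hypothetical Airflow 2.x DAG sketch; task logic and names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")  # placeholder extract step


def load():
    print("load the transformed data into Snowflake")  # placeholder load step


with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load, once per day
```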
Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3+ years of experience with Snowflake data warehousing technologies.
- At least 3+ years of experience creating and maintaining Airflow ETL pipelines.
- Minimum 3+ years of professional experience with Python for data manipulation and automation.
- Working experience with Elastic Search and its application in data pipelines.
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
Must be:
- Based in Mumbai
- Comfortable with Work from Office
- Available to join immediately
Responsibilities:
- Manage, monitor, and scale production systems across cloud (AWS/GCP) and on-prem.
- Work with Kubernetes, Docker, Lambdas to build reliable, scalable infrastructure.
- Build tools and automation using Python, Go, or relevant scripting languages.
- Ensure system observability using tools like NewRelic, Prometheus, Grafana, CloudWatch, PagerDuty.
- Optimize for performance and low-latency in real-time systems using Kafka, gRPC, RTP.
- Use Terraform, CloudFormation, Ansible, Chef, Puppet for infra automation and orchestration.
- Load testing using Gatling, JMeter, and ensuring fault tolerance and high availability.
- Collaborate with dev teams and participate in on-call rotations.
Requirements:
- B.E./B.Tech in CS, Engineering or equivalent experience.
- 3+ years in production infra and cloud-based systems.
- Strong background in Linux (RHEL/CentOS) and shell scripting.
- Experience managing hybrid infrastructure (cloud + on-prem).
- Strong testing practices and code quality focus.
- Experience leading teams is a plus.


About NonStop io Technologies:
NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.
Brief Description:
We are looking for an Engineering Manager who combines technical depth with leadership strength. This role involves leading one or more product engineering pods, driving architecture decisions, ensuring delivery excellence, and working closely with stakeholders to build scalable web and mobile technology solutions. As a key part of our leadership team, you’ll play a pivotal role in mentoring engineers, improving processes, and fostering a culture of ownership, innovation, and continuous learning
Roles and Responsibilities:
● Team Management: Lead, coach, and grow a team of 15-20 software engineers, tech leads, and QA engineers
● Technical Leadership: Guide the team in building scalable, high-performance web and mobile applications using modern frameworks and technologies
● Architecture Ownership: Architect robust, secure, and maintainable technology solutions aligned with product goals
● Project Execution: Ensure timely and high-quality delivery of projects by driving engineering best practices, agile processes, and cross-functional collaboration
● Stakeholder Collaboration: Act as a bridge between business stakeholders, product managers, and engineering teams to translate requirements into technology plans
● Culture & Growth: Help build and nurture a culture of technical excellence, accountability, and continuous improvement
● Hiring & Onboarding: Contribute to recruitment efforts, onboarding, and learning & development of team members
Requirements:
● 8+ years of software development experience, with 2+ years in a technical leadership or engineering manager role
● Proven experience in architecting and building web and mobile applications at scale
● Hands-on knowledge of technologies such as JavaScript/TypeScript, Angular, React, Node.js, .NET, Java, Python, or similar stacks
● Solid understanding of cloud platforms (AWS/Azure/GCP) and DevOps practices
● Strong interpersonal skills with a proven ability to manage stakeholders and lead diverse teams
● Excellent problem-solving, communication, and organizational skills
● Nice to haves:
○ Prior experience in working with startups or product-based companies
○ Experience mentoring tech leads and helping shape engineering culture
○ Exposure to AI/ML, data engineering, or platform thinking
Why Join Us?:
● Opportunity to work on a cutting-edge healthcare product
● A collaborative and learning-driven environment
● Exposure to AI and software engineering innovations
● Excellent work ethics and culture
If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!
Job Opportunity: AWS Infrastructure Engineer
- Location: Anywhere, Permanent Remote
- Work Mode: Remote, Work from Home
- Payroll Company: Talpro India
- Experience Required: 4+ Years
- Notice Period: Immediate Joiner Only
Key Responsibilities:
- Cloud Infrastructure Design & Management:
  • Architect, deploy, and maintain AWS services like EC2, S3, VPC, RDS, IAM, Lambda
  • Build secure, scalable cloud environments on AWS; Azure/GCP exposure is a plus
- Security & Compliance:
  • Implement security best practices using IAM, GuardDuty, Security Hub, WAF
  • Apply patching and security updates for Linux and Windows systems
- Networking & Connectivity:
  • Configure and troubleshoot AWS networking (VPC, TGW, Route 53, VPNs, Direct Connect)
  • Manage hybrid environments and URL filtering solutions
- Server & Service Optimization:
  • Optimize Apache, NGINX, MySQL/PostgreSQL on Linux and Windows platforms
  • Ensure server health, availability, and performance
- Firewall & Access Control:
  • Hands-on with physical firewalls like Palo Alto and Cisco
- Support & Integration:
  • Provide L2/L3 support and ensure quick incident resolution
  • Collaborate with DevOps, application, and database teams
- Automation & Monitoring:
  • Automate using Terraform, CloudFormation, Bash, or Python
  • Monitor and optimize using CloudWatch, Trusted Advisor, and Cost Explorer
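A brief, hypothetical sketch of the "Automation & Monitoring" responsibilities above, creating a CloudWatch CPU alarm with boto3; the alarm name, threshold, and disabled actions are illustrative assumptions.

```python
# Hypothetical CloudWatch alarm automation; names and thresholds are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")


def create_cpu_alarm(instance_id: str, threshold: float = 80.0) -> None:
    """Create a CPU-utilization alarm for one EC2 instance."""
    cloudwatch.put_metric_alarm(
        AlarmName=f"high-cpu-{instance_id}",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=threshold,
        ComparisonOperator="GreaterThanThreshold",
        ActionsEnabled=False,  # wire up SNS actions in a real setup
    )


# create_cpu_alarm("i-0123456789abcdef0")
```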
Must-Have Skills:
- 4+ years of hands-on experience with AWS Cloud
- System administration for both Linux and Windows
- Expertise in AWS networking and security
- Proficiency with Terraform or CloudFormation
- Scripting skills in Bash or Python
- Experience managing Apache, NGINX, MySQL/PostgreSQL
- Familiarity with Palo Alto or Cisco firewalls
- Knowledge of AWS monitoring tools like CloudWatch and Trusted Advisor
Good to Have Skills:
- Exposure to multi-cloud environments like Azure or GCP
- RHCSA or MCSE certification
- Strong collaboration and DevOps integration skills
Certifications Required:
- AWS Certified Solutions Architect (Mandatory)
- RHCSA or MCSE (Preferred)
Summary:
We are looking for an AWS Infrastructure Engineer to manage and secure scalable cloud environments. This role requires hands-on AWS experience, strong system admin skills, and automation expertise. If you’re certified, skilled in modern cloud tooling, and ready to work in a dynamic tech environment, we’d love to connect with you.

What you’ll do
- Design, build, and maintain robust ETL/ELT pipelines for product and analytics data
- Work closely with business, product, analytics, and ML teams to define data needs
- Ensure high data quality, lineage, versioning, and observability
- Optimize performance of batch and streaming jobs
- Automate and scale ingestion, transformation, and monitoring workflows
- Document data models and key business metrics in a self-serve way
- Use AI tools to accelerate development, troubleshooting, and documentation
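As a small illustration of the transformation and data-quality work described above, here is a hypothetical pandas cleaning step; the column names and checks are assumptions made for the example.

```python
# Hypothetical transform-plus-quality-check step; column names are placeholders.
import pandas as pd


def clean_events(df: pd.DataFrame) -> pd.DataFrame:
    """Deduplicate raw product events and enforce a couple of basic checks."""
    out = df.drop_duplicates(subset=["event_id"]).copy()
    out["event_time"] = pd.to_datetime(out["event_time"], utc=True, errors="coerce")

    # Simple data-quality assertions; a real pipeline might emit metrics instead.
    assert out["event_id"].notna().all(), "event_id must not be null"
    assert out["event_time"].notna().all(), "event_time failed to parse"
    return out


raw = pd.DataFrame(
    {
        "event_id": [1, 1, 2],
        "event_time": ["2024-01-01T00:00:00Z", "2024-01-01T00:00:00Z", "2024-01-02T10:30:00Z"],
    }
)
print(clean_events(raw))
```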
Must-Haves:
- 2–4 years of experience as a data engineer (product or analytics-focused preferred)
- Solid hands-on experience with Python and SQL
- Experience with data pipeline orchestration tools like Airflow or Prefect
- Understanding of data modeling, warehousing concepts, and performance optimization
- Familiarity with cloud platforms (GCP, AWS, or Azure)
- Bachelor's in Computer Science, Data Engineering, or a related field
- Strong problem-solving mindset and AI-native tooling comfort (Copilot, GPTs)
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.
What will you do at Fynd?
- Run the production environment by monitoring availability and taking a holistic view of system health.
- Improve reliability, quality, and time-to-market of our suite of software solutions
- Be the 1st person to report the incident.
- Debug production issues across services and levels of the stack.
- Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realise it.
- Building automated tools in Python / Java / GoLang / Ruby etc.
- Help Platform and Engineering teams gain visibility into our infrastructure.
- Lead design of software components and systems, to ensure availability, scalability, latency, and efficiency of our services.
- Participate actively in detecting, remediating and reporting on Production incidents, ensuring the SLAs are met and driving Problem Management for permanent remediation.
- Participate in on-call rotation to ensure coverage for planned/unplanned events.
- Perform other tasks like load testing and generating system health reports.
- Periodically check all dashboards for readiness.
- Engage with other Engineering organizations to implement processes, identify improvements, and drive consistent results.
- Working with your SRE and Engineering counterparts for driving Game days, training and other response readiness efforts.
- Participate in the 24x7 support coverage as needed.
- Troubleshoot and solve complex issues with thorough root cause analysis on customer and SRE production environments.
- Collaborate with Service Engineering organizations to build and automate tooling, implement best practices to observe and manage the services in production and consistently achieve our market leading SLA.
- Improving the scalability and reliability of our systems in production.
- Evaluating, designing and implementing new system architectures.
Some specific Requirements:
- B.E./B.Tech. in Engineering, Computer Science, technical degree, or equivalent work experience
- At least 3 years of managing production infrastructure. Leading / managing a team is a huge plus.
- Experience with cloud platforms like - AWS, GCP.
- Experience developing and operating large scale distributed systems with Kubernetes, Docker and Serverless (Lambdas)
- Experience in running real-time and low latency high available applications (Kafka, gRPC, RTP)
- Comfortable with Python, Go, or any relevant programming language.
- Experience with monitoring and alerting using technologies like New Relic / Zabbix / Prometheus / Grafana / CloudWatch / Kafka / PagerDuty etc.
- Experience with one or more orchestration, deployment tools, e.g. CloudFormation / Terraform / Ansible / Packer / Chef.
- Experience with configuration management systems such as Ansible / Chef / Puppet.
- Knowledge of load testing methodologies and tools like Gatling, Apache JMeter.
- Comfortable working your way around a Unix shell.
- Experience running hybrid clouds and on-prem infrastructures on Red Hat Enterprise Linux / CentOS
- A focus on delivering high-quality code through strong testing practices.
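To ground the monitoring tooling mentioned in the requirements above, here is a tiny, hypothetical Prometheus exporter using the prometheus_client library; the metric name and the random stand-in probe are assumptions, not part of the role.

```python
# Hypothetical Prometheus exporter sketch; metric name and probe are placeholders.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("example_queue_depth", "Number of jobs waiting in the queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at :8000/metrics
    while True:
        queue_depth.set(random.randint(0, 50))  # stand-in for a real probe
        time.sleep(15)
```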
What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets and brilliant people to grow even further. We teach, groom and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University: We help you upskill by organising in-house courses on important subjects
Learning Wallet: You can also do an external course to upskill and grow, we reimburse it for you.
Culture
Community and Team building activities
Host weekly, quarterly and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids
Experienced therapist for better mental health, improve productivity & work-life balance
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
We are looking for a highly skilled Solution Architect with a passion for software engineering and deep experience in backend technologies, cloud, and DevOps. This role will be central in managing, designing, and delivering large-scale, scalable solutions.
Core Skills
- Strong coding and software engineering fundamentals.
- Experience in large-scale custom-built applications and platforms.
- Champion of SOLID principles, OO design, and pair programming.
- Agile, Lean, and Continuous Delivery – CI, TDD, BDD.
- Frontend experience is a plus.
- Hands-on with Java, Scala, Golang, Rust, Spark, Python, and JS frameworks.
- Experience with Docker, Kubernetes, and Infrastructure as Code.
- Excellent understanding of cloud technologies – AWS, GCP, Azure.
Responsibilities
- Own all aspects of technical development and delivery.
- Understand project requirements and create architecture documentation.
- Ensure adherence to development best practices through code reviews.