50+ Remote Python Jobs in India
● Hands-on development experience as a Data Analyst and/or ML Engineer.
● Coding experience in Python.
● Good experience with ML models and ML algorithms.
● Experience with statistical modelling of large data sets.
● Immediate joiners or candidates with a maximum 30-day notice period.
● Candidates based in Bangalore, Pune, Hyderabad, or Mumbai will be preferred.
What You will do:
● Play the role of Data Analyst / ML Engineer
● Collection, cleanup, exploration and visualization of data
● Perform statistical analysis on data and build ML models
● Implement ML models using popular ML algorithms (a brief sketch follows this list)
● Use Excel to perform analytics on large amounts of data
● Understand, model, and transform data available in different formats to deliver actionable business intelligence
● Work with data engineers to design, build, test and monitor data pipelines for ongoing business operations
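As a hedged illustration of the model-building work above, here is a minimal scikit-learn sketch; the synthetic dataset and choice of classifier are illustrative assumptions, not details from this posting.

# Minimal sketch: train and evaluate a classifier with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for cleaned, explored business data.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")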
Basic Qualifications:
● Experience: 4+ years.
● Hands-on development experience playing the role of Data Analyst and/or ML Engineer.
● Experience working with Excel for data analytics
● Experience with statistical modelling of large data sets
● Experience with ML models and ML algorithms
● Coding experience in Python
Nice-to-have Qualifications:
● Experience with a wide variety of ML tools
● Experience with deep learning
Benefits:
● Competitive salary.
● Hybrid work model.
● Rapid learning and growth opportunities.
● Reimbursement for a basic work-from-home setup.
● Insurance (including top-up insurance for COVID).



Experience:
- Junior Level: 4+ years
- Senior Level: 8+ years
Work Mode: Remote
About the Role:
We are seeking a highly skilled and motivated Data Scientist with deep expertise in Machine Learning (ML), Deep Learning, and Large Language Models (LLMs) to join our forward-thinking AI & Data Science team. This is a unique opportunity to contribute to real-world impact in the healthcare industry, transforming the way patients and providers interact with health data through Generative AI and NLP-driven solutions.
Key Responsibilities:
- LLM Development & Fine-Tuning:
- Fine-tune and customize LLMs (e.g., GPT, LLaMA2, Mistral) for use cases such as text classification, NER, summarization, Q&A, and sentiment analysis (a brief fine-tuning sketch follows this list).
- Experience with other transformer-based models (e.g., BERT) is a plus.
- Data Engineering & Pipeline Design:
- Collaborate with data engineering teams to build scalable, high-quality data pipelines for training/fine-tuning LLMs on structured and unstructured healthcare datasets.
- Experimentation & Evaluation:
- Design rigorous model evaluation and testing frameworks (e.g., with tools like TruLens) to assess performance and optimize model outcomes.
- Deployment & MLOps Integration:
- Work closely with MLOps teams to ensure seamless integration of models into production environments on cloud platforms (AWS, Azure, GCP).
- Predictive Modeling in Healthcare:
- Apply ML/LLM techniques to build predictive models for use cases in oncology (e.g., survival analysis, risk prediction, RWE generation).
- Cross-functional Collaboration:
- Engage with domain experts, product managers, and clinical teams to translate healthcare challenges into actionable AI solutions.
- Mentorship & Knowledge Sharing:
- Mentor junior team members and contribute to the growth of the team’s technical expertise.
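As a hedged sketch of the fine-tuning work listed above, the snippet below fine-tunes a small transformer for text classification with Hugging Face Transformers; the model name, public dataset, and hyperparameters are illustrative assumptions (real work would use healthcare text, not IMDB reviews).

from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder model and corpus for illustration only.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1, per_device_train_batch_size=16)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2_000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()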
Qualifications:
- Master’s or Doctoral degree in Computer Science, Data Science, Artificial Intelligence, or related field.
- 5+ years of hands-on experience in machine learning and deep learning, with at least 12 months of direct work on LLMs.
- Strong coding skills in Python, with experience in libraries like HuggingFace Transformers, spaCy, NLTK, TensorFlow, or PyTorch.
- Experience with prompt engineering, RAG pipelines, and evaluation techniques in real-world NLP deployments.
- Hands-on experience in deploying models on cloud platforms (AWS, Azure, or GCP).
- Familiarity with the healthcare domain and working on Real World Evidence (RWE) datasets is highly desirable.
Preferred Skills:
- Strong understanding of healthcare data regulations (HIPAA, PHI handling, etc.)
- Prior experience in speech and text-based AI applications
- Excellent communication and stakeholder engagement skills
- A passion for impactful innovation in the healthcare space

Job Title: AI Architecture Intern
Company: PGAGI Consultancy Pvt. Ltd.
Location: Remote
Employment Type: Internship
Position Overview
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Key Responsibilities:
- AI System Architecture Design: Collaborate with the technical team to design robust, scalable, and high-performance AI system architectures aligned with client requirements.
- Client-Focused Solutions: Analyze and interpret client needs to ensure architectural solutions meet expectations while introducing innovation and efficiency.
- Methodology Development: Assist in the formulation and implementation of best practices, methodologies, and frameworks for sustainable AI system development.
- Technology Stack Selection: Support the evaluation and selection of appropriate tools, technologies, and frameworks tailored to project objectives and future scalability.
- Team Collaboration & Learning: Work alongside experienced AI professionals, contributing to projects while enhancing your knowledge through hands-on involvement.
Requirements:
- Strong understanding of AI concepts, machine learning algorithms, and data structures.
- Familiarity with AI development frameworks (e.g., TensorFlow, PyTorch, Keras).
- Proficiency in programming languages such as Python, Java, or C++.
- Demonstrated interest in system architecture, design thinking, and scalable solutions.
- Up-to-date knowledge of AI trends, tools, and technologies.
- Ability to work independently and collaboratively in a remote team environment
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base stipend of INR 8,000, which can increase up to INR 20,000 depending on performance metrics.
After completion of the internship, there is a chance of a full-time AI/ML Engineer role (up to 12 LPA).
Preferred Experience:
- Prior experience in roles such as AI Solution Architect, ML Architect, Data Science Architect, or AI/ML intern.
- Exposure to AI-driven startups or fast-paced technology environments.
- Proven ability to operate in dynamic roles requiring agility, adaptability, and initiative.


Job Description:
We are looking for a Python Lead who has the following experience and expertise -
- Proficiency in developing RESTful APIs using Flask, Django, or FastAPI (a brief sketch follows this list)
- Hands-on experience using ORMs for database query mapping
- Experience writing unit tests for code coverage and API testing
- Experience using Postman to validate APIs
- Experience with Git workflows and code management, including knowledge of ticket management systems like JIRA
- At least 2 years of experience with any cloud platform
- Hands-on leadership experience
- Experience communicating directly with stakeholders
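A hedged, minimal sketch of the REST API and unit-testing work above, using FastAPI with its in-process test client; the route and payload are made-up examples.

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health() -> dict:
    # Minimal liveness endpoint; real routes would sit on ORM-backed models.
    return {"status": "ok"}

client = TestClient(app)

def test_health() -> None:
    # In-process API test; the same checks could be replayed in Postman.
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}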
Skills and Experience:
- Good academics
- Strong teamwork and communication skills
- Advanced troubleshooting skills
- Candidates who are ready and immediately available will be preferred.
Real-World Evidence (RWE) Analyst
Summary:
As an experienced Real-World Evidence (RWE) Analyst, you will leverage our cutting-edge healthcare data platform (accessing over 60 million lives in Asia, with ambitious growth plans across Africa and the Middle East) to deliver impactful clinical insights to our pharmaceutical clients. You will be involved in the full project lifecycle, from designing analyses to execution and delivery, within our agile data science team. This is an exciting opportunity to contribute significantly to a growing early-stage company focused on improving precision medicine and optimizing patient care for diverse populations.
Responsibilities:
· Contribute to the design and execution of retrospective and prospective real-world research, including epidemiological and patient outcomes studies.
· Actively participate in problem-solving discussions by clearly defining issues and proposing effective solutions.
· Manage the day-to-day progress of assigned workstreams, ensuring seamless collaboration with the data engineering team on analytical requests.
· Provide timely and clear updates on project status to management and leadership.
· Conduct in-depth quantitative and qualitative analyses, driven by project objectives and your intellectual curiosity.
· Ensure the quality and accuracy of analytical outputs, and contextualize findings by reviewing relevant published research.
· Synthesize complex findings into clear and compelling presentations and written reports (e.g., slides, documents).
· Contribute to the development of standards and best practices for future RWE analyses.
Requirements:
· Undergraduate or post-graduate degree (MS or PhD preferred) in a quantitative analytical discipline such as Epidemiology, (Bio)statistics, Data Science, Engineering, Econometrics, or Operations Research.
· 8+ years of relevant work experience demonstrating:
o Strong analytical and problem-solving capabilities.
o Experience conducting research relevant to the pharmaceutical/biotech industry.
· Proficiency in technical skills including SQL and at least one programming language (R, Python, or similar).
· Solid understanding of the healthcare/medical and pharmaceutical industries.
· Proven experience in managing workstream or project management activities.
· Excellent written and verbal communication, and strong interpersonal skills with the ability to build collaborative partnerships.
· Exceptional attention to detail.
· Proficiency in Microsoft Office Suite (Excel, PowerPoint, Word).
Other Desirable Skills:
· Demonstrated dedication to teamwork and the ability to collaborate effectively across different functions.
· A strong desire to contribute to the growth and development of the RWE analytics function.
· A proactive and innovative mindset with an entrepreneurial spirit, eager to take on a key role in a dynamic, growing company.


Join CD Edverse, an innovative EdTech app, as an AI Specialist! Develop a deep research tool to generate comprehensive courses and enhance AI mentors. Must have strong Python, NLP, and API integration skills. Be part of transforming education! Apply now.
Position Responsibilities :
- Work with product managers to understand the business workflows/requirements, identify needs gaps, and propose relevant technical solutions
- Design, implement, and tune product changes that work within the time tracking/project management environment
- Understand and be sensitive to customer requirements in order to offer alternative solutions
- Keep pace with product releases
- Work within Deltek-Replicon's software development process, expectations and quality initiatives
- Work to accurately evaluate risk and estimate software development tasks
- Strive to continually improve technical and developmental skills
Qualifications :
- Bachelor of Computer Science, Computer Engineering, or related field.
- 4+ years of software development experience (Core: Python v2.7 or higher).
- Strong data structures, algorithm design, problem-solving, and quantitative analysis skills.
- Knowledge of how to use microservices and APIs in code.
- TDD unit test framework knowledge (preferably Python).
- Well-versed in basic and advanced Git concepts and commands, including handling merge conflicts.
- Basic knowledge of web development technologies and experience with at least one web development framework.
- Working knowledge of SQL queries.
- Basic working knowledge of a project management tool such as Jira.
- Good to have: knowledge of EmberJS, C#, and the .NET framework.

Experience: 5-8 Years
Work Mode: Remote
Job Type: Full-time
Mandatory Skills: Python, SQL, Snowflake, Airflow, ETL, Data Pipelines, Elasticsearch, & AWS.
Role Overview:
We are looking for a talented and passionate Senior Data Engineer to join our growing data team. In this role, you will play a key part in building and scaling our data infrastructure, enabling data-driven decision-making across the organization. You will be responsible for designing, developing, and maintaining efficient and reliable data pipelines for both ELT (Extract, Load, Transform) and ETL (Extract, Transform, Load) processes.
Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines for ELT and ETL processes, ensuring data accuracy, completeness, and timeliness.
- Work with stakeholders to understand data requirements and translate them into efficient data models and pipelines.
- Build and optimize data pipelines using a variety of technologies, including Elasticsearch, AWS S3, Snowflake, and NFS (a brief Airflow sketch follows this list).
- Develop and maintain data warehouse schemas and ETL/ELT processes to support business intelligence and analytics needs.
- Implement data quality checks and monitoring to ensure data integrity and identify potential issues.
- Collaborate with data scientists and analysts to ensure data accessibility and usability for various analytical purposes.
- Stay current with industry best practices, CI/CD/DevSecFinOps, Scrum and emerging technologies in data engineering.
- Contribute to the development and enhancement of our data warehouse architecture
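A minimal, hedged sketch of an Airflow DAG of the kind this role would maintain (assumes Airflow 2.x; the DAG id, schedule, and task bodies are placeholders, not details from this posting).

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    # Placeholder: pull raw files from S3 / Elasticsearch.
    print("extracting...")

def load() -> None:
    # Placeholder: load staged data into Snowflake.
    print("loading...")

with DAG(
    dag_id="example_elt_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract, then load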
Required Skills:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a Data Engineer with a strong focus on ELT/ETL processes.
- At least 3 years of experience with Snowflake data warehousing technologies.
- At least 3 years of experience creating and maintaining Airflow ETL pipelines.
- Minimum of 3 years of professional experience with Python for data manipulation and automation.
- Working experience with Elasticsearch and its application in data pipelines.
- Proficiency in SQL and experience with data modelling techniques.
- Strong understanding of cloud-based data storage solutions such as AWS S3.
- Experience working with NFS and other file storage systems.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.


About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 3+ years of hands-on software development experience, particularly in Python and ReactJs at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality (a tiny sketch follows this list)
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
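A tiny, hedged illustration of the test-first workflow above; the slugify function is a made-up example, not project code.

# Test written first: it defines the expected behaviour before any code exists.
def test_slugify_lowercases_and_hyphenates() -> None:
    assert slugify("Hello World") == "hello-world"

# Simplest implementation that makes the test pass; refactoring comes next.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

Running pytest on this file executes the test against the implementation; in TDD the test would fail first, then the function is written to make it pass.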
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object-Oriented Programming in JS
- 3+ years of Object-Oriented Programming with Python or equivalent
- 3+ years of experience working with relational (SQL) databases
- 3+ years of experience using Git to contribute code as part of a team of Software Craftspeople
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver the high-quality standards our customers recognize us by. With asynchronous tools and a push for active participation, we foster a vibrant, hands-on environment where each team member's engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.

What We’re Looking For
- Proven experience as a Machine Learning Engineer, Data Scientist, or similar role.
- Expertise in applying machine learning algorithms, deep learning, and data mining techniques in an enterprise environment.
- Strong proficiency in Python (or other languages) and familiarity with libraries such as Scikit-learn, TensorFlow, PyTorch, or similar.
- Experience working with natural language processing (NLP) or computer vision is highly desirable.
- Understanding of and experience with MLOps, including model development, deployment, monitoring, and maintenance.
- Experience with cloud platforms (such as AWS, Google Cloud, or Azure) and knowledge of deploying machine learning models at scale.
- Familiarity with data architecture, data engineering, and data pipeline tools.
- Familiarity with containerization technologies such as Docker, and orchestration systems like Kubernetes.
- Knowledge of the insurance sector is beneficial but not required.
- Bachelor's/Master's degree in Computer Science, Data Science, Mathematics, or a related field.
What You’ll Be Doing
- Algorithm Development: Design and implement advanced machine learning algorithms tailored for our datasets.
- Model Creation: Build, train, and refine machine learning models for business integration.
- Collaboration: Partner with product managers, developers, and data scientists to align machine learning solutions with business goals.
- Industry Innovation: Stay updated with Insurtech trends and ensure our solutions remain at the forefront.
- Validation: Test algorithms for accuracy and efficiency, collaborating with the QA team.
- Documentation: Maintain clear records of algorithms and models for team reference.
- Professional Growth: Engage in continuous learning and mentor junior team members.

Mandatory (Experience 1) - Must have a minimum of 4+ years of experience in backend software development.
Mandatory (Experience 2) - Must have 4+ years of experience in backend development using Python (highly preferred), Java, or Node.js.
Mandatory (Experience 3) - Must have experience with cloud platforms like AWS (highly preferred), GCP, or Azure.
Mandatory (Experience 4) - Must have experience with any database - MySQL / PostgreSQL / Postgres / Oracle / SQL Server / DB2 / SQL / MongoDB / Ne

At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it – in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform works "magic" on pre-construction workflows, cutting them from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
- Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
- High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
- Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
- Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
- Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
- Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Work on our web frontend using ReactJs
- Knowledge of Redux, ReactJs, HTML, and CSS is a must
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 2 years of hands-on Python development experience
- You have 2 years of hands-on ReactJs development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with an opportunity to avail virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of the AWS, Razorpay, and Microsoft startup programs, with access to rich talent to discuss and brainstorm ideas.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
- All the best!!


AI Architect
Location and Work Requirements
- Position is based in KSA or UAE
- Must be eligible to work abroad without restrictions
- Regular travel within the region required
Key Responsibilities
- Minimum 7+ years of experience in Data & Analytics domain and minimum 2 years as AI Architect
- Drive technical solution design engagements and implementations
- Support customer implementations across various deployment modes (Public SaaS, Single-Tenant SaaS, and Self-Managed Kubernetes)
- Provide advanced technical support, including deployment troubleshooting and coordinating with customer AI Architect and product development teams when needed
- Guide customers in implementing generative AI solutions, including LLM integration, vector database management, and prompt engineering
- Coordinate and oversee platform installations and configuration work
- Assist customers with platform integration, including API implementation and custom model deployment
- Establish and promote best practices for AI governance and MLOps
- Proactively identify and address potential technical challenges before they impact customer success
Required Technical Skills
- Strong programming skills in Python with experience in data processing libraries (Pandas, NumPy)
- Proficiency in SQL and experience with various database technologies including MongoDB
- Container technologies: Docker (build, modify, deploy) and Kubernetes (kubectl, helm)
- Version control systems (Git) and CI/CD practices
- Strong networking fundamentals (TCP/IP, SSH, SSL/TLS)
- Shell scripting (Linux/Unix environments)
- Experience working in on-prem, air-gapped environments
- Experience with cloud platforms (AWS, Azure, GCP)
Required AI/ML Skills
- Deep expertise in both predictive machine learning and generative AI technologies
- Proven experience implementing and operationalizing large language models (LLMs)
- Strong knowledge of vector databases, embedding technologies, and similarity search concepts
- Advanced understanding of prompt engineering, LLM evaluation, and AI governance methods
- Practical experience with machine learning deployment and production operations
- Understanding of AI safety considerations and risk mitigation strategies
Required Qualities
- Excellent English communication skills, with the ability to explain complex technical concepts. Arabic language skills are advantageous.
- Strong consultative approach to understanding and solving business problems
- Proven ability to build trust through proactive customer engagement
- Strong problem-solving abilities and attention to detail
- Ability to work independently and as part of a distributed team
- Willingness to travel within the Middle East & Africa region as needed

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website - https://www.gohighlevel.com/
YouTube Channel - https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post - https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a Lead SDET with 8-10 years of experience to play a pivotal role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is an exciting opportunity to work on cutting-edge performance testing strategies and drive impactful initiatives across the organisation.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Review application architecture and suggest improvements to enhance scalability
- Leverage AI at appropriate layers to improve efficiency and drive positive business outcomes
- Drive performance testing initiatives across the organization and ensure seamless execution
- Automate the capturing of performance metrics and generate performance trend reports
- Research, evaluate, and conduct PoCs for new tools and solutions
- Collaborate with developers and architects to enhance frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Ensure high availability and reliability of applications and services
Requirements:
- 6-9 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc. (a brief Locust sketch follows this list)
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimising frontend performance and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
- Experience in increasing application/service availability from 99.9% (three 9s) to 99.99% or higher (four/five 9s)
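As a hedged illustration of the tooling named above, a minimal Locust load test; the host and endpoints are placeholder assumptions. It runs with the standard locust -f command, ramping simulated users from the web UI.

from locust import HttpUser, task, between

class ApiUser(HttpUser):
    # Placeholder host; point at a performance-test environment, never production.
    host = "https://staging.example.com"
    wait_time = between(1, 3)  # seconds between simulated user actions

    @task(3)
    def list_contacts(self) -> None:
        self.client.get("/api/contacts")

    @task(1)
    def create_contact(self) -> None:
        self.client.post("/api/contacts", json={"name": "load-test"})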
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.

About HighLevel:
HighLevel is a cloud-based, all-in-one white-label marketing and sales platform that empowers marketing agencies, entrepreneurs, and businesses to elevate their digital presence and drive growth. With a focus on streamlining marketing efforts and providing comprehensive solutions, HighLevel helps businesses of all sizes achieve their marketing goals. We currently have ~1200 employees across 15 countries, working remotely as well as in our headquarters, which is located in Dallas, Texas. Our goal as an employer is to maintain a strong company culture, foster creativity and collaboration, and encourage a healthy work-life balance for our employees wherever they call home.
Our Website: https://www.gohighlevel.com/
YouTube Channel: https://www.youtube.com/channel/UCXFiV4qDX5ipE-DQcsm1j4g
Blog Post: https://blog.gohighlevel.com/general-atlantic-joins-highlevel/
Our Customers:
HighLevel serves a diverse customer base, including over 60K agencies & entrepreneurs and 500K businesses globally. Our customers range from small and medium-sized businesses to enterprises, spanning various industries and sectors.
Scale at HighLevel:
We operate at scale, managing over 40 billion API hits and 120 billion events monthly, with more than 500 micro-services in production. Our systems handle 200+ terabytes of application data and 6 petabytes of storage.
About the Role:
HighLevel Inc. is looking for a SDET III with 5-6 years of experience to play a crucial role in ensuring the quality, performance, and scalability of our products. We are seeking engineers who thrive in a fast-paced startup environment, enjoy problem-solving, and stay updated with the latest models and solutions. This is a great opportunity to work on cutting-edge performance testing strategies and contribute to the success of our products.
Responsibilities:
- Implement performance, scalability, and reliability testing strategies
- Capture and analyze key performance metrics to identify bottlenecks
- Work closely with development, DevOps, and infrastructure teams to optimize system performance
- Develop test strategies based on customer behavior to ensure high-performing applications
- Automate the capturing of performance metrics and generate performance trend reports
- Collaborate with developers and architects to optimize frontend and API performance
- Conduct root cause analysis of performance issues using logs and monitoring tools
- Research, evaluate, and conduct PoCs for new tools and solutions
- Ensure high availability and reliability of applications and services
Requirements:
- 4-7 years of hands-on experience in Performance, Reliability, and Scalability testing
- Strong skills in capturing, analyzing, and optimizing performance metrics
- Expertise in performance testing tools such as Locust, Gatling, k6, etc.
- Experience working with cloud platforms (Google Cloud, AWS, Azure) and setting up performance testing environments
- Knowledge of CI/CD deployments and integrating performance testing into pipelines
- Proficiency in scripting languages (Python, Java, JavaScript) for test automation
- Hands-on experience with monitoring and observability tools (New Relic, AppDynamics, Prometheus, etc.)
- Strong knowledge of JVM monitoring, thread analysis, and RESTful services
- Experience in optimizing frontend performance and API performance
- Ability to deploy applications in Kubernetes and troubleshoot environment issues
- Excellent problem-solving skills and the ability to troubleshoot customer issues effectively
EEO Statement:
The company is an Equal Opportunity Employer. As an employer subject to affirmative action regulations, we invite you to voluntarily provide the following demographic information. This information is used solely for compliance with government recordkeeping, reporting, and other legal requirements. Providing this information is voluntary and refusal to do so will not affect your application status. This data will be kept separate from your application and will not be used in the hiring decision.
We are looking for a Site Reliability Engineer to join our team and help ensure the availability, performance, and scalability of our critical systems. You will work closely with development and operations teams to automate processes, enhance system reliability, and improve observability.
Requirements:
- Experience: 4+ years in Site Reliability Engineering, DevOps, or Cloud Infrastructure roles
- Cloud Expertise: Hands-on experience with GCP and AWS
- Infrastructure as Code (IaC): Terraform, Helm, or equivalent tools
- Containerisation & Orchestration: Docker, Kubernetes (GKE)
- Observability: Experience with Prometheus, Grafana, ELK, OpenTelemetry, or similar monitoring/logging tools
- Programming/Scripting: Proficiency in Python, Bash, or shell scripting; basic understanding of API parsing and JSON manipulation (a brief sketch follows this list)
- CI/CD Pipelines: Hands-on experience with Jenkins, GitHub Actions, ArgoCD, or similar tools
- Incident Management: Experience with on-call rotations, SLOs, SLIs, SLAs, Escalation Policies, and incident resolution
- Databases: Experience monitoring MongoDB, Redis, Elasticsearch, queue-based systems, etc.
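A hedged, minimal sketch of the API-parsing and JSON-manipulation skills mentioned above, using only the standard library; the endpoint and response fields are made-up examples.

import json
from urllib.request import urlopen

# Placeholder health endpoint; a real SRE script might hit a service's /healthz.
URL = "https://staging.example.com/healthz"

def check_health(url: str) -> bool:
    with urlopen(url, timeout=5) as resp:
        payload = json.load(resp)  # parse the JSON response body
    # Healthy only if every subsystem reports "ok".
    return all(component.get("status") == "ok"
               for component in payload.get("components", []))

if __name__ == "__main__":
    print("healthy" if check_health(URL) else "unhealthy")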
Responsibilities:
- Develop and improve observability using monitoring, logging, tracing, and alerting tools (Prometheus, Grafana, ELK, OpenTelemetry, etc.)
- Optimize system performance, troubleshoot incidents, and conduct post-mortems/RCA to prevent future issues
- Collaborate with developers to enhance application reliability, scalability, and performance
- Drive cost optimisation efforts in cloud environments.
- Monitor multiple databases (MongoDB, Redis, Elasticsearch, queue-based systems, etc.)


Role - AI Architect
Location - Noida/Remote
Mode - Hybrid - 2 days WFO
As an AI Architect at CLOUDSUFI, you will play a key role in driving our AI strategy for customers in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, and Fintech sectors. You will be responsible for delivering large-scale AI transformation programs for multinational organizations, preferably Fortune 500 companies. You will also lead a team of 10-25 Data Scientists to ensure successful project execution.
Required Experience
● Minimum 12+ years of experience in Data & Analytics domain and minimum 3 years as AI Architect
● Master’s or Ph.D. in a discipline such as Computer Science, Statistics or Applied Mathematics with an emphasis or thesis work on one or more of the following: deep learning, machine learning, Generative AI and optimization.
● Must have experience articulating and presenting business transformation journeys using AI / Gen AI technology to C-level executives
● Proven experience in delivering large-scale AI and GenAI transformation programs for multinational organizations
● Strong understanding of AI and GenAI algorithms and techniques
● Must have hands-on experience in open-source software development and cloud native technologies especially on GCP tech stack
● Proficiency in Python and prominent ML packages. Proficiency in neural networks is desirable, though not essential
● Experience leading and managing teams of Data Scientists, Data Engineers and Data Analysts
● Ability to work independently and as part of a team
Additional Skills (Preferred):
● Experience in the Oil & Gas, Energy, Manufacturing, Retail, Healthcare, or Fintech sectors
● Knowledge of cloud platforms (AWS, Azure, GCP)
● GCP Professional Cloud Architect and GCP Professional Machine Learning Engineer Certification
● Experience with AI frameworks and tools (TensorFlow, PyTorch, Keras)

Job Type: Full-time
Location: Remote
Company Description
The Blue Owls Solutions specializes in delivering cutting-edge Generative AI Solutions, AI-Powered Software Development, and comprehensive Data Analytics and Engineering services. Our expertise in End-To-End ML/AI Development ensures that clients benefit from scalable and efficient AI-driven solutions tailored to their unique business needs. We create intelligent voice and text agents, chatbots, and process automation solutions, and our data analytics services provide actionable insights for strategic decision-making. Our mission is to bridge the gap between AI innovation and adoption, delivering value-driven, outcome-based solutions that empower our clients to achieve their business goals.
Role Description
We're seeking an enthusiastic Backend Developer who thrives on solving interesting challenges and building reliable, efficient applications. While basic competency in frontend (React) is sufficient, strong backend skills (Python, FastAPI, SQL, pandas) and cloud-native awareness are essential. The ideal candidate enjoys learning new tech stacks and solving problems independently.
Required Skills (In order of importance)
- Strong proficiency in Python backend development with FastAPI.
- Familiarity with data analysis using pandas, numpy, and SQL (a brief sketch follows this list).
- Familiarity with cloud-native concepts and containerization (Docker).
- Basic React skills for frontend integration.
- Excellent problem-solving skills, adaptability, and quick learning abilities.
- Experience with version control systems (e.g., Git)
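A hedged, minimal sketch of the pandas/numpy analysis mentioned above; the synthetic orders data and column names are illustrative assumptions.

import numpy as np
import pandas as pd

# Synthetic orders data standing in for a real backend dataset.
rng = np.random.default_rng(seed=7)
orders = pd.DataFrame({
    "customer_id": rng.integers(1, 6, size=100),
    "amount": rng.uniform(10, 500, size=100).round(2),
})

# Aggregate revenue per customer, the kind of query also expressible in SQL.
summary = (orders.groupby("customer_id")["amount"]
                 .agg(total="sum", average="mean")
                 .sort_values("total", ascending=False))
print(summary)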
Preferred Qualifications:
- 3+ years of experience as a Backend Engineer
- Experience with PostgreSQL or other relational databases.
- Azure Cloud Experience.
- Experience writing clean, maintainable, and testable code.
- Experience in AI/ML development is a plus
Why Join Us?
- Collaborative, remote-first environment.
- Opportunities for rapid career growth and learning.
- Competitive Pay.
- Engaging projects focused on practical problem-solving.

About Us:
Heyo & MyOperator are India’s largest conversational platforms, delivering Call + WhatsApp engagement solutions to over 40,000+ businesses. Trusted by brands like Astrotalk, Lenskart, and Caratlane, we power customer engagement at scale. We support a hybrid work model, foster a collaborative environment, and offer fast-track growth opportunities.
Job Overview:
We are looking for a skilled Quality Analyst with 2-4 years of experience in software quality assurance. The ideal candidate should have a strong understanding of testing methodologies, automation tools, and defect tracking to ensure high-quality software products. This is a fully remote role.
Key Responsibilities:
● Develop and execute test plans, test cases, and test scripts for software products.
● Conduct manual and automated testing to ensure reliability and performance.
● Identify and document defects, and collaborate with developers to resolve them.
● Report testing progress and results to stakeholders and management.
● Improve automation testing processes for efficiency and accuracy.
● Stay updated with the latest QA trends, tools, and best practices.
Required Skills:
● 2-4 years of experience in software quality assurance.
● Strong understanding of testing methodologies and automated testing.
● Proficiency in Selenium, Rest Assured, Java, and API Testing (mandatory).
● Familiarity with Appium, JMeter, TestNG, defect tracking, and version control tools.
● Strong problem-solving, analytical, and debugging skills.
● Excellent communication and collaboration abilities.
● Detail-oriented with a commitment to delivering high-quality results.
Why Join Us?
● Fully remote work with flexible hours.
● Exposure to industry-leading technologies and practices.
● Collaborative team culture with growth opportunities.
● Work with top brands and innovative projects.

Job Description:
We are seeking a highly analytical and detail-oriented Data Analyst to join our team. The ideal candidate will have strong problem-solving skills, proficiency in SQL and AWS QuickSight, and a passion for extracting meaningful insights from data. You will be responsible for analyzing complex datasets, building reports and dashboards, and providing data-driven recommendations to support business decisions.
Key Responsibilities:
- Extract, transform, and analyze data from multiple sources to generate actionable insights.
- Develop interactive dashboards and reports in AWS QuickSight to visualize trends and key metrics.
- Write optimized SQL queries to retrieve and manipulate data efficiently (a brief sketch follows this list).
- Collaborate with stakeholders to understand business requirements and provide analytical solutions.
- Identify patterns, trends, and statistical correlations in data to support strategic decision-making.
- Ensure data integrity, accuracy, and consistency across reports.
- Continuously explore new tools, techniques, and methodologies to enhance analytical capabilities.
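As a hedged illustration of the SQL work above, a self-contained Python sketch using sqlite3; the schema and query are made-up examples, and real reports would run against the warehouse behind AWS QuickSight.

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL);
    INSERT INTO orders (region, amount) VALUES
        ('north', 120.0), ('north', 80.0), ('south', 200.0);
""")

# A single-pass aggregate query, grouped at the database layer.
query = """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders
    GROUP BY region
    ORDER BY revenue DESC;
"""
for region, n_orders, revenue in conn.execute(query):
    print(f"{region}: {n_orders} orders, {revenue:.2f} revenue")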
Qualifications & Skills:
- Strong proficiency in SQL for querying and data manipulation.
- Hands-on experience with AWS QuickSight for data visualization and reporting.
- Strong analytical thinking and problem-solving skills with the ability to interpret complex data.
- Experience working with large datasets and relational databases.
- Passion for slicing and dicing data to uncover key insights.
- Exceptional communication skills to effectively understand business requirements and present insights.
- A growth mindset with a strong attitude for continuous learning and improvement.
Preferred Qualifications:
- Experience with Python is a plus.
- Familiarity with cloud-based data environments (AWS, etc.).
- Familiarity with leveraging existing LLMs/AI tools to enhance productivity, automate repetitive tasks, and improve analysis efficiency.

Responsibilities:
- Test Planning & Execution: Develop and execute test plans, test cases, and test scripts to verify software functionality, performance, and scalability.
- Collaboration: Work closely with cross-functional teams (developers, product managers, designers) to understand requirements and provide input on testability and quality aspects.
- Testing: Conduct both manual and automated testing to validate software functionality and identify potential issues.
- Regression Testing: Perform regression testing to ensure the stability of new features and enhancements.
- Bug Identification: Identify, isolate, and document bugs, issues, and defects, and collaborate with the development team to resolve them.
- Code Reviews: Participate in code reviews and contribute to improving software quality and testing processes.
- Industry Awareness: Stay updated with industry best practices and emerging trends in QA and testing methodologies.
Requirements:
- Educational Qualification: Bachelor’s degree in Computer Science, Engineering, or a related field.
- Experience: 4+ years as a QA Lead Engineer or similar role, preferably in a SaaS or software development environment.
- QA Expertise: Strong knowledge of software QA methodologies, tools, and processes.
- Test Planning & Execution: Experience in test planning, test case development, and execution.
- Testing Frameworks: Familiarity with manual and automated testing frameworks and tools.
- Automation Skills: Proficiency in at least one programming or scripting language (e.g., Python, JavaScript) for test automation.
- Technical Knowledge: Solid understanding of web technologies, APIs, and databases.
- Problem-Solving: Excellent analytical and problem-solving skills with keen attention to detail.
- Communication: Strong communication skills and ability to work effectively in a remote team environment.
Preferred Qualifications:
- Test Automation Frameworks: Experience with tools like Playwright, Selenium, or Cypress.
- Performance Testing: Knowledge of performance testing and load testing methodologies.
- Agile Methodologies: Familiarity with agile development methodologies (e.g., Scrum) and experience in an Agile/DevOps environment.
Note:
- This is a remote-only position for candidates based in India.

Key Responsibilities:
• IAM Solution Implementation: Lead and execute OpenIAM deployments at enterprise clients, including integration with directories, databases, applications, and cloud platforms.
• Identity Governance & Administration (IGA): Implement access reviews, role-based access control (RBAC), and identity lifecycle management to help clients enforce security policies and regulatory compliance.
• Consultative Engagement: Work closely with clients to capture requirements, understand business challenges, and design IAM & Identity Governance solutions that align with security and compliance needs.
• Architecture & Design: Develop IAM and IGA architectures tailored to customer environments, leveraging best practices from previous IAM implementations.
• Configuration & Customization: Configure OpenIAM components, develop custom workflows, and implement automation for identity lifecycle management and governance processes.
• Customer Collaboration: Guide clients through workshops, requirement sessions, and technical discussions, ensuring a smooth implementation process.
• Technical Troubleshooting: Diagnose and resolve issues related to authentication, authorization, provisioning, governance, and access controls.
• Documentation: Create high-quality documentation, including design documents, implementation guides, and customer-facing reports.
• Mentorship & Best Practices: Share IAM & IGA best practices with clients and internal teams, mentoring junior engineers when needed.
Required Skills & Experience:
• 4+ years of hands-on IAM experience, implementing solutions from major vendors such as Okta, SailPoint, Saviynt, ForgeRock, Oracle IAM, Ping Identity, or similar.
• Strong understanding of IAM and Identity Governance concepts, including:
• Access certification and review processes
• Role-based access control (RBAC) and attribute-based access control (ABAC)
• Identity lifecycle management and policy enforcement
• Separation of duties (SoD) controls and compliance
• Experience working with LDAP directories (OpenLDAP, Active Directory) and database systems (PostgreSQL, MySQL, or similar).
• Proficiency in Linux administration, shell scripting, and troubleshooting IAM-related issues in Linux environments.
• Hands-on experience with Java, JavaScript, and Python for custom development, scripting, or integrations.
• Knowledge of REST APIs, SCIM, SAML, OIDC, and FIDO2.
• Strong problem-solving skills and ability to work independently in a fast-paced consulting environment.
• Excellent communication and interpersonal skills, with the ability to work directly with clients in a consultative manner.
• Strong documentation skills to produce high-quality technical reports and client deliverables.
Preferred Qualifications:
• Prior experience deploying IAM & IGA solutions in cloud environments (AWS, Azure, GCP).
• Knowledge of Kubernetes and containerized applications.
• Experience integrating IAM with enterprise applications such as ServiceNow, Workday, Salesforce, or SAP.
• Previous consulting experience working with enterprise customers.
We are looking for a Lead QA Engineer to spearhead our quality assurance initiatives, ensuring the reliability, security, and performance of our platform. This role is instrumental in refining our testing strategy, expanding test automation, and enhancing overall QA processes.
Key Responsibilities
- Test Strategy & Execution: Design, develop, and maintain test plans, test cases, and automated test scripts for API, UI, and performance testing.
- Automated Testing: Develop and maintain automated API and UI tests using industry-standard frameworks (a brief sketch follows this list).
- Continuous Improvement: Identify gaps in the QA process, drive enhancements, and implement best practices in test automation and software quality.
- Collaboration: Work closely with developers, product managers, and DevOps teams to integrate automated testing into the CI/CD pipeline.
- Defect Management: Investigate, document, and track defects while ensuring timely resolution with development teams.
- Performance & Security Testing: Contribute to performance testing and security validation to ensure platform robustness.
- Mentorship: Guide and mentor junior QA engineers, fostering a culture of quality and excellence.
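A hedged, minimal example of the automated API testing described above, using pytest with the requests library; the base URL and endpoints are placeholder assumptions.

import requests

# Placeholder base URL for a test environment.
BASE_URL = "https://staging.example.com"

def test_list_users_returns_ok_and_json() -> None:
    response = requests.get(f"{BASE_URL}/api/users", timeout=10)
    assert response.status_code == 200
    assert isinstance(response.json(), list)

def test_unknown_route_returns_404() -> None:
    response = requests.get(f"{BASE_URL}/api/nope", timeout=10)
    assert response.status_code == 404

Tests like these run under pytest and can be wired into a CI/CD pipeline so every build exercises the API.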
Required Skills
- 4+ years of experience in Quality Assurance, with a strong focus on test automation.
- Expertise in designing and executing automated API tests using tools like Postman, RestAssured, Cypress, or Playwright.
- Proficiency in UI automation frameworks such as Selenium, Cypress, or Playwright.
- Strong knowledge of test automation tools, scripting languages (Python, Java, JavaScript), and CI/CD integration.
- Familiarity with containerized environments (Docker, Kubernetes).
- Experience with performance and security testing.
- Excellent problem-solving skills and the ability to work independently in a fast-paced environment.
- Strong communication skills to effectively collaborate with cross-functional teams.
Preferred Qualifications
- Experience in Identity and Access Management (IAM) or knowledge of authentication protocols such as OIDC, SAML, SCIM, or FIDO2.
- Familiarity with performance testing tools like JMeter, K6, or similar frameworks.
- Experience working in a SaaS, cloud-native, or enterprise security environment.
If you are passionate about driving quality and automation, we would love to hear from you!
Role: Lead AI Engineer
Exp: 3-6 Years
CTC: 35.00-40.00 LPA
Work Mode: WFH
Mandatory Criteria (must not be overlooked during screening):
• Need Excellent Communication skills as the company is dealing with US Clients also
• 3+ years in AI development, with experience in multi-agent systems, logistics, or related fields.
• Proven experience in conducting A/B testing and beta testing for AI systems.
• Hands-on experience with CrewAI and LangChain tools.
• Should have hands-on experience with end-to-end chatbot development, specifically Agentic and RAG-based chatbots; the candidate must have been involved in the entire lifecycle of chatbot creation, from design to deployment (a brief RAG sketch follows this list).
• Should have practical experience with LLM application deployment.
• Proficiency in Python and machine learning frameworks (e.g., TensorFlow, PyTorch).
• Experience in setting up monitoring dashboards with tools like Grafana, Tableau, or similar.
• Proficiency with cloud platforms (AWS, Azure, GCP)
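As a hedged, framework-agnostic sketch of the retrieval step at the heart of a RAG chatbot: the tiny corpus and bag-of-words "embedding" are toy stand-ins for a real embedding model and vector store (which tools like LangChain would provide).

import math
from collections import Counter

DOCS = [
    "Shipments are delivered within 5 business days.",
    "Returns are accepted within 30 days of purchase.",
    "Support is available 24/7 via chat and email.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# The retrieved context is inserted into the LLM prompt before generation.
print(retrieve("how long do returns take"))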

Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are looking for an experienced Backend and Data Developer with expertise in Java, SQL, and BigQuery development on public clouds, mainly GCP. As a Senior Data Developer, you will play a vital role in designing, building, and maintaining robust systems to support our data analytics. This position offers the opportunity to work on complex services, collaborating closely with cross-functional teams to drive successful project delivery.
Responsibilities:
- Development and maintenance of data pipelines and automation scripts with Python.
- Creation of data queries and optimization of database processes with SQL.
- Use of bash scripts for system administration, automation, and deployment processes.
- Database and cloud technologies: managing, optimizing, and querying large amounts of data in an Exasol database (prospectively Snowflake).
- Google Cloud Platform (GCP): operation and scaling of cloud-based BI solutions, in particular:
  - Composer (Airflow): orchestration of data pipelines for ETL processes.
  - Cloud Functions: development of serverless functions for data processing and automation.
  - Cloud Scheduler: planning and automation of recurring cloud jobs.
  - Cloud Secret Manager: secure storage and management of sensitive access data and API keys.
  - BigQuery: processing, analyzing, and querying large amounts of data in the cloud (a brief sketch follows this list).
  - Cloud Storage: storage and management of structured and unstructured data.
  - Cloud Monitoring: monitoring the performance and stability of cloud-based applications.
- Data visualization and reporting: creation of interactive dashboards and reports for the analysis and visualization of business data with Power BI.
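As a hedged illustration of the BigQuery work above, a minimal google-cloud-bigquery sketch; the project, dataset, and table names are placeholder assumptions.

from google.cloud import bigquery

# Credentials come from the environment (e.g., GOOGLE_APPLICATION_CREDENTIALS).
client = bigquery.Client()

# Placeholder table; substitute a real project.dataset.table.
query = """
    SELECT region, SUM(amount) AS revenue
    FROM `my-project.sales.orders`
    GROUP BY region
    ORDER BY revenue DESC
"""
for row in client.query(query).result():
    print(row["region"], row["revenue"])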
Requirements:
Minimum of 4-6 years of experience in backend development, with strong expertise in BigQuery, Python and MongoDB or SQL.
Strong knowledge of database design, querying, and optimization with SQL and MongoDB and designing ETL and orchestration of data pipelines.
Minimum of 2 years of experience with at least one hyperscaler, ideally GCP, combined with cloud storage technologies, cloud monitoring, and cloud secret management.
Excellent communication skills to effectively collaborate with team members and stakeholders.
Nice-to-Have:
Knowledge of agile methodologies and working in cross-functional, collaborative teams.
Skills & Requirements
SQL, BigQuery, GCP, Python, MongoDB, Exasol, Snowflake, Bash scripting, Airflow, Cloud Functions, Cloud Scheduler, Cloud Secret Manager, Cloud Storage, Cloud Monitoring, ETL, Data Pipelines, Power BI, Database Optimization, Cloud-Based BI Solutions, Data Processing, Data Automation, Agile Methodologies, Cross-Functional Collaboration.



What You'll Do:
- Design, architect, and build the core AI-powered application from the ground up.
- You will join a small team with an unparalleled 'fire-in-belly' to close deliverables and a "hungry for more" attitude.
- Collaborate with visionary founders to define the product roadmap and bring innovative ideas to life.
- Leverage your expertise in Azure Cloud, full-stack development, and automation to develop scalable, secure, and high-performing solutions.
- Take ownership of the technical stack, system integrations, and deployment pipelines.
- Establish best development, testing, and deployment practices to ensure the product's success in a competitive market.
Who You Are:
- Proficient in full-stack technologies, including React.js, Node.js, Python/Django, and similar frameworks.
- Tech leader who thrives in startup environments and is fully hands-on in coding and working under minimal supervision.
- Highly results-driven. (very important)
- 8+ years of proven experience in the technology product space, including building and scaling applications.
- Strong knowledge of Cloud platforms, Azure Cloud architecture, and automation tools.
- Experienced in developing and deploying enterprise-grade applications.
- Passionate about AI and its potential to transform the enterprise landscape.
- Eager to solve complex problems and create products with tangible business impact.
Why Join Us?:
- Be part of a founding team, playing a critical role in shaping a cutting-edge AI product.
- Collaborate with industry leaders and visionaries who are passionate about innovation.
- This is an opportunity to grow alongside the company and lead a high-impact engineering team.
- Equity and ownership in the product’s success.
We want to hear from you if you’re ready to make a real difference, push boundaries, and build something extraordinary!


- Experience in Python
- Experience in a framework such as Django or Flask.
- Primary and secondary skills: Python, OOP, and data structures.
- Good understanding of REST APIs (a minimal example follows this list).
- Familiarity with event-driven programming in Python
- Good analytical and troubleshooting skills
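As a quick illustration of the REST API point above, a minimal Flask sketch (Flask is one of the frameworks the role names); the route and payload are placeholders only.

```python
# Minimal Flask REST API: list and create items held in memory.
from flask import Flask, jsonify, request

app = Flask(__name__)
ITEMS = []

@app.route("/items", methods=["GET"])
def list_items():
    return jsonify(ITEMS)

@app.route("/items", methods=["POST"])
def create_item():
    item = request.get_json()
    ITEMS.append(item)
    return jsonify(item), 201

if __name__ == "__main__":
    app.run(debug=True)
```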

Immediate Joiners Preferred. Notice Period - Immediate to 30 Days
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
About Us
adesso India is a dynamic and innovative IT Services and Consulting company based in Kochi. We are committed to delivering cutting-edge solutions that make a meaningful impact on our clients. As we continue to expand our development team, we are seeking a talented and motivated Backend Developer to join us in creating scalable and high-performance backend systems.
Job Description
We are looking for an experienced Backend and Data Developer with expertise in Java, SQL, BigQuery development working on public clouds, mainly GCP. As a Senior Data Developer, you will play a vital role in designing, building, and maintaining robust systems to support our data analytics. This position offers the opportunity to work on complex services, collaborating closely with cross-functional teams to drive successful project delivery.
Responsibilities
- Development and maintenance of data pipelines and automation scripts with Python
- Creation of data queries and optimization of database processes with SQL
- Use of bash scripts for system administration, automation and deployment processes
- Database and cloud technologies
- Managing, optimizing and querying large amounts of data in an Exasol database (prospectively Snowflake)
- Google Cloud Platform (GCP): Operation and scaling of cloud-based BI solutions, in particular:
- Composer (Airflow): Orchestration of data pipelines for ETL processes
- Cloud Functions: Development of serverless functions for data processing and automation
- Cloud Scheduler: Planning and automation of recurring cloud jobs
- Cloud Secret Manager: Secure storage and management of sensitive access data and API keys
- BigQuery: Processing, analyzing and querying large amounts of data in the cloud
- Cloud Storage: Storage and management of structured and unstructured data
- Cloud monitoring: monitoring the performance and stability of cloud-based applications
- Data visualization and reporting
- Creation of interactive dashboards and reports for the analysis and visualization of business data with Power BI
Requirements
- Minimum of 4-6 years of experience in backend development, with strong expertise in BigQuery, Python and MongoDB or SQL.
- Strong knowledge of database design, querying, and optimization with SQL and MongoDB and designing ETL and orchestration of data pipelines.
- Minimum of 2 years' experience with at least one hyperscaler, ideally GCP, combined with cloud storage technologies, cloud monitoring, and cloud secret management.
- Excellent communication skills to effectively collaborate with team members and stakeholders.
Nice-to-Have:
- Knowledge of agile methodologies and working in cross-functional, collaborative teams.



Dear Professionals!
We are Hiring a GenAI/ML Developer!
Key Skills & Qualifications
- Strong proficiency in Python, with a focus on GenAI best practices and frameworks.
- Expertise in machine learning algorithms, data modeling, and model evaluation.
- Experience with NLP techniques, computer vision, or generative AI.
- Deep knowledge of LLMs, prompt engineering, and GenAI technologies.
- Proficiency in data analysis tools like Pandas and NumPy.
- Hands-on experience with vector databases such as Weaviate or Pinecone.
- Familiarity with cloud platforms (AWS, Azure, GCP) for AI deployment.
- Strong problem-solving skills and critical-thinking abilities.
- Experience with AI model fairness, bias detection, and adversarial testing.
- Excellent communication skills to translate business needs into technical solutions.
Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, AI, or a related field.
- Experience with MLOps practices for model deployment and maintenance.
- Strong understanding of data pipelines, APIs, and cloud infrastructure.
- Advanced degree in Computer Science, Machine Learning, or a related field (preferred).

Elevate our quality assurance through automation
At Jules AI we're on a mission to revolutionize the recycled materials industry with cutting-edge technology solutions. As an Automation Engineer within our QA team, you will play a crucial role in enhancing our product development lifecycle by implementing and optimizing automated testing frameworks. Your work will ensure our software products are reliable, efficient, and meet the highest quality standards before reaching our clients.
What You Will Do
- Develop Automated Testing Frameworks: Design, build, and maintain automated testing frameworks and systems across various platforms and technologies (a brief sketch follows this list).
- Collaborate on Test Planning: Work closely with QA analysts and engineers to understand system requirements and features, translating these into detailed, scalable, and robust automated tests.
- Continuous Integration and Deployment (CI/CD): Integrate automation tests with CI/CD pipelines, ensuring continuous testing and feedback in the software development lifecycle.
- Performance and Scalability Testing: Implement automated scripts to test performance and scalability of our software products, identifying bottlenecks and optimization opportunities.
- Bug Detection and Reporting: Utilize automated tests to detect and document bugs and issues within software products, collaborating with development teams to ensure timely resolutions.
- Tool and Technology Evaluation: Stay informed on the latest trends and tools in automated testing, recommending and implementing new technologies to improve testing efficiency and effectiveness.
- Quality Metrics and Reporting: Monitor, analyze, and report on quality metrics generated from automated tests to inform quality improvements and decision-making.
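For a flavor of the automated tests described above, a minimal Playwright + pytest sketch; the URL and assertion are placeholders, and this assumes Playwright's Python package and browsers are installed.

```python
# Smoke test: open a page headlessly and assert on its title.
from playwright.sync_api import sync_playwright

def test_homepage_title():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")   # placeholder URL
        assert "Example" in page.title()   # placeholder assertion
        browser.close()
```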
Who We Are Looking For
Skills and Qualifications:
- Proven Experience: 3+ years of previous experience in QA automation or a similar role, with a strong portfolio of successful automation projects.
- Technical Proficiency: Expertise in automation tools (e.g., Selenium, Playwright, TestComplete, Appium) and programming languages (e.g., Python, Java, JavaScript).
- Understanding of QA Methodologies: Deep knowledge of QA methodologies, tools, and processes, with the ability to apply this knowledge to automation practices.
- Problem-Solving Skills: Excellent analytical and problem-solving skills, with a keen attention to detail.
- Collaboration and Communication: Strong communication skills and the ability to work collaboratively within the QA team and across departments.
- Agile Environment Experience: Experience working in an Agile/Scrum development process, with an understanding of its impact on QA and automation practices.
What we offer
Work closely with a global team helping bring automation and technological intelligence to the recycling world.
You can also expect:
- As a global company, we treasure and encourage diversity, perspective, interest, and representation through inclusivity. The more we have, the better the solution.
- Connect and work with leading minds from the recycling industry and be part of a growing, energetic global team, across time zones, regions, offices and screens.
- Exposure to developments and tools within your field ensures evolution in your career and skill building.
- We adopt a Bring Your Own Device policy and encourage flexibility and freedom in how you work, alongside competitive compensation and yearly appraisals.
- Health insurance coverage, paid vacation days and flexible work hours help you maintain a work-life balance.
- Have the opportunity to network and collaborate in a diverse community.
Are You Ready for the Challenge?
- Become a key player in our mission to transform the recycling industry through technological excellence. If you're passionate about quality assurance, automation, and making a difference, we'd love to hear from you.

Role & Responsibilities
Lead and mentor a team of data engineers, ensuring high performance and career growth.
Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
Drive the development and implementation of data governance frameworks and best practices.
Work closely with cross-functional teams to define and execute a data roadmap.
Optimize data processing workflows for performance and cost efficiency.
Ensure data security, compliance, and quality across all data platforms.
Foster a culture of innovation and technical excellence within the data team.




We’re Building the Future of Immigration Tech
We are developing a high-performance, AI-driven immigration platform that automates visa assessments and guidance for high-skilled immigrants. Our focus is on speed, accuracy, and scalability—not a flashy UI, but a powerful decision-making engine that delivers real value.
We need top-tier engineers who build for performance over aesthetics. If you love AI, automation, and disrupting old systems, this is for you.
🛠 Open Roles
1️⃣ AI/ML Engineer (Visa Assessment AI)
- Develop a cutting-edge AI model for visa eligibility assessments.
- Use NLP to process immigration laws, policies, and case precedents.
- Optimize for accuracy, efficiency, and scale (real-time processing).
2️⃣ Full-Stack Developer (Lean & Scalable Web App)
- Build a high-performance, no-frills web app (React/Next.js preferred).
- Integrate the AI model seamlessly into a secure and scalable backend (Python/Django or Node.js).
- Implement fast data retrieval for applicant evaluations.
🔍 Who We’re Looking For
✔ AI/ML Engineer: Strong experience in NLP, AI automation, and structured data processing. Experience with TensorFlow/PyTorch/OpenAI APIs is a plus.
✔ Full-Stack Developer: Expertise in React (Next.js preferred), Python/Django, or Node.js. Must prioritize performance & security.
✔ Both: You’re a problem-solver, performance-obsessed, and thrive in lean environments.
💻 Tech Stack (Recommended, Open to Suggestions)
- AI/ML: Python (FastAPI, TensorFlow, OpenAI APIs, Hugging Face NLP)
- Frontend: React, Next.js (for speed & SEO)
- Backend: Python/Django or Node.js (for performance & scalability)
- Database: PostgreSQL or Firebase

Location: Remote / Hybrid (Silicon Valley)
Job Type: Full-Time
Experience: 4+ years
About Altimate AI
At Altimate AI, we’re revolutionizing enterprise data operations with agentic AI—intelligent AI teammates that seamlessly integrate into existing workflows, helping data teams build pipelines, automate documentation, optimize infrastructure, and accelerate delivery.
Backed by top-tier investors and led by Silicon Valley veterans, we’re on a mission to automate and streamline data workflows, allowing data professionals to focus on innovation rather than repetitive tasks.
Role Overview
We are looking for an SDET (Software Development Engineer in Test) with expertise in Python, automation, data, and AI to help ensure the reliability, performance, and scalability of our AI-powered data solutions. You will work closely with engineering and data science teams to develop test automation frameworks, validate complex AI-driven data pipelines, and integrate testing into CI/CD workflows.
Key Responsibilities
✅ Develop and maintain automation frameworks for testing AI-driven data applications
✅ Design, implement, and execute automated test strategies for data pipelines and infrastructure (a toy example follows this list)
✅ Validate AI-driven insights and data transformations to ensure accuracy and reliability
✅ Integrate automated tests into CI/CD pipelines for continuous testing and deployment
✅ Collaborate with engineering and data science teams to improve software quality
✅ Identify performance bottlenecks and optimize automated testing approaches
✅ Ensure data integrity and compliance with industry best practices
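As a small illustration of automated data-pipeline testing, a toy pytest check; transform() stands in for real pipeline code, and the column names are invented.

```python
# Toy pytest check for a pipeline transform step.
import pandas as pd

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Example transform: drop null ids and derive a dollar column."""
    out = df.dropna(subset=["id"]).copy()
    out["amount_usd"] = out["amount_cents"] / 100
    return out

def test_transform_drops_nulls_and_converts():
    raw = pd.DataFrame({"id": [1, None], "amount_cents": [250, 100]})
    result = transform(raw)
    assert result["id"].notna().all()              # no null ids survive
    assert result["amount_usd"].tolist() == [2.5]  # cents converted to dollars
```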
Required Skills & Experience
- Strong Python programming skills with experience in test automation (PyTest, Selenium, or similar frameworks)
- Hands-on experience with data testing – validating ETL pipelines, SQL queries, and AI-generated outputs
- Proficiency in modern data stacks (SQL, Snowflake, dbt, Spark, Kafka, etc.)
- Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes)
- Excellent problem-solving and analytical skills
- Strong communication skills to work effectively with cross-functional teams
Nice-to-Have (Bonus Points)
⭐ Prior experience in a fast-paced startup environment
⭐ Knowledge of machine learning model validation and AI-driven testing approaches
⭐ Experience with performance testing and security testing for AI applications
Why Join Altimate AI?
? Cutting-Edge AI & Automation – Work with next-gen AI-driven data automation technologies
? High-Impact Role – Be part of an early-stage, fast-growing startup shaping the future of enterprise AI
? Competitive Salary + Equity – Own a meaningful stake in a company with massive potential
? Collaborative Culture – Work with top-tier engineers, AI researchers, and data experts
⚡ Opportunity for Growth – Play a key role in scaling AI-powered data operations

Are you passionate about building scalable AI-driven systems and leveraging technologies like RAG, prompt engineering, and multi-agentic architectures? Do you have a strong foundation in Python and FastAPI, with experience in integrating AI solutions using the CrewAI framework and Weaviate DB? If so, Pullse is the place for you!
We are looking for an AI Developer with at least one year of experience in AI-driven solutions to join our team. This role involves designing, developing, and optimizing AI-powered backend services using Python, FastAPI, and integrating AI capabilities for advanced tasks.
About Pullse
Pullse is a cutting-edge SaaS startup on a mission to revolutionize customer support with AI-driven solutions. Our platform centralizes support channels, streamlines workflows, and enhances customer experiences with automation. We believe in empowering our team with freedom, transparency, and the opportunity to make a meaningful impact.
The Role
As an AI Developer, you will play a crucial role in designing, developing, and optimizing our AI-powered backend services, ensuring high performance, scalability, and reliability. Your primary responsibilities will include:
- API Development: Design, build, and maintain scalable APIs using Python and FastAPI (a minimal sketch follows this list).
- AI Integration: Integrate AI technologies like RAG (Retrieval-Augmented Generation) and CrewAI for advanced data processing and prompt engineering to enhance AI model interactions.
- Multi-Agent Systems: Develop and implement multi-agentic architectures to simulate complex interactions and decision-making processes.
- Vector Databases: Work with Weaviate DB to store and query dense vector representations for efficient similarity searches and AI model outputs.
- Real-Time Functionality: Implement real-time updates using WebSockets or similar technologies.
- Database Management: Develop and optimize database schemas and queries using PostgreSQL.
- Collaboration: Collaborate with cross-functional teams to design and implement new features.
- Code Quality: Ensure high code quality, security, and performance optimization.
- Testing: Participate in code reviews and contribute to improving engineering processes.
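To ground the API responsibilities above, a minimal FastAPI sketch with one REST route and one WebSocket route; the paths and payloads are illustrative only.

```python
# Minimal FastAPI app with a health endpoint and a WebSocket route.
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.websocket("/ws/updates")
async def updates(ws: WebSocket):
    await ws.accept()
    # A real service would push ticket or agent events here.
    await ws.send_json({"event": "connected"})
    await ws.close()
```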
Our Tech Stack
- Backend: Python, FastAPI
- Database: PostgreSQL
- AI Focus: RAG, Prompt Engineering, Multi-Agent Systems, Weaviate DB, CrewAI
- Real-Time Communication: WebSockets
Who You Are
- Experience: At least one year of experience in backend development with Python and FastAPI.
- AI Expertise: Strong experience with AI technologies, including RAG, prompt engineering, and multi-agentic architectures. Familiarity with the CrewAI framework and Weaviate DB is a plus.
- Database Skills: Familiarity with PostgreSQL and database design principles.
- Problem-Solver: Analytical thinker with a knack for solving complex challenges.
- Team Player: Excellent communication skills and a collaborative mindset.
- Learner: Passion for learning and staying updated with the latest AI technologies.
What We Offer
- Competitive Salary: Up to INR 10 LPA.
- Equity: Additional equity options to share in our growth journey.
- Growth Opportunities: Be part of an early-stage startup where your work directly impacts the product and company.
- Flexibility: Work remotely with a supportive and dynamic team.
- AI Focus: Opportunity to work on cutting-edge AI projects and contribute to the future of customer support.
Join us in redefining customer support with AI! 🚀



Key Responsibilities:
- Develop and maintain both front-end and back-end components of web applications.
- Collaborate with product managers, designers, and other developers to build user-friendly features.
- Write clean, maintainable, and efficient code that adheres to coding standards and best practices.
- Build reusable code and libraries for future use.
- Optimize applications for maximum speed and scalability.
- Implement responsive design to ensure consistent user experience across all devices.
- Work with databases (SQL/NoSQL) and integrate with third-party services and APIs.
- Troubleshoot, debug, and optimize application performance.
- Participate in code reviews, ensuring code quality and consistency across the team.
- Stay updated on the latest industry trends and best practices in full-stack development.
- Contribute to an agile development process, attending sprints, standups, and retrospectives.
Required Skills and Qualifications:
- Proven experience as a Full Stack Developer or similar role.
- Proficiency in front-end technologies such as HTML, CSS, JavaScript, and modern frameworks (React.js, Next.js, Angular, or Vue.js).
- Strong experience in back-end technologies such as Node.js, Python, Ruby, Java, or PHP.
- Familiarity with database technologies (e.g., MySQL, PostgreSQL, MongoDB).
- Experience with version control systems, particularly Git.
- Knowledge of RESTful API design and integration.
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud is a plus.
- Experience in microservices based architecture is a plus.
- Strong problem-solving skills and attention to detail.
- Ability to work independently as well as part of a team.
- Excellent communication skills, both verbal and written.
Interested candidates are requested to email their resumes with the subject line "Application for [Job Title]".
Only applications received via email will be reviewed. Applications through other channels will not be considered.
Overview
Adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Comprised of a team of industry experts and experienced technology professionals, we ensure that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging our extensive knowledge and skills, we empower businesses to achieve their objectives efficiently and effectively.
Job description
The client’s department DPS, Digital People Solutions, offers a sophisticated portfolio of IT applications, providing a strong foundation for professional and efficient People & Organization (P&O) and Business Management, both globally and locally, for a well-known German company listed on the DAX-40 index, which includes the 40 largest and most liquid companies on the Frankfurt Stock Exchange.
We are seeking talented Java Application Developers to join our dynamic DPS team. In this role, you will design and implement change requests for existing applications or develop new projects using Jakarta EE (Java Enterprise Technologies) and Angular for the frontend. Your responsibilities will include end-to-end process mapping within the HR application landscape, analyzing developed functionalities, and addressing potential issues.
As part of our dynamic international cross-functional team you will be responsible for the design, development and deployment of modern high quality software solutions and applications as an experienced and skilled Full-stack developer.
Responsibilities:
Design, develop, and maintain the application.
Write clean, efficient, and reusable code.
Implement new features and functionality based on business requirements.
Participate in system and application architecture discussions.
Create technical designs and specifications for new features or enhancements.
Write and execute unit tests to ensure code quality.
Debug and resolve technical issues and software defects.
Conduct code reviews to ensure adherence to best practices.
Identify and fix vulnerabilities to ensure application integrity.
Working with other developers to ensure seamless integration of backend and frontend elements.
Collaborating with DevOps teams for deployment and scaling.
Requirements:
Bachelor’s degree in computer science or information technology, or a related field.
Proven experience as a skilled Full-stack developer. Experience in Utilities / Energy domain is appreciated.
Strong experience with Java (Spring Boot), AWS / Azure or GCP, GitLab, and Angular and/or React. Additional technologies like Python, Go, Kotlin, Rust, or similar are welcome.
Excellent problem-solving and debugging skills.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Java, Spring Boot, Jakarta EE, Angular, React, AWS, Azure, GCP, GitLab, Python, Go, Kotlin, Rust, Full-stack Development, Unit Testing, Debugging, Code Review, DevOps, Software Architecture, Microservices, HR Applications, Cloud Computing, Frontend Development, Backend Development, System Integration, Technical Design, Deployment, Problem-Solving, Communication, Collaboration.


Role: Data Scientist
Location: Bangalore (Remote)
Experience: 3+ years
Skills Required - Radiology, visual images, text, classical models, multimodal LLMs
JOB DESCRIPTION
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering. Experience with other transformer-based NLP models such as BERT will be an added advantage. (A brief fine-tuning sketch follows this list.)
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
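As a rough sketch of the fine-tuning workflow above, shown with a small encoder model for brevity (the same Trainer shape applies to larger models). It assumes the transformers and datasets libraries are installed; the texts, labels, and output directory are invented.

```python
# Hedged sketch: fine-tune a small transformer for binary text classification.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# Tiny invented dataset, tokenized to fixed length.
data = Dataset.from_dict({
    "text": ["great treatment response", "severe adverse event"],
    "label": [1, 0],
}).map(lambda x: tok(x["text"], truncation=True, padding="max_length",
                     max_length=32), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()  # a real project adds eval sets, metrics, and checkpoints
```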
Qualifications Required
• Doctoral or master’s degree in computer science, Data Science, Artificial Intelligence, or related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment
Job Title: Big Data Engineer (Java Spark Developer – JAVA SPARK EXP IS MUST)
Location: Chennai, Hyderabad, Pune, Bangalore (Bengaluru) / NCR Delhi
Client: Premium Tier 1 Company
Payroll: Direct Client
Employment Type: Full time / Perm
Experience: 7+ years
Job Description:
We are looking for a skilled Big Data Engineer with strong Java Spark experience and 7+ years in Big Data / legacy platforms, who can join immediately. The desired candidate has experience designing, developing, and optimizing real-time and batch data pipelines in enterprise-scale Big Data environments. You will work on building scalable, high-performance data processing solutions, integrating real-time data streams, and building reliable data platforms. Strong troubleshooting, performance tuning, and collaboration skills are key for this role.
Key Responsibilities:
· Develop data pipelines using Java Spark and Kafka (a PySpark sketch of the same pattern follows this list).
· Optimize and maintain real-time data pipelines and messaging systems.
· Collaborate with cross-functional teams to deliver scalable data solutions.
· Troubleshoot and resolve issues in Java Spark and Kafka applications.
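The role asks for Java Spark; purely as an illustration of the Kafka-to-Spark streaming shape, here is the equivalent in PySpark (this page's language). The broker and topic names are placeholders, and running it requires the Spark Kafka connector package.

```python
# Kafka -> Spark structured streaming, sketched in PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-pipeline").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
)

# Kafka values arrive as bytes; cast to string before downstream parsing.
query = (
    events.selectExpr("CAST(value AS STRING) AS raw")
    .writeStream.format("console")
    .start()
)
query.awaitTermination()
```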
Qualifications:
· Experience in Java Spark is a must
· Knowledge and hands-on experience using distributed computing, real-time data streaming, and big data technologies
· Strong problem-solving and performance optimization skills
· Looking for immediate joiners
If interested, please share your resume along with the following details
1) Notice Period
2) Current CTC
3) Expected CTC
4) Have Experience in Java Spark - Y / N (this is must)
5) Any offers in hand
Thanks & Regards,
LION & ELEPHANTS CONSULTANCY PVT LTD TEAM
SINGAPORE | INDIA



Job Description
The Opportunity
The Springboard engineering team is looking for software engineers with strong backend & frontend technical expertise. In this role, you would be responsible for building exciting features aimed at improving our student experience and expanding our student base, using the latest technologies like GenAI, as relevant. You would also contribute to making our platform more robust, flexible and scalable. This is a great opportunity to create a meaningful impact as well as grow in your career.
We are looking for engineers with different levels of experience and expertise. Depending on your proficiency levels, you will join our team as a Software Engineer II, Senior Software Engineer or Lead Software Engineer.
Responsibilities
- Design and develop features for the Springboard platform, which enriches the learning experience of thousands through human guided learning at scale
- Own quality and reliability of the product by getting hands on with code and design reviews, debugging complex issues and so on
- Contribute to the platform architecture through redesign of complex features based on evolving business needs
- Influence and establish best engineering practices through solid design decisions, processes and tools
- Provide technical mentoring to team members
You
- You have experience with web application development, on both, backend and frontend.
- You have a solid understanding of software design principles and best practices.
- You have hands-on experience in:
- Coding and debugging complex systems, with frontend integration.
- Code review, responsible for production deployments.
- Building scalable and fault-tolerant applications.
- Re-architecting / re-designing complex systems / features (i.e. managing technical debt).
- Defining and following best practices for frontend and backend systems.
- You have excellent problem solving skills and are comfortable handling ambiguity.
- You are able to analyze various alternatives and reach optimal decisions.
- You are willing to challenge the status quo, express your opinion and drive change.
- You are able to plan reasonably complex pieces of work and can handle changing priorities, unknowns and challenges with support. You want to contribute to the platform roadmap, aligning with the organization priorities and goals.
- You enjoy mentoring others and helping them solve challenging problems.
- You have excellent written and verbal communication skills with the ability to present complex technical information in a clear and concise manner. You are able to communicate with various stakeholders to understand their requirements.
- You are a proponent of quality - building best practices, introducing new processes and improvements to make the team more efficient.
Non-negotiables
Must have
- Expertise in Backend development (Python & Django experience preferred)
- Expertise in Frontend development (AngularJS / ReactJS / VueJS experience preferred)
- Experience working with SQL databases
- Experience building multiple significant features for web applications
Good to have
- Experience with Google Cloud Platform (or any cloud platform)
- Experience working with any Learning Management System (LMS), such as Canvas
- Experience working with GenAI ecosystem, including usage of AI tools such as code completion
- Experience with CI/CD pipelines and applications deployed on Kubernetes
- Experience with refactoring (redesigning complex systems / features, breaking monolith into services)
- Experience working with NoSQL databases
- Experience with Web performance optimization, SEO, Gatsby and FE Analytics
- Delivery skills, specifically planning open ended projects
- Mentoring skills
Expectations
- Able to work with open ended problems and come up with efficient solutions
- Able to communicate effectively with business stakeholders to clarify requirements for small to medium tasks and own end to end delivery
- Able to communicate estimations, plan deviations and blockers in an efficient and timely manner to all project stakeholders

The Sr AWS/Azure/GCP Databricks Data Engineer at Koantek will use comprehensive modern data engineering techniques and methods with Advanced Analytics to support business decisions for our clients. Your goal is to support the use of data-driven insights to help our clients achieve business outcomes and objectives. You will collect, aggregate, and analyze structured/unstructured data from multiple internal and external sources and deliver patterns, insights, and trends to decision-makers. You will help design and build data pipelines, data streams, reporting tools, information dashboards, data service APIs, data generators, and other end-user information portals and insight tools. You will be a critical part of the data supply chain, ensuring that stakeholders can access and manipulate data for routine and ad hoc analysis to drive business outcomes using Advanced Analytics. You are expected to function as a productive member of a team, working and communicating proactively with engineering peers, technical leads, project managers, product owners, and resource managers.
Requirements:
- Strong experience as an AWS/Azure/GCP Data Engineer; AWS/Azure/GCP Databricks experience is a must.
- Expert proficiency in Spark, Scala, and Python.
- Must have data migration experience from on-prem to cloud.
- Hands-on experience with Kinesis to process and analyze stream data, Event/IoT Hubs, and Cosmos DB.
- In-depth understanding of Azure/AWS/GCP cloud, data lake, and analytics solutions, in particular on Azure.
- Expert-level hands-on experience designing and developing applications on Databricks.
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services.
- In-depth understanding of Spark architecture, including Spark Streaming, Spark Core, Spark SQL, DataFrames, RDD caching, and Spark MLlib.
- Hands-on experience with the technology stack available in the industry for data management, data ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Hands-on knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake.
- Good working knowledge of code versioning tools such as Git, Bitbucket, or SVN.
- Hands-on experience using Spark SQL with various data sources such as JSON, Parquet, and key-value pairs.
- Experience preparing data for Data Science and Machine Learning, with exposure to model selection, model lifecycle, hyperparameter tuning, model serving, deep learning, etc.
- Demonstrated experience preparing data and automating and building data pipelines for AI use cases (text, voice, image, IoT data, etc.).
- Good to have: programming experience with .NET or Spark/Scala.
- Experience creating tables, partitioning, bucketing, loading, and aggregating data using Spark Scala and Spark SQL/PySpark.
- Knowledge of AWS/Azure/GCP DevOps processes like CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence.
- Working experience with Visual Studio, PowerShell scripting, and ARM templates; able to build ingestion to ADLS and enable a BI layer for analytics.
- Strong understanding of data modeling and defining conceptual, logical, and physical data models.
- Big data/analytics/information analysis/database management in the cloud.
- IoT/event-driven/microservices in the cloud; experience with private and public cloud architectures, their pros/cons, and migration considerations.
- Ability to remain up to date with industry standards and technological advancements that will enhance data quality and reliability to advance strategic initiatives.
- Working knowledge of RESTful APIs, the OAuth2 authorization framework, and security best practices for API gateways.
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications.
- Guide customers on data engineering best practices, provide proofs of concept, architect solutions, and collaborate when needed.
- 2+ years of hands-on experience designing and implementing multi-tenant solutions using AWS/Azure/GCP Databricks for data governance, data pipelines for near-real-time data warehousing, and machine learning solutions.
- Overall 5+ years' experience in software development, data engineering, or data analytics using Python, PySpark, Scala, Spark, Java, or equivalent technologies, with hands-on expertise in Apache Spark (Scala or Python).
- 3+ years of experience in query tuning, performance tuning, troubleshooting, and debugging Spark and other big data solutions.
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience.
- Ability to manage competing priorities in a fast-paced environment.
- Ability to resolve issues.
- Basic experience with or knowledge of agile methodologies.
- Certifications (preferred): AWS Certified Solutions Architect Professional; Databricks Certified Associate Developer for Apache Spark; Microsoft Certified: Azure Data Engineer Associate; Google Cloud Professional certification.



What You’ll Be Doing
- 🛠 Write code for web and mobile apps, fix bugs, and work on features that people will actually use.
- 💡 Join brainstorming sessions and help shape our products.
- 🚀 Things move fast here, and you’ll learn as you go.
- 🤝 Work closely with everyone—designers, developers, and even marketing folks.
- 🔧 Diving into Our Tech Stack: React, React Native, Node, Express, Python, FastAPI, and PostgreSQL.
What We’re Looking For
We’re not looking for perfection, but if you’re curious, motivated, and excited to learn, you’ll fit right in!
For Backend Engineers
- 💻 Strong knowledge of Python, FastAPI, and PostgreSQL.
- 🔍 Solid understanding of Low-Level Design (LLD) and High-Level Design (HLD).
- ⚡ Ability to optimize APIs, manage databases efficiently, and handle real-world scaling challenges.
For Frontend Engineers
- 💻 Expertise in React Native.
- 🎯 Knowledge of native Kotlin (Android) and Swift (iOS) is a big bonus.
- 🚀 Comfortable with state management, performance optimization, and handling platform-specific quirks.
General Expectations for All Engineers
- 🛠 While you’ll be specialized in either frontend or backend, you should be good enough to fix bugs in both.
- 🔍 You enjoy figuring things out and experimenting until you get it right.
- 🤝 Great communication skills and a collaborative mindset.
- 🚀 You’re ready to dive in and make things happen.
Interview Process
If we like your application, be ready to:
- Solve a data structures and algorithms (DSA) problem in your preferred programming language.
- Answer questions about your specialized area (frontend/backend) to showcase your depth of knowledge.
- Discuss a real-world problem and how you’d debug & fix an issue in both frontend and backend
Why Join Us?
- 💡 Your work will matter here—no busy work, just real projects with real outcomes.
- 🚀 Help shape the future of our company.
- 🎉 We’re all about solving cool problems and having fun while we do it.
Job Title: Data Engineer (Python, AWS, ETL)
Experience: 6+ years
Location: PAN India (Remote / Work From Home)
Employment Type: Full-time
Preferred domain: Real Estate
Key Responsibilities:
Develop and optimize ETL workflows using Python, Pandas, and PySpark (a small sketch follows this list).
Design and implement SQL queries for data extraction, transformation, and optimization.
Work with JSON and REST APIs for data integration and automation.
Manage and optimize Amazon S3 storage, including partitioning and lifecycle policies.
Utilize AWS Athena for SQL-based querying, performance tuning, and cost optimization.
Develop and maintain AWS Lambda functions for serverless processing.
Manage databases using Amazon RDS and Amazon DynamoDB, ensuring performance and scalability.
Orchestrate workflows with AWS Step Functions for efficient automation.
Implement Infrastructure as Code (IaC) using AWS CloudFormation for resource provisioning.
Set up AWS Data Pipelines for CI/CD deployment of data workflows.
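As a compact illustration of the Python/S3 side of this workflow, a hedged sketch using boto3 and pandas; the bucket names, keys, and columns are invented, and writing Parquet this way assumes pyarrow is installed.

```python
# Small S3 -> pandas -> S3 transform, in the spirit of the ETL duties above.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

def run_etl(bucket: str = "my-raw-bucket", key: str = "sales/2024.csv") -> None:
    # Extract: read a raw CSV from S3.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Transform: keep completed rows and add a derived column.
    df = df[df["status"] == "completed"].assign(total=lambda d: d.qty * d.price)

    # Load: write Parquet back to a curated bucket for Athena to query.
    out = io.BytesIO()
    df.to_parquet(out, index=False)
    s3.put_object(Bucket="my-curated-bucket",
                  Key="sales/2024.parquet", Body=out.getvalue())
```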
Required Skills:
Programming & Scripting: Python (ETL, Automation, REST API Integration).
Databases: SQL (Athena / RDS), Query Optimization, Schema Design.
Big Data & Processing: Pandas, PySpark (Data Transformation, Aggregation).
Cloud & Storage: AWS (S3, Athena, RDS, DynamoDB, Step Functions, CloudFormation, Lambda, Data Pipelines).
Good to Have Skills:
Experience with Azure services such as Table Storage, AI Search, Cognitive Services, Functions, Service Bus, and Storage.
Qualifications:
Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field.
6+ years of experience in data engineering, ETL, and cloud-based data processing.


About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 3+ years of hands-on software development experience, particularly in ReactJS and Python at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality (a toy example follows this list)
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
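A toy example of the test-first habit above: the test is written against the desired behavior first, and fizzbuzz() is the minimal code that makes it pass (run with pytest).

```python
# Minimal implementation, written only after the test below existed.
def fizzbuzz(n: int) -> str:
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"
```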
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object Oriented Programming in JS
- 5+ years of Object-Oriented Programming with Python or equivalent
- 5+ years of experience working with relational (SQL) databases
- 5+ years of experience using Git to contribute code as part of a team of Software Craftspeople
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver high-quality standards our customers recognize us by. With asynchronous tools and push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.

Job description
“I DESIGN MY LIFE” is an Online Business Consulting/ Coaching Company headed by renowned Business Coach – Sumit Agarwal. We provide online consulting and trainings to Business Owners of SMEs, MSMEs across India.
You can find more about us here: https://idesignmylife.net/careers/
This is a hands-on position. The role will have the following aspects:
POSITION: Software Developer
LOCATION: Full-time (permanent) work-from-home opportunity
LANGUAGES: JavaScript, MySQL, Python, ERPNext, HTML, CSS, and Bootstrap
ROLE : We are looking for people who can
- Code well
- Have written complex software
- Are self-starters who can read the docs and don't need hand-holding
- Experience in Python/JavaScript/jQuery/Vue/MySQL will be a plus
- Functional knowledge of ERP will be a plus
Basic Qualifications
- BE / B.Tech - IT/ CS
- 1 / 2+ years of professional experience
- Strong C# and SQL skills
- Strong skills in React and TypeScript
- Familiarity with AWS services or experience working in other cloud computing environments.
- Experience with SQL Server and PostgreSQL.
- Experience with automated unit testing frameworks.
- Experience in designing and implementing REST APIs & micro services-based solutions.
Job Types: Full-time, Permanent
Pay: ₹336,354.85 - ₹691,451.90 per year
Schedule:
- Day shift
- Weekend availability
Education:
- Bachelor's (Required)
Experience:
- JavaScript: 1 year (Required)
- MySQL: 1 year (Required)
Work Location: Remote
Expected Start Date: 01/03/2025


We are a dynamic and innovative technology company dedicated to delivering cutting-edge solutions that empower businesses and individuals. As we continue to grow and expand our offerings, we are seeking a coding fanatic, who is interested in working on and learning new technologies.
Position - Backend developer
Job type - Freelance or on contract
Location - Remote
Roles and Responsibilities:
- Plan, create, and test REST APIs for back-end services such as authentication and authorization.
- Deploy and maintain backend systems on the cloud.
- Research and develop solutions for real-life business problems.
- Create and maintain our apps' essential business logic, ensuring correct data processing and flawless user experiences.
- Design, implement, and manage databases, including schema design, query optimization, and data integrity.
- Participate in code reviews, providing constructive input and ensuring that code quality standards are met.
- Keep abreast of industry developments and best practices to bring new solutions to our initiatives.
Required skills and experience:
Must-have skills:
- Bachelor's degree in computer programming, computer science, or a related field.
- 3+ years of experience in backend development.
- Proficient in Python, MongoDB, PostgreSQL/SQL, Django, and Flask.
- Knowledge of Nginx.
- C/C++ and Cython for creating Python modules.
- Knowledge of Redis.
- Familiarity with AI provider APIs and prompt engineering.
- Experience in working with linux based instances on the cloud.
- Strong problem solving and verbal and written communication skills.
- Ability to work independently or with a group.
Good-to-have skills:
- Experience in node.js and Java
- AWS and Google cloud knowledge.

Position: Data Scientist
Job Category: Embedded HW_SW
Job Type: Full Time
Job Location: Pune
Experience: 3 - 5 years
Notice period: 0-30 days
Must have skills: Python, Linux-Ubuntu based OS, cloud-based platforms
Education Required: Bachelor's / Master's / PhD
Bachelor's or Master's in Computer Science, Statistics, Mathematics, Data Science, or Engineering
Bachelor's with 5 years or Master's with 3 years
Mandatory Skills
- Bachelor’s or master’s in computer science, Statistics, Mathematics, Data Science, Engineering, or related field
- 3-5 years of experience as a data scientist, with a strong foundation in machine learning fundamentals (e.g., supervised and unsupervised learning, neural networks)
- Experience with Python programming language (including libraries such as NumPy, pandas, scikit-learn) is essential
- Deep hands-on experience building computer vision and anomaly detection systems, involving algorithm development in fields such as image segmentation (a toy anomaly-detection sketch follows this list)
- Some experience with open-source OCR models
- Proficiency in working with large datasets and experience with feature engineering techniques is a plus
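As a small illustration of the anomaly-detection skill above, a scikit-learn IsolationForest sketch on synthetic data; real work in this role would operate on image-derived features rather than random points.

```python
# Flag outliers in a synthetic 2-D dataset with IsolationForest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2))    # inliers around the origin
outliers = rng.uniform(6, 8, size=(5, 2))   # obvious anomalies far away
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = model.predict(X)                   # -1 = anomaly, 1 = normal
print("anomalies found:", int((labels == -1).sum()))
```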
Key Responsibilities
- Work closely with the AI team to help build complex algorithms that provide unique insights into our data using images.
- Use agile software development processes to make iterative improvements to our back-end systems.
- Stay up to date with the latest developments in machine learning and data science, exploring new techniques and tools to apply within Customer’s business context.
Optional Skills
- Experience working with cloud-based platforms (e.g., Azure, AWS, GCP)
- Knowledge of computer vision techniques and experience with libraries like OpenCV
- Excellent Communication skills, especially for explaining technical concepts to nontechnical business leaders.
- Ability to work on a dynamic, research-oriented team that has concurrent projects.
- Working knowledge of Git/version control.
- Expertise in PyTorch, TensorFlow, Keras.
- Excellent coding skills, especially in Python.
- Experience with Linux-Ubuntu based OS

Job Title: Embedded HW/SW Engineer (Python Expert)
Location: Pune, India (Hybrid work culture – 3-4 days from office)
Job Type: Full-Time
Job Category: Embedded HW/SW
Experience: 5-7 years
Job Overview:
We are looking for a highly skilled Embedded HW/SW Engineer with expertise in Python to join our team in Pune. The ideal candidate will have 5-7 years of experience in the automotive IVI (In-Vehicle Infotainment) or audio amplifier embedded domain, with a strong focus on Python programming. The role involves working on innovative automotive solutions in a hybrid work culture, with at least 3-4 days required in the office.
Key Responsibilities:
- Embedded System Development: Work on the development, integration, and testing of embedded systems for automotive applications, focusing on IVI and audio amplifier domains.
- Python Programming: Develop and maintain Python-based scripts and applications to support embedded system development, testing, and automation (an illustrative test sketch follows this list).
- Collaboration: Collaborate with cross-functional teams, including hardware engineers, firmware developers, and system architects, to design and implement robust embedded solutions.
- Debugging & Testing: Conduct debugging, unit testing, and validation of embedded systems to ensure performance, security, and reliability.
- System Optimization: Optimize embedded software and hardware systems to improve performance and efficiency in automotive environments.
- Documentation: Create and maintain technical documentation, including system specifications, test plans, and reports.
- Continuous Improvement: Stay up to date with the latest trends and technologies in embedded systems, automotive, and Python programming to continuously improve system performance and capabilities.
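Purely as an illustration of Python-driven embedded test automation, a pyserial-based check; the port, baud rate, command, and expected reply are all hypothetical.

```python
# Hardware-in-the-loop smoke test: send a command over serial, check reply.
import serial  # pyserial

def test_amplifier_responds_to_ping():
    # Port name and settings are placeholders for a real test bench.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
        port.write(b"PING\n")          # hypothetical device command
        reply = port.readline().strip()
        assert reply == b"PONG"        # hypothetical expected reply
```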
Qualifications:
- Experience: 5-7 years of experience in embedded hardware/software development, preferably in automotive IVI or audio amplifier embedded domain.
- Python Expertise: Strong hands-on experience with Python, especially in embedded development and testing automation.
- Embedded Systems: Strong understanding of embedded system design, hardware-software integration, and real-time constraints.
- Automotive Domain Knowledge: Familiarity with automotive industry standards and practices, particularly in IVI and audio amplifier systems.
- Debugging & Testing: Experience with debugging tools, testing frameworks, and quality assurance practices in embedded systems.
- Teamwork: Ability to work effectively in a collaborative environment, sharing knowledge and contributing to team success.
- Communication Skills: Strong verbal and written communication skills to clearly document work and present findings.
Preferred Skills:
- Experience with C/C++ for embedded software development.
- Familiarity with embedded Linux or RTOS environments.
- Knowledge of automotive communication protocols such as CAN and Ethernet.
- Experience in audio signal processing or related areas within the automotive industry.