50+ Python Jobs in Chennai | Python Job openings in Chennai
Apply to 50+ Python Jobs in Chennai on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.


Work Mode: Hybrid
B.Tech, BE, M.Tech, or ME candidates only (mandatory).
Must-Have Skills:
● Educational qualification: B.Tech, BE, M.Tech, or ME in any field.
● Minimum of 3 years of proven experience as a Data Engineer.
● Strong proficiency in Python and SQL.
● Experience with Databricks and with setting up and managing data pipelines and data warehouses/lakes.
● Good comprehension and critical thinking skills.
● Kindly note that the salary bracket varies with the candidate's experience:
- 4 to 6 years of experience - up to 22 LPA
- 5 to 8 years of experience - up to 30 LPA
- More than 8 years of experience - up to 40 LPA

What you’ll do
- Design, build, and maintain robust ETL/ELT pipelines for product and analytics data
- Work closely with business, product, analytics, and ML teams to define data needs
- Ensure high data quality, lineage, versioning, and observability
- Optimize performance of batch and streaming jobs
- Automate and scale ingestion, transformation, and monitoring workflows
- Document data models and key business metrics in a self-serve way
- Use AI tools to accelerate development, troubleshooting, and documentation
Must-Haves:
- 2–4 years of experience as a data engineer (product or analytics-focused preferred)
- Solid hands-on experience with Python and SQL
- Experience with data pipeline orchestration tools like Airflow or Prefect (a minimal DAG sketch follows this list)
- Understanding of data modeling, warehousing concepts, and performance optimization
- Familiarity with cloud platforms (GCP, AWS, or Azure)
- Bachelor's in Computer Science, Data Engineering, or a related field
- Strong problem-solving mindset and comfort with AI-native tooling (Copilot, GPTs)
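
Since the role leans on orchestration tools like Airflow, here is a minimal sketch of a daily ETL DAG. It assumes Airflow 2.x and hypothetical extract/transform/load callables; the DAG id and schedule are illustrative, not taken from the posting.

```python
# Minimal Airflow 2.x DAG sketch; task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # e.g., pull raw product events from an API or bucket (assumed)

def transform():
    pass  # e.g., clean and model the data (assumed)

def load():
    pass  # e.g., write results to the warehouse (assumed)

with DAG(
    dag_id="product_events_daily",  # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```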
Job Title: Automation Quality Engineer (Gen AI)
Experience: 3 to 5+ Years
Location: Bangalore / Chennai / Pune
Role Overview:
We’re hiring a Quality Engineer to lead QA efforts for AI models, applications, and infrastructure.
You'll collaborate with cross-functional teams to design test strategies, implement automation, ensure model accuracy, and maintain high product quality.
Key Responsibilities:
- Develop and maintain test strategies for AI models, APIs, and user interfaces.
- Build automation frameworks and integrate them into CI/CD pipelines (a pytest-style API test sketch follows this list).
- Validate model accuracy, robustness, and monitor model drift.
- Perform regression, performance, load, and security testing.
- Log and track issues; collaborate with developers to resolve them.
- Ensure compliance with data privacy and ethical AI standards.
- Document QA processes and testing outcomes.
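
As one way to picture the API-testing layer, here is a minimal pytest + requests sketch. The service URL, endpoint, and response fields are assumptions for illustration only.

```python
# Minimal API-test sketch with pytest + requests; all names are illustrative.
import requests

BASE_URL = "http://localhost:8000"  # assumed service under test

def test_predict_returns_ok_and_expected_schema():
    resp = requests.post(f"{BASE_URL}/v1/predict", json={"text": "hello"}, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    assert "label" in body and "confidence" in body  # assumed response contract

def test_predict_rejects_empty_input():
    resp = requests.post(f"{BASE_URL}/v1/predict", json={"text": ""}, timeout=10)
    assert resp.status_code == 400
```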
Mandatory Skills:
- Test Automation: Selenium, Playwright, or DeepEval
- Programming/Scripting: Python, JavaScript
- API Testing: Postman, REST Assured
- Cloud & DevOps: Azure, Azure Kubernetes Service (AKS), CI/CD pipelines
- Performance Testing: JMeter
- Bug Tracking: Azure DevOps
- Methodologies: Agile delivery experience
- Soft Skills: Strong communication and problem-solving abilities

Job Title: AI Engineer - NLP/LLM Data Product Engineer
Location: Chennai, TN - Hybrid
Duration: Full time
Job Summary:
About the Role:
We are growing our Data Science and Data Engineering team and are looking for an experienced AI Engineer specializing in creating GenAI LLM solutions. This position involves collaborating with clients and their teams, discovering gaps for automation using AI, designing customized AI solutions, and implementing technologies to streamline data entry processes within the healthcare sector.
Responsibilities:
· Conduct detailed consultations with clients' functional teams to understand their requirements; one use case relates to handwritten medical records.
· Analyze existing data entry workflows and propose automation opportunities.
Design:
· Design tailored AI-driven solutions for the extraction and digitization of information from handwritten medical records.
· Collaborate with clients to define project scopes and objectives.
Technology Selection:
· Evaluate and recommend AI technologies, focusing on NLP, LLMs, and machine learning.
· Ensure seamless integration with existing systems and workflows.
Prototyping and Testing:
· Develop prototypes and proof-of-concept models to demonstrate the feasibility of proposed solutions (a minimal OCR sketch follows).
· Conduct rigorous testing to ensure accuracy and reliability.
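
For a flavor of such a proof of concept, below is a minimal OCR sketch using pytesseract and Pillow. The file name is illustrative, and real handwritten medical records would need preprocessing and a handwriting-capable model rather than a plain Tesseract pass.

```python
# Minimal OCR proof-of-concept sketch; not a production pipeline.
from PIL import Image
import pytesseract

def extract_text(image_path: str) -> str:
    # Single OCR pass; a real system would add deskewing, binarization,
    # and a domain-tuned handwriting recognition model.
    return pytesseract.image_to_string(Image.open(image_path))

if __name__ == "__main__":
    print(extract_text("sample_record.png"))  # illustrative file name
```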
Implementation and Integration:
· Work closely with clients and IT teams to integrate AI solutions effectively.
· Provide technical support during the implementation phase.
Training and Documentation:
· Develop training materials for end-users and support staff.
· Create comprehensive documentation for implemented solutions.
Continuous Improvement:
· Monitor and optimize the performance of deployed solutions.
· Identify opportunities for further automation and improvement.
Qualifications:
· Advanced degree in Computer Science, Artificial Intelligence, or a related field (Master's or PhD required).
· Proven experience in developing and implementing AI solutions for data entry automation.
· Expertise in NLP, LLMs, and other machine-learning techniques.
· Strong programming skills, especially in Python.
· Familiarity with healthcare data privacy and regulatory requirements.
Additional Qualifications (great to have):
An ideal candidate will have expertise in the most current LLM/NLP models, particularly in the extraction of data from clinical reports, lab reports, and radiology reports. The ideal candidate should have a deep understanding of EMR/EHR applications and patient-related data.

We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM – 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (see the sketch after this list).
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
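
To make the Raw → Silver → Gold flow concrete, here is a minimal PySpark sketch under assumed Delta Lake paths and column names; the actual schemas and business rules would come from the project.

```python
# Minimal Raw -> Silver -> Gold promotion sketch; paths/columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Raw: data as landed, untouched
raw = spark.read.format("delta").load("/lake/raw/orders")

# Silver: cleaned and de-duplicated records
silver = (
    raw.filter(F.col("order_id").isNotNull())
       .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")

# Gold: business-level aggregate ready for analytics
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/customer_ltv")
```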
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG


Dear Candidate,
Greetings from Vinga Software Solutions!
Product Details:
Bautomate is an intelligent business process automation product: a comprehensive hyperautomation platform housed within a single software system.
Vinga Software Solutions is the parent company of the Bautomate product (https://www.vinga.biz/about/), and your role will be with Vinga Software Solutions.
About the Product:
Bautomate offers cognitive automation solutions designed to automate repetitive tasks, eliminate bottlenecks, and enable seamless workflows.
The product combines artificial intelligence, business process management, robotic process automation (RPA), and optical character recognition (OCR) to streamline and optimize various business processes. It provides a transformative solution to empower businesses of all sizes across industries to achieve unprecedented levels of productivity and success.
Unique features of Bautomate's business process automation solutions include:
Workflow Automation: Bautomate's intuitive drag-and-drop interface enables users to easily automate complex workflows, leveraging pre-built components for effective intelligent automation.
Data Integration: Seamless integration with all existing systems and applications ensures smooth data transfer and real-time information exchange, enhancing collaboration and decision-making.
Intelligent Analytics: By harnessing advanced analytics capabilities, businesses can gain valuable insights into their processes, identify areas for improvement, and make data-driven decisions. It allows organizations to optimize their operations and drive growth based on comprehensive data analysis.
Cognitive Automation: Our comprehensive solution encompasses Intelligent Document Capture utilizing OCR & NLP, Predictive Analytics for Forecasting, Computer Vision and Image Processing, Anomaly Detection, and an Intelligent Decision Engine.
Scalability and Flexibility: Bautomate platform is highly scalable, accommodating the evolving needs of businesses as they grow. It offers flexible deployment options, including on-premises and cloud-based solutions.
About Us: We are a leading provider of business process automation, helping firms streamline operations, boost efficiency, and spur growth. Our suite includes AP automation, purchase order automation, P2P automation, invoice automation, IVR testing automation, form automation, and more.
AI/ML Developer – Lead (LLM & Gen AI)
Experience Required: 5 to 9 years
Job Location: Madurai
Role Overview:
We are looking for a Senior AI/ML Developer with expertise in Large Language Models (LLM) & Generative AI. The ideal candidate should have experience in developing and deploying AI-driven solutions.
Key Responsibilities:
- Design and develop AI/ML models focusing on LLMs & Generative AI.
- Collaborate with data scientists to optimize model performance.
- Deploy AI solutions on cloud platforms (AWS, GCP, Azure).
- Lead AI projects and mentor junior developers.
Required Skills:
- Expertise in LLMs, Gen AI, NLP, and Deep Learning.
- Strong coding skills in Python, TensorFlow, PyTorch.
- Experience in ML model deployment using Docker/Kubernetes.
- Knowledge of cloud-based AI/ML services.
Could you please fill in these details:
Total work experience:
Experience in AI/ML:
Experience in LLM & Generative AI:
Experience in NLP, deep learning, Python/PyTorch/TensorFlow:
Current CTC:
Expected CTC:
Last working day:
Notice Period:
Current Location:
Native Place:
Reason for Job Change:
Marital Status:
Do you have any offer in hand:
About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Work you’ll do
As an AI Engineer at Moative, you will be at the forefront of applying cutting-edge AI to solve real-world problems. You will be instrumental in designing and developing intelligent software solutions, leveraging the power of foundation models to automate and optimize critical workflows. Collaborating closely with domain experts, data scientists, and ML engineers, you will integrate advanced ML and AI technologies into both existing and new systems. This role offers a unique opportunity to explore innovative ideas, experiment with the latest foundation models, and build impactful products that directly enhance the lives of citizens by transforming how government services are delivered. You'll be working on challenging and impactful projects that move the needle on traditionally difficult-to-automate processes.
Responsibilities
- Utilize and adapt foundation models, particularly in vision and data extraction, as the core building blocks for developing impactful products aimed at improving government service delivery. This includes prompt engineering, fine-tuning, and evaluating model performance
- Architect, build, and deploy intelligent AI agent-driven workflows that automate and optimize key processes within government service delivery. This encompasses the full lifecycle from conceptualization and design to implementation and monitoring
- Contribute directly to enhancing our model evaluation and monitoring methodologies to ensure robust and reliable system performance. Proactively identify areas for improvement and implement solutions to optimize model accuracy and efficiency
- Continuously learn and adapt to the rapidly evolving landscape of AI and foundation models, exploring new techniques and technologies to enhance our capabilities and solutions
Who you are
You are a passionate and results-oriented engineer who is driven by the potential of AI/ML to revolutionize processes, enhance products, and ultimately improve user experiences. You thrive in dynamic environments and are comfortable navigating ambiguity. You possess a strong sense of ownership and are eager to take initiative, advocating for your technical decisions while remaining open to feedback and collaboration.
You are adept at working with real-world, often imperfect data, and have a proven ability to develop, refine, and deploy AI/ML models into production in a cost-effective and scalable manner. You are excited by the prospect of directly impacting government services and making a positive difference in the lives of citizens.
Skills & Requirements
- 3+ years of experience in programming languages such as Python or Scala
- Proficient knowledge of cloud platforms (e.g., AWS, Azure, GCP) and containerization, DevOps (Docker, Kubernetes)
- Tuning and deploying foundation models, particularly for vision tasks and data extraction
- Excellent analytical and problem-solving skills with the ability to break down complex challenges into actionable steps
- Strong written and verbal communication skills, with the ability to effectively articulate technical concepts to both technical and non-technical audiences
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek experienced ML/AI professionals with strong backgrounds in computer science, software engineering, or related fields to join our Azure-focused MLOps team. If you’re passionate about deploying complex machine learning models in real-world settings, bridging the gap between research and production, and working on high-impact projects, this role is for you.
Work you’ll do
As an operations engineer, you’ll oversee the entire ML lifecycle on Azure—spanning initial proofs-of-concept to large-scale production deployments. You’ll build and maintain automated training, validation, and deployment pipelines using Azure DevOps, Azure ML, and related services, ensuring models are continuously monitored, optimized for performance, and cost-effective. By integrating MLOps practices such as MLflow and CI/CD, you’ll drive rapid iteration and experimentation. In close collaboration with senior ML engineers, data scientists, and domain experts, you’ll deliver robust, production-grade ML solutions that directly impact business outcomes.
Responsibilities
- ML-focused DevOps: Set up robust CI/CD pipelines with a strong emphasis on model versioning, automated testing, and advanced deployment strategies on Azure.
- Monitoring & Maintenance: Track and optimize the performance of deployed models through live metrics, alerts, and iterative improvements.
- Automation: Eliminate repetitive tasks around data preparation, model retraining, and inference by leveraging scripting and infrastructure as code (e.g., Terraform, ARM templates).
- Security & Reliability: Implement best practices for securing ML workflows on Azure, including identity/access management, container security, and data encryption.
- Collaboration: Work closely with the data science teams to ensure model performance is within agreed SLAs, both for training and inference.
Skills & Requirements
- 2+ years of hands-on programming experience with Python (PySpark or Scala optional).
- Solid knowledge of Azure cloud services (Azure ML, Azure DevOps, ACI/AKS).
- Practical experience with DevOps concepts: CI/CD, containerization (Docker, Kubernetes), infrastructure as code (Terraform, ARM templates).
- Fundamental understanding of MLOps: MLflow or similar frameworks for tracking and versioning (a minimal tracking sketch follows this list).
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, XGBoost) and how to operationalize them in production.
- Broad understanding of data structures and data engineering.
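
For illustration, here is a minimal MLflow tracking sketch; the experiment name, parameters, and metric value are placeholders, and the tracking backend (local files or an Azure ML workspace) is an assumption.

```python
# Minimal MLflow experiment-tracking sketch; values are placeholders.
import mlflow

mlflow.set_experiment("demand-forecast")  # illustrative experiment name

with mlflow.start_run():
    mlflow.log_param("model", "xgboost")
    mlflow.log_param("max_depth", 6)
    # ... train and evaluate the model here ...
    mlflow.log_metric("rmse", 0.42)  # placeholder metric value
```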
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundaryless.
Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.

About Moative
Moative, an Applied AI Services company, designs AI roadmaps, builds co-pilots and predictive AI solutions for companies in energy, utilities, packaging, commerce, and other primary industries. Through Moative Labs, we aspire to build micro-products and launch AI startups in vertical markets.
Our Past: We have built and sold two companies, one of which was an AI company. Our founders and leaders are Math PhDs, Ivy League University Alumni, Ex-Googlers, and successful entrepreneurs.
Role
We seek skilled and experienced data science/machine learning professionals with a strong background in at least one of mathematics, financial engineering, or electrical engineering to join our Energy & Utilities team. If you are interested in artificial intelligence, excited about solving real business problems in the energy and utilities industry, and keen to contribute to impactful projects, this role is for you!
Work you’ll do
As a data scientist in the energy and utilities industry, you will perform quantitative analysis and build mathematical models to forecast energy demand and supply and to devise strategies for efficient load balancing. You will work on models for short-term and long-term pricing, improving operational efficiency, reducing costs, and ensuring reliable power supply. You’ll work closely with cross-functional teams to deploy these models in solutions that deliver insights and answers to real-world business problems. You will also be involved in conducting experiments and building POCs and prototypes.
Responsibilities
- Develop and implement quantitative models for load forecasting, energy production and distribution optimization.
- Analyze historical data to identify and predict extreme events, and measure impact of extreme events. Enhance existing pricing and risk management frameworks.
- Develop and implement quantitative models for energy pricing and risk management. Monitor market conditions and adjust models as needed to ensure accuracy and effectiveness.
- Collaborate with engineering and operations teams to provide quantitative support for energy projects. Enhance existing energy management systems and develop new strategies for energy conservation.
- Maintain and improve quantitative tools and software used in energy management.
- Support end-to-end ML/ AI model lifecycle - from data preparation, data analysis and feature engineering to model development, validation and deployment
- Collaborate with domain experts, engineers, and stakeholders in translating business problems into data-driven solutions
- Document methodologies and results, present findings and communicate insights to non-technical audiences
Skills & Requirements
- Strong background in mathematics, econometrics, electrical engineering, or a related field.
- Experience in data analysis and quantitative modeling using programming languages such as Python or R.
- Excellent analytical and problem-solving skills.
- Strong understanding of and experience with data analysis, statistical and mathematical concepts, and ML algorithms
- Proficiency in Python and familiarity with basic Python libraries for data analysis and ML (such as NumPy, pandas, scikit-learn, NLTK); a minimal load-forecasting sketch follows these requirements.
- Strong communication skills
- Strong collaboration skills, ability to work with engineering and operations teams.
- A continuous-learning attitude and a problem-solving mindset
Good to have -
- Knowledge of energy markets, regulations, and utility operation.
- Working knowledge of cloud platforms (e.g., AWS, Azure, GCP).
- Broad understanding of data structures and data engineering.
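
As one hedged illustration of the load-forecasting work described above, the sketch below fits a simple lag-feature regression with scikit-learn. The file name, column names, and lag choices are assumptions; a real model would add richer features (weather, calendar) and proper backtesting.

```python
# Minimal load-forecasting sketch with lag features; all names are illustrative.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("hourly_load.csv", parse_dates=["timestamp"])  # assumed file
df = df.sort_values("timestamp")
df["lag_24"] = df["load_mw"].shift(24)    # same hour, previous day
df["lag_168"] = df["load_mw"].shift(168)  # same hour, previous week
df = df.dropna()

X, y = df[["lag_24", "lag_168"]], df["load_mw"]
model = LinearRegression().fit(X[:-24], y[:-24])  # hold out the last day
print("next-day forecast (first 3 hours):", model.predict(X[-24:])[:3])
```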
Working at Moative
Moative is a young company, but we believe strongly in thinking long-term, while acting with urgency. Our ethos is rooted in innovation, efficiency and high-quality outcomes. We believe the future of work is AI-augmented and boundary less. Here are some of our guiding principles:
- Think in decades. Act in hours. As an independent company, our moat is time. While our decisions are for the long-term horizon, our execution will be fast – measured in hours and days, not weeks and months.
- Own the canvas. Throw yourself in to build, fix or improve – anything that isn’t done right, irrespective of who did it. Be selfish about improving across the organization – because once the rot sets in, we waste years in surgery and recovery.
- Use data or don’t use data. Use data where you ought to but not as a ‘cover-my-back’ political tool. Be capable of making decisions with partial or limited data. Get better at intuition and pattern-matching. Whichever way you go, be mostly right about it.
- Avoid work about work. Process creeps on purpose, unless we constantly question it. We are deliberate about committing to rituals that take time away from the actual work. We truly believe that a meeting that could be an email, should be an email and you don’t need a person with the highest title to say that out loud.
- High revenue per person. We work backwards from this metric. Our default is to automate instead of hiring. We multi-skill our people to own more outcomes than hiring someone who has less to do. We don’t like squatting and hoarding that comes in the form of hiring for growth. High revenue per person comes from high quality work from everyone. We demand it.
If this role and our work is of interest to you, please apply here. We encourage you to apply even if you believe you do not meet all the requirements listed above.
That said, you should demonstrate that you are in the 90th percentile or above. This may mean that you have studied in top-notch institutions, won competitions that are intellectually demanding, built something of your own, or been rated as an outstanding performer by your current or previous employers.
The position is based out of Chennai. Our work currently involves significant in-person collaboration and we expect you to work out of our offices in Chennai.


Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and PyTorch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.
Responsibilities
- Design and implement advanced solutions utilizing Large Language Models (LLMs).
- Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
- Conduct research and stay informed about the latest developments in generative AI and LLMs.
- Develop and maintain code libraries, tools, and frameworks to support generative AI development.
- Participate in code reviews and contribute to maintaining high code quality standards.
- Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
- Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
- Possess strong analytical and problem-solving skills.
- Demonstrate excellent communication skills and the ability to work effectively in a team environment.
Primary Skills
- Generative AI: Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities (a minimal SaaS-LLM call sketch follows this list).
- Familiarity with open-source LLMs, including tools like TensorFlow/PyTorch and Hugging Face. Techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.
- Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.
- Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
- Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.
- Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
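
To ground the SaaS-LLM skills above, here is a minimal call sketch using the OpenAI Python client. It assumes an OPENAI_API_KEY environment variable; the model name, system prompt, and user message are illustrative only.

```python
# Minimal SaaS-LLM call sketch; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": "You classify support tickets by topic."},
        {"role": "user", "content": "My invoice total looks wrong this month."},
    ],
)
print(resp.choices[0].message.content)
```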


Job description
We are looking for an experienced Python developer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be responsible for writing and testing scalable code, developing back-end components, and integrating user-facing elements in collaboration with front-end developers.
Responsibilities:
- Coordinating with development teams to determine application requirements.
- Writing scalable code using Python programming language.
- Testing and debugging applications.
- Developing back-end components.
- Integrating user-facing elements using server-side logic.
- Assessing and prioritizing client feature requests.
- Integrating data storage solutions.
- Coordinating with front-end developers.
- Reprogramming existing databases to improve functionality.
- Developing digital tools to monitor online traffic.
Requirements:
- Bachelor's degree in Computer Science, Computer Engineering, or related field.
- 2-7 years of experience as a Python Developer.
- Expert knowledge of Python, the Flask framework, and FastAPI (a minimal Flask sketch follows this list).
- Solid experience with MongoDB and Elasticsearch.
- Work experience with RESTful APIs.
- A deep understanding of multi-process architecture and the threading limitations of Python.
- Ability to integrate multiple data sources into a single system.
- Familiarity with testing tools.
- Ability to collaborate on projects and work independently when required.
- Excellent troubleshooting skills.
- Good project management skills.
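
As a small illustration of the Flask + MongoDB stack this role asks for, here is a minimal REST sketch using PyMongo. The connection string, database, and route names are assumptions for demonstration.

```python
# Minimal Flask + PyMongo REST sketch; names and routes are illustrative.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
db = MongoClient("mongodb://localhost:27017")["appdb"]  # assumed local mongod

@app.route("/items", methods=["POST"])
def create_item():
    doc = request.get_json(force=True)
    result = db.items.insert_one(doc)
    return jsonify({"id": str(result.inserted_id)}), 201

@app.route("/items/<name>", methods=["GET"])
def get_item(name):
    doc = db.items.find_one({"name": name}, {"_id": 0})
    return (jsonify(doc), 200) if doc else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(debug=True)
```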
SKILLS:
- PYTHON
- MONGODB
- FLASK
- REST API DEVELOPMENT
- TWILIO
Job Type: Full-time
Pay: ₹10,000.00 - ₹30,000.00 per month
Benefits:
- Flexible schedule
- Paid time off
Schedule:
- Day shift
Supplemental Pay:
- Overtime pay
Ability to commute/relocate:
- Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Experience:
- Python: 1 year (Required)
Work Location: In person

We are seeking a highly skilled and experienced Offshore Data Engineer. The role involves designing, implementing, and testing data pipelines and products.
Qualifications & Experience:
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
5+ years of experience in data engineering, with expertise in data architecture and pipeline development.
☁️ Proven experience with GCP, BigQuery, Databricks, Airflow, Spark, DBT, and GCP Services.
Hands-on experience with ETL processes, SQL, PostgreSQL, MySQL, MongoDB, Cassandra.
Strong proficiency in Python and data modelling.
Experience in testing and validation of data pipelines.
Preferred: Experience with eCommerce systems, data visualization tools (Tableau, Looker), and cloud certifications.
If you meet the above criteria and are interested, please share your updated CV along with the following details:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Location:
Notice Period / Last Working Day (if serving notice):
⚠️ Kindly share your details only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!

DevOps Test & Build Engineer
Shift timing: General Shift
Relevant Experience required: 5+ years
Education Required: Bachelor's (minimum)
Work Mode: EIC Office (5 days)
Must-have skills: Python, Jenkins, Groovy scripting
Required Technologies
- Strong analytical abilities to analyze the effectiveness of the test and build environment and make the appropriate improvements
- Effective communication and leadership skills to collaborate with and support engineering
- Experience with managing Windows, Linux & OS X systems
- Strong understanding of CI/CD principles
- Strong coding knowledge in at least one programming language (Python, Java, Perl, or Groovy)
- Hands-on experience with Jenkins master, plugins and node management
- Working knowledge on Docker and Kubernetes (CLI)
- Proficiency with scripting languages: Bash, PowerShell, Python, or Groovy
- Familiarity with build systems: Make, CMake, Conan
- Familiar with git CLI
- Basic understanding of embedded software, C/C++ language
- Ability to quickly adapt to new technologies and complete assigned tasks within defined timelines
Preferred
- Familiarity with Artifactory (conan or docker registry)
- Knowledge of ElectricFlow
- CI/CD in GitLab or GitHub Actions
- Hands-on experience with Nagios and Grafana
- Exposure to Ansible or similar systems
- Worked on Jira integration with CI/CD
- General knowledge on AWS tools and technologies
- Fundamental understanding of embedded devices and their integration with CI/CD
- Exposure to Agile methodologies and CI/CD SDLC best practice

We are looking for a skilled Data Engineer to design, build, and maintain robust data pipelines and infrastructure. You will play a pivotal role in optimizing data flow, ensuring scalability, and enabling seamless access to structured/unstructured data across the organization. This role requires technical expertise in Python, SQL, ETL/ELT frameworks, and cloud data warehouses, along with strong collaboration skills to partner with cross-functional teams.
Company: BigThinkCode Technologies
URL:
Location: Chennai (Work from office / Hybrid)
Experience: 4 - 6 years
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to process structured and unstructured data.
- Optimize and manage SQL queries for performance and efficiency in large-scale datasets.
- Experience working with data warehouse solutions (e.g., Redshift, BigQuery, Snowflake) for analytics and reporting.
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions.
- Implementing solutions for streaming data (e.g., Apache Kafka, AWS Kinesis); prior experience here is preferred but not mandatory.
- Ensure data quality, governance, and security across pipelines and storage systems.
- Document architectures, processes, and workflows for clarity and reproducibility.
Required Technical Skills:
- Proficiency in Python for scripting, automation, and pipeline development.
- Expertise in SQL (complex queries, optimization, and database design).
- Hands-on experience with ETL/ELT tools (e.g., Apache Airflow, dbt, AWS Glue).
- Experience working with structured data (RDBMS) and semi-structured/unstructured data (JSON, Parquet, Avro); a small format-conversion sketch follows this list.
- Familiarity with cloud-based data warehouses (Redshift, BigQuery, Snowflake).
- Knowledge of version control systems (e.g., Git) and CI/CD practices.
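
As a tiny illustration of moving between these formats, the sketch below converts newline-delimited JSON to date-partitioned Parquet with pandas and pyarrow; file paths and column names are assumptions.

```python
# Minimal JSON -> partitioned Parquet sketch; paths/columns are illustrative.
import pandas as pd

df = pd.read_json("events.json", lines=True)  # assumed newline-delimited JSON
df["event_date"] = pd.to_datetime(df["ts"]).dt.date
df.to_parquet("events_parquet/", partition_cols=["event_date"])  # uses pyarrow
```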
Preferred Qualifications:
- Experience with streaming data technologies (e.g., Kafka, Kinesis, Spark Streaming).
- Exposure to cloud platforms (AWS, GCP, Azure) and their data services.
- Understanding of data modelling (dimensional, star schema) and optimization techniques.
Soft Skills:
- Team player with a collaborative mindset and ability to mentor junior engineers.
- Strong stakeholder management skills to align technical solutions with business goals.
- Excellent communication skills to explain technical concepts to non-technical audiences.
- Proactive problem-solving and adaptability in fast-paced environments.
If interested, apply / reply by sharing your updated profile to connect and discuss.
Regards

Clients located in Bangalore, Chennai & Pune

Role: Ab Initio Developer
Experience: 2.5 (mandate) - 8 years
Skills: Ab Initio Development
Location: Chennai/Bangalore/Pune
Only immediate joiners or candidates with up to 15 days' notice.
Candidates must be available for an in-person interview.
This is a long-term contract role with IBM; Arnold is the payrolling company.
JOB DESCRIPTION:
We are seeking a skilled Ab Initio Developer to join our dynamic team and contribute to the development and maintenance of critical data integration solutions. As an Ab Initio Developer, you will be responsible for designing, developing, and implementing robust and efficient data pipelines using Ab Initio's powerful ETL capabilities.
Key Responsibilities:
· Design, develop, and implement complex data integration solutions using Ab Initio's graphical interface and command-line tools.
· Analyze complex data requirements and translate them into effective Ab Initio designs.
· Develop and maintain efficient data pipelines, including data extraction, transformation, and loading processes.
· Troubleshoot and resolve technical issues related to Ab Initio jobs and data flows.
· Optimize performance and scalability of Ab Initio jobs.
· Collaborate with business analysts, data analysts, and other team members to understand data requirements and deliver solutions that meet business needs.
· Stay up-to-date with the latest Ab Initio technologies and industry best practices.
Required Skills and Experience:
· 2.5 to 8 years of hands-on experience in Ab Initio development.
· Strong understanding of Ab Initio components, including Designer, Conductor, and Monitor.
· Proficiency in Ab Initio's graphical interface and command-line tools.
· Experience in data modeling, data warehousing, and ETL concepts.
· Strong SQL skills and experience with relational databases.
· Excellent problem-solving and analytical skills.
· Ability to work independently and as part of a team.
· Strong communication and documentation skills.
Preferred Skills:
· Experience with cloud-based data integration platforms.
· Knowledge of data quality and data governance concepts.
· Experience with scripting languages (e.g., Python, Shell scripting).
· Certification in Ab Initio or related technologies.

About the Role:
We are seeking a skilled and driven Backend Developer with expertise in Python (Django/FastAPI) and Node.js (TypeScript) to join our team. The ideal candidate will have experience in database design (RDBMS and NoSQL), REST API and GraphQL development, cloud services, and AI-driven applications. You will be responsible for designing and implementing scalable backend solutions, ensuring high performance, security, and reliability.
If you’re passionate about backend development, Generative AI, and data engineering, this is the role for you!
Key Responsibilities:
Backend Development:
- Develop and maintain robust, scalable backend services using Node.js (TypeScript) and Python (Django/FastAPI).
- Build APIs with REST and GraphQL, ensuring high security and performance (a minimal FastAPI sketch follows this list).
- Implement authentication mechanisms such as OAuth2.0, SAML, JWT, MFA, and passkeys (optional).
- Research and integrate Generative AI (Gen AI) models and OpenAI APIs into backend systems.
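
To sketch how the API and authentication pieces could meet, here is a minimal FastAPI route protected by a JWT check via PyJWT. The secret, header handling, and claims are illustrative assumptions, not a production setup.

```python
# Minimal FastAPI + PyJWT sketch; secret and claims are illustrative.
import jwt
from fastapi import FastAPI, Header, HTTPException

SECRET = "change-me"  # assumed demo secret; use a real secret store in practice
app = FastAPI()

@app.get("/me")
def read_me(authorization: str = Header(...)):
    token = authorization.removeprefix("Bearer ").strip()
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="invalid token")
    return {"user": claims.get("sub")}
```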
Database & Data Engineering:
- Design and optimize schemas for both relational (PostgreSQL, YSQL) and NoSQL (DynamoDB, MongoDB) databases.
- Work with Redshift, BigQuery, and Snowflake to manage large-scale data processing.
- Develop ETL pipelines for data ingestion and transformation.
- Utilize Apache Airflow for workflow automation.
Cloud Services & Serverless Architecture:
- Work extensively with AWS Cloud services, and optionally Azure and GCP.
- Design and implement serverless architectures and event-driven systems using frameworks like AWS Lambda or equivalent on Azure/GCP.
- Configure and manage webhooks for event notifications and integrations.
- Integrate Apache Pulsar for real-time event streaming and messaging.
Programming & AI Integration:
- Apply design patterns, SOLID principles, and functional programming practices.
- Develop Python-based AI/ML solutions, leveraging Django/FastAPI for backend services.
- Manage AI/ML environments using Conda.
DevOps & Deployment:
- Utilize Docker and Kubernetes (K8s) for containerization and orchestration.
- Collaborate with DevOps teams for CI/CD pipelines and scalable deployments.
Tools & Utilities:
- Use Postman, Swagger, and cURL for API testing and documentation.
- Demonstrate strong knowledge of Unix commands for troubleshooting and development.
- Work with Git for versioning and code management.
Key Skills & Qualifications:
Must-Have:
✔ Proficiency in Python (Django/FastAPI) and Node.js (TypeScript).
✔ Experience with NestJS framework.
✔ Expertise in RDBMS and NoSQL database design and optimization.
✔ Hands-on experience with REST API and GraphQL development.
✔ Familiarity with authentication protocols such as OAuth2.0, SAML, JWT, and MFA.
✔ Strong understanding of AWS Cloud Services and Serverless Architecture.
✔ Experience with Gen AI, OpenAI APIs, and AI model integration.
✔ Hands-on knowledge of Python and Conda environments.
✔ Expertise in Redshift, BigQuery, Snowflake, and Apache Airflow for Data Engineering.
✔ Exposure to Apache Pulsar for event streaming.
Nice-to-Have:
➕ Exposure to Azure and GCP serverless frameworks.
➕ Knowledge of webhooks for event handling.
➕ Experience with passkeys as an authentication option.
Soft Skills:
✅ Problem-solving mindset with a passion for tackling complex challenges.
✅ Ability to learn and adapt to new tools, frameworks, and programming languages.
✅ Collaborative attitude and strong communication skills.
What We Offer:
💰 Competitive compensation and benefits package.
🚀 Opportunity to work with cutting-edge technologies in a fast-paced environment.
📚 A culture of learning, growth, and collaboration.
🌍 Exposure to large-scale systems, AI/ML integrations, and exciting technical challenges.


Job Title: AI/ML Intern (Future AI Rockstar)
Internship Duration: 6 months
Location: Chennai
About the Company:
F22 Labs is a startup software studio based out of Chennai. We are the rocket fuel for other startups across the world, powering them with extremely high-quality software. We help entrepreneurs build their vision into beautiful software products (web/mobile). If you're into creating beautiful software and solving real problems, you’ll fit right in with us. Let’s make cool things happen!
Position Overview:
Ready to level up your AI/ML skills? As an AI/ML Intern at F22 Labs, you’ll be working with a team of super-talented engineers and data scientists, diving into the world of machine learning, training models, and solving cool problems for our startup clients. Get ready to put your learning into action and help create AI/ML solutions that actually matter. If you’re eager to get your hands dirty in real-world AI projects, this is your chance to shine!
Key Responsibilities:
- Help build, train, and test machine learning models that power real-world applications.
- Tame massive datasets and transform them into something beautiful (and useful).
- Collaborate with our talented team to integrate your ML models into software products.
- Dive into data preprocessing, feature engineering, and all the magical stuff that makes models work.
- Stay on top of the latest AI/ML trends and bring those new ideas to life.
- Write clean, efficient code to process, analyze, and improve data (your code will be as smooth as butter).
- Document your work like a pro—because we’re all about keeping things organized and sharable!
- Jump into brainstorming sessions and contribute fresh, wild ideas to our AI/ML efforts.
Qualifications and Requirements:
- Basic understanding of machine learning algorithms and concepts (bonus points if you’re already working with them!).
- Proficiency in Python
- Familiarity with AI/ML libraries and frameworks like TensorFlow, PyTorch, or Scikit-learn (if you’re into something else, we’re cool with that too).
- A good understanding of data cleaning, preprocessing, and feature engineering (or a strong desire to learn).
- Sharp problem-solving and analytical skills (you can see patterns in chaos).
- Strong communication and teamwork skills (because teamwork makes the dream work).
- A thirst for knowledge and a passion for learning new things!
Why Join Us (Perks & Benefits):
- Flexible work timings.
- Job offer based on performance
- A supercharged learning culture—you’ll grow faster here than you thought possible.
- Rapid career growth opportunities (move up the ladder as quickly as your skills will take you).
- Work with a fun and interesting team who makes every day a little more exciting.
- Learn from the best in the industry (and maybe even teach us something new along the way!).
Selection Process:
- 1-2 rounds of interviews (come prepared to wow us!).
If you’re looking to work in a dynamic, fast-growing startup and want to make an impact on the software products of tomorrow, we’d love to have you onboard! Apply today!
6+ years of experience with deployment and management of Kubernetes clusters in production environments as a DevOps engineer.
• Expertise in Kubernetes fundamentals like nodes, pods, services, and deployments, and their interactions with the underlying infrastructure (a small client sketch follows this list).
• Hands-on experience with containerization technologies such as Docker or RKT to package applications for use in a distributed system managed by Kubernetes.
• Knowledge of software development cycle including coding best practices such as CI/CD pipelines and version control systems for managing code changes within a team environment.
• Deep understanding of cloud computing and of the operational processes needed when setting up workloads on these platforms.
• Experience with Agile software development and knowledge of best practices for agile Scrum teams.
• Proficient with Git version control.
• Experience working with Linux and cloud compute platforms.
• Excellent problem-solving skills and ability to troubleshoot complex issues in distributed systems.
• Excellent communication & interpersonal skills, effective problem-solving skills and logical thinking ability and strong commitment to professional and client service excellence.
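
For a concrete taste of the Kubernetes work, here is a minimal sketch using the official Kubernetes Python client to list pods; it assumes a reachable cluster and a local kubeconfig, and the namespace is illustrative.

```python
# Minimal pod-listing sketch with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)
```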


Responsibilities
- Develop and maintain robust APIs to support various applications and services.
- Design and implement scalable solutions using AWS cloud services.
- Utilize Python frameworks such as Flask and Django to build efficient and high-performance applications.
- Collaborate with cross-functional teams to gather and analyze requirements for new features and enhancements.
- Ensure the security and integrity of applications by implementing best practices and security measures.
- Optimize application performance and troubleshoot issues to ensure smooth operation.
- Provide technical guidance and mentorship to junior team members.
- Conduct code reviews to ensure adherence to coding standards and best practices.
- Participate in agile development processes, including sprint planning, daily stand-ups, and retrospectives.
- Develop and maintain documentation for code, processes, and procedures.
- Stay updated with the latest industry trends and technologies to continuously improve skills and knowledge.
- Contribute to the overall success of the company by delivering high-quality software solutions that meet business needs.
- Foster a collaborative and inclusive work environment that promotes innovation and continuous improvement.
Qualifications
- Possess strong expertise in developing and maintaining APIs.
- Demonstrate proficiency in AWS cloud services and their application in scalable solutions.
- Have extensive experience with Python frameworks such as Flask and Django.
- Exhibit strong analytical and problem-solving skills to address complex technical challenges.
- Show ability to collaborate effectively with cross-functional teams and stakeholders.
- Display excellent communication skills to convey technical concepts clearly.
- A background in the Consumer Lending domain is a plus.
- Demonstrate commitment to continuous learning and staying updated with industry trends.
- Possess a strong understanding of agile development methodologies.
- Show experience in mentoring and guiding junior team members.
- Exhibit attention to detail and a commitment to delivering high-quality software solutions.
- Demonstrate ability to work effectively in a hybrid work model.
- Show a proactive approach to identifying and addressing potential issues before they become problems.
Dear Candidate,
We are urgently Hiring AWS Cloud Engineer for Bangalore Location.
Position: AWS Cloud Engineer
Location: Bangalore
Experience: 8-11 yrs
Skills: AWS Cloud
Salary: Best in Industry (20-25% Hike on the current ctc)
Note:
Only immediate joiners or candidates with up to 15 days' notice will be preferred.
Only candidates from Tier 1 companies will be shortlisted and selected.
Candidates with a notice period of more than 30 days will be rejected during screening.
Offer shoppers will be rejected.
Job description:
Description:
Title: AWS Cloud Engineer
Prefer BLR / HYD – else any location is fine
Work Mode: Hybrid – based on HR rule (currently 1 day per month)
Shift Timings: 24x7 (work in shifts on a rotational basis)
Total Experience: 8+ years, of which 5 years of relevant experience is required.
Must have- AWS platform, Terraform, Redshift / Snowflake, Python / Shell Scripting
Experience and Skills Requirements:
Experience:
8 years of experience in a technical role working with AWS
Mandatory
Technical troubleshooting and problem solving
AWS management of large-scale IaaS/PaaS solutions
Cloud networking and security fundamentals
Experience using containerization in AWS
Working data warehouse knowledge (Redshift and Snowflake preferred)
Working with IaC – Terraform and CloudFormation
Working understanding of scripting languages, including Python and Shell
Collaboration and communication skills
Highly adaptable to changes in a technical environment
Optional
Experience using monitoring and observability toolsets, incl. Splunk and Datadog
Experience using GitHub Actions
Experience using AWS RDS/SQL-based solutions
Experience working with streaming technologies, incl. Kafka and Apache Flink
Experience working with ETL environments
Experience working with the Confluent Cloud platform
Certifications:
Minimum
AWS Certified SysOps Administrator – Associate
AWS Certified DevOps Engineer - Professional
Preferred
AWS Certified Solutions Architect – Associate
Responsibilities:
Responsible for technical delivery of managed services across the NTT Data customer account base, working as part of a team providing a Shared Managed Service.
The following is a list of expected responsibilities:
To manage and support a customer’s AWS platform
To be technical hands on
Provide Incident and Problem management on the AWS IaaS and PaaS Platform
Involvement in the resolution of high-priority incidents and problems in an efficient and timely manner
Actively monitor an AWS platform for technical issues
To be involved in the resolution of technical incident tickets
Assist in the root cause analysis of incidents
Assist with improving efficiency and processes within the team
Examining traces and logs
Working with third party suppliers and AWS to jointly resolve incidents
Good to have:
Confluent Cloud
Snowflake
Best Regards,
Minakshi Soni
Executive - Talent Acquisition (L2)
Rigel Networks
Worldwide Locations: USA | HK | IN


Cloud Architect
Role: Software Engineer/Senior Software Engineer
Openings: 2
YOE: 5+ years | 8+ years
Notice Period: Immediate to 60 days
Mandatory skills: Python/Go development with strong cloud engineering expertise in AWS/Azure
Responsibilities:
- Design, architect, and implement cloud solutions on AWS and Azure platforms.
- Collaborate with stakeholders to gather requirements and develop cloud strategies aligned with business objectives.
- Lead the technical design and implementation of cloud infrastructure, ensuring scalability, security, and performance.
- Provide guidance and best practices to development teams for cloud-native application development.
- Evaluate and recommend cloud services, tools, and technologies to optimize cloud infrastructure.
- Develop automation scripts and templates for provisioning and managing cloud resources (a small boto3 sketch follows this list).
- Conduct performance tuning, troubleshooting, and optimization of cloud environments.
- Stay current with industry trends and emerging technologies in cloud computing.
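
As one hedged example of such an automation script, the sketch below uses boto3 to tag EC2 instances missing an Owner tag; credentials are assumed to be configured, and the region and tag values are illustrative.

```python
# Minimal boto3 automation sketch; region and tag values are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        tags = {t["Key"] for t in inst.get("Tags", [])}
        if "Owner" not in tags:
            ec2.create_tags(
                Resources=[inst["InstanceId"]],
                Tags=[{"Key": "Owner", "Value": "platform-team"}],
            )
            print("tagged", inst["InstanceId"])
```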
Requirements:
- Bachelor's degree in Computer Science, Engineering, or related field. Master's degree preferred.
- 5+ years of experience in cloud architecture and design.
- Proficiency in AWS and Azure cloud platforms, including but not limited to EC2, S3, Lambda, VPC, IAM, Azure VM, Azure Storage, Azure Functions, etc.
- Strong understanding of cloud-native architectures, microservices, containers, and serverless computing.
- Experience with infrastructure-as-code tools such as Terraform, CloudFormation, ARM templates, etc.
- Hands-on experience with DevOps practices and tools, including CI/CD pipelines, Docker, Kubernetes, etc.
- Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, or equivalent are a plus.
We Build Responsible AI Solutions. We are:
- A Software/Product Development Organization delivering next-generation technologies.
- Great Place To Work (GPTW) Certified, fostering a culture of trust and collaboration.
- Recognized as ET's Future Ready Organization, driving innovation responsibly.
Why join MulticoreWare Inc?
● Shape the Future of AI: At MulticoreWare Inc., you’ll work on cutting-edge technologies like computer vision, natural language processing, and generative AI, redefining industries and advancing humanity's digital future.
● Be Part of a Visionary Team: Collaborate with some of the brightest minds in AI and software engineering, solving complex challenges and driving innovation in high-performance computing and AI-powered solutions.
● Accelerate Your Growth: Experience unparalleled career development with exposure to groundbreaking projects, advanced tools, and a culture that values continuous learning and professional excellence.
● Make an Impact Globally: Work on products and solutions that influence industries like automotive, media, and healthcare, delivering responsible AI-driven innovation worldwide.
● Innovate with Purpose: Join a team where your passion for technology meets a mission-driven approach, creating solutions that blend innovation, efficiency, and sustainability.

CoinFantasy is looking for an experienced Senior AI Architect to lead both the decentralised protocol development and the design of AI-driven applications on this network. As a visionary in AI and distributed computing, you will play a central role in shaping the protocol’s technical direction, enabling efficient task distribution, and scaling AI use cases across a heterogeneous, decentralised infrastructure.
Job Responsibilities
- Architect and oversee the protocol’s development, focusing on dynamic node orchestration, layer-wise model sharding, and secure, P2P network communication.
- Drive the end-to-end creation of AI applications, ensuring they are optimised for decentralised deployment and include use cases with autonomous agent workflows.
- Architect AI systems capable of running on decentralised networks, ensuring they balance speed, scalability, and resource usage.
- Design data pipelines and governance strategies for securely handling large-scale, decentralised datasets.
- Implement and refine strategies for swarm intelligence-based task distribution and resource allocation across nodes.
- Identify and incorporate trends in decentralised AI, such as federated learning and swarm intelligence, relevant to various industry applications.
- Lead cross-functional teams in delivering full-precision computing and building a secure, robust decentralised network.
- Represent the organisation’s technical direction, serving as the face of the company at industry events and client meetings.
Requirements
- Bachelor’s/Master’s/Ph.D. in Computer Science, AI, or related field.
- 12+ years of experience in AI/ML, with a track record of building distributed systems and AI solutions at scale.
- Strong proficiency in Python, Golang, and machine learning frameworks (e.g., TensorFlow, PyTorch).
- Expertise in decentralised architecture, P2P networking, and heterogeneous computing environments.
- Excellent leadership skills, with experience in cross-functional team management and strategic decision-making.
- Strong communication skills, adept at presenting complex technical solutions to diverse audiences.
About Us
CoinFantasy is a Play to Invest platform that brings the world of investment to users through engaging games. With multiple categories of games, it aims to make investing fun, intuitive, and enjoyable for users. It features a sandbox environment in which users are exposed to the end-to-end investment journey without risking financial losses.
Building on this foundation, we are now developing a groundbreaking decentralised protocol that will transform the AI landscape.
Website:
Benefits
- Competitive Salary
- An opportunity to be part of the Core team in a fast-growing company
- A fulfilling, challenging and flexible work experience
- Practically unlimited professional and career growth opportunities
About koolio.ai
Website: www.koolio.ai
Koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Internship Position
We are looking for a motivated Backend Development Intern to join our innovative team. As an intern at koolio.ai, you’ll have the opportunity to work on a next-gen AI-powered platform and gain hands-on experience developing and optimizing backend systems that power our platform. This internship is ideal for students or recent graduates who are passionate about backend technologies and eager to learn in a dynamic, fast-paced startup environment.
Key Responsibilities:
- Assist in the development and maintenance of backend systems and APIs (see the sketch after this list).
- Write reusable, testable, and efficient code to support scalable web applications.
- Work with cloud services and server-side technologies to manage data and optimize performance.
- Troubleshoot and debug existing backend systems, ensuring reliability and performance.
- Collaborate with cross-functional teams to integrate frontend features with backend logic.
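For context, a minimal sketch of a backend API of the sort described above, written with Flask; the resource name and fields are illustrative assumptions:

from flask import Flask, jsonify, request

app = Flask(__name__)
projects = {}  # in-memory store, for illustration only

@app.route("/projects", methods=["POST"])
def create_project():
    data = request.get_json(force=True)
    project_id = len(projects) + 1
    projects[project_id] = {"id": project_id, "name": data.get("name")}
    return jsonify(projects[project_id]), 201

@app.route("/projects/<int:project_id>", methods=["GET"])
def get_project(project_id):
    project = projects.get(project_id)
    if project is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(project), 200

if __name__ == "__main__":
    app.run(debug=True)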
Requirements and Skills:
- Education: Currently pursuing or recently completed a degree in Computer Science, Engineering, or a related field.
- Technical Skills:
- Good understanding of server-side technologies like Python
- Familiarity with REST APIs and database systems (e.g., MySQL, PostgreSQL, or NoSQL databases).
- Exposure to cloud platforms like AWS, Google Cloud, or Azure is a plus.
- Knowledge of version control systems such as Git.
- Soft Skills:
- Eagerness to learn and adapt in a fast-paced environment.
- Strong problem-solving and critical-thinking skills.
- Effective communication and teamwork capabilities.
- Other Skills: Familiarity with CI/CD pipelines and basic knowledge of containerization (e.g., Docker) is a bonus.
Why Join Us?
- Gain real-world experience working on a cutting-edge platform.
- Work alongside a talented and passionate team committed to innovation.
- Receive mentorship and guidance from industry experts.
- Opportunity to transition to a full-time role based on performance and company needs.
This internship is an excellent opportunity to kickstart your career in backend development, build critical skills, and contribute to a product that has a real-world impact.



About koolio.ai
Website: www.koolio.ai
koolio Inc. is a cutting-edge Silicon Valley startup dedicated to transforming how stories are told through audio. Our mission is to democratize audio content creation by empowering individuals and businesses to effortlessly produce high-quality, professional-grade content. Leveraging AI and intuitive web-based tools, koolio.ai enables creators to craft, edit, and distribute audio content—from storytelling to educational materials, brand marketing, and beyond—easily. We are passionate about helping people and organizations share their voices, fostering creativity, collaboration, and engaging storytelling for a wide range of use cases.
About the Full-Time Position
We are seeking experienced Full Stack Developers to join our innovative team on a full-time, hybrid basis. As part of koolio.ai, you will work on a next-gen AI-powered platform, shaping the future of audio content creation. You’ll collaborate with cross-functional teams to deliver scalable, high-performance web applications, handling client- and server-side development. This role offers a unique opportunity to contribute to a rapidly growing platform with a global reach and thrive in a fast-moving, self-learning startup environment where adaptability and innovation are key.
Key Responsibilities:
- Collaborate with teams to implement new features, improve current systems, and troubleshoot issues as we scale
- Design and build efficient, secure, and modular client-side and server-side architecture
- Develop high-performance web applications with reusable and maintainable code
- Work with audio/video processing libraries for JavaScript to enhance multimedia content creation
- Integrate RESTful APIs with Google Cloud Services to build robust cloud-based applications
- Develop and optimize Cloud Functions to meet specific project requirements and enhance overall platform performance (see the sketch after this list)
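As a point of reference, a hedged sketch of a Python HTTP Cloud Function using the Functions Framework; the endpoint purpose and payload shape are assumptions, not koolio.ai internals:

import functions_framework

@functions_framework.http
def process_audio(request):
    """Hypothetical endpoint that acknowledges an audio-processing job."""
    payload = request.get_json(silent=True) or {}
    clip_id = payload.get("clip_id")
    if clip_id is None:
        return {"error": "clip_id is required"}, 400
    # Real work (e.g., enqueueing a processing task) would happen here.
    return {"status": "queued", "clip_id": clip_id}, 200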
Requirements and Skills:
- Education: Degree in Computer Science or a related field
- Work Experience: Minimum of 6+ years of proven experience as a Full Stack Developer or similar role, with demonstrable expertise in building web applications at scale
- Technical Skills:
- Proficiency in front-end languages such as HTML, CSS, JavaScript, jQuery, and ReactJS
- Strong experience with server-side technologies, particularly REST APIs, Python, Google Cloud Functions, and Google Cloud services
- Familiarity with NoSQL and PostgreSQL databases
- Experience working with audio/video processing libraries is a strong plus
- Soft Skills:
- Strong problem-solving skills and the ability to think critically about issues and solutions
- Excellent collaboration and communication skills, with the ability to work effectively in a remote, diverse, and distributed team environment
- Proactive, self-motivated, and able to work independently, balancing multiple tasks with minimal supervision
- Keen attention to detail and a passion for delivering high-quality, scalable solutions
- Other Skills: Familiarity with GitHub, CI/CD pipelines, and best practices in version control and continuous deployment
Compensation and Benefits:
- Total Yearly Compensation: ₹25 LPA based on skills and experience
- Health Insurance: Comprehensive health coverage provided by the company
- ESOPs: An opportunity for wealth creation and to grow alongside a fantastic team
Why Join Us?
- Be a part of a passionate and visionary team at the forefront of audio content creation
- Work on an exciting, evolving product that is reshaping the way audio content is created and consumed
- Thrive in a fast-moving, self-learning startup environment that values innovation, adaptability, and continuous improvement
- Enjoy the flexibility of a full-time hybrid position with opportunities to grow professionally and expand your skills
- Collaborate with talented professionals from around the world, contributing to a product that has a real-world impact


Responsibilities:
• Analyze and understand business requirements and translate them into efficient, scalable business logic.
• Develop, test, and maintain software that meets new requirements and integrates well with existing systems.
• Troubleshoot and debug software issues and provide solutions.
• Collaborate with cross-functional teams to deliver high-quality products, including product managers, designers, and developers.
• Write clean, maintainable, and efficient code.
• Participate in code reviews and provide constructive feedback to peers.
• Communicate effectively with team members and stakeholders to understand requirements and provide updates.
Required Skills:
• Strong problem-solving skills with the ability to analyze complex issues and provide solutions.
• Ability to quickly understand new problem statements and translate them into functional business logic.
• Proficiency in at least one programming language such as Java, Node.js, or C/C++.
• Strong understanding of software development life cycle (SDLC).
• Excellent communication skills, both verbal and written.
• Team player with the ability to collaborate effectively with different teams.
Preferred Qualifications:
• Experience with Java, Golang, or Rust is a plus.
• Familiarity with cloud platforms, microservices architecture, and API development.
• Prior experience working in an agile environment.
• Strong debugging and optimization skills.
Educational Qualifications:
• Bachelor's degree in Computer Science, Engineering, related field, or equivalent work experience.

Role Overview:
We are seeking a highly skilled and motivated Data Scientist to join our growing team. The ideal candidate will be responsible for developing and deploying machine learning models from scratch to production level, focusing on building robust data-driven products. You will work closely with software engineers, product managers, and other stakeholders to ensure our AI-driven solutions meet the needs of our users and align with the company's strategic goals.
Key Responsibilities:
- Develop, implement, and optimize machine learning models and algorithms to support product development.
- Work on the end-to-end lifecycle of data science projects, including data collection, preprocessing, model training, evaluation, and deployment (see the sketch after this list).
- Collaborate with cross-functional teams to define data requirements and product taxonomy.
- Design and build scalable data pipelines and systems to support real-time data processing and analysis.
- Ensure the accuracy and quality of data used for modeling and analytics.
- Monitor and evaluate the performance of deployed models, making necessary adjustments to maintain optimal results.
- Implement best practices for data governance, privacy, and security.
- Document processes, methodologies, and technical solutions to maintain transparency and reproducibility.
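For illustration, a minimal sketch of the train/evaluate slice of that lifecycle using scikit-learn on a synthetic dataset; real projects would add data validation, experiment tracking, and deployment steps:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real product data
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")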
Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, or a related field.
- 5+ years of experience in data science, machine learning, or a related field, with a track record of developing and deploying products from scratch to production.
- Strong programming skills in Python and experience with data analysis and machine learning libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch).
- Experience with cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker).
- Proficiency in building and optimizing data pipelines, ETL processes, and data storage solutions.
- Hands-on experience with data visualization tools and techniques.
- Strong understanding of statistics, data analysis, and machine learning concepts.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a fast-paced, dynamic environment.
Preferred Qualifications:
- Knowledge of microservices architecture and RESTful APIs.
- Familiarity with Agile development methodologies.
- Experience in building taxonomy for data products.
- Strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.


Company: Optimum Solutions
About the company: Optimum Solutions is a leader in the sheet metal industry, providing sheet metal solutions to sheet metal fabricators with a proven track record of reliable product delivery. From tools to software and machines, we are a one-stop shop for all your technology needs.
Role Overview:
- Creating and managing database schemas that represent and support business processes; hands-on experience with SQL queries and managing database server deployments.
- Implementing automated testing platforms, unit tests, and CI/CD pipelines
- Proficient understanding of code versioning tools, such as GitHub, Bitbucket, ADO
- Understanding of container platforms, such as Docker
Job Description
- We are looking for a strong Python Developer with knowledge of machine learning and deep learning frameworks.
- Your primary focus will be working with the Product and Use-case Delivery team on prompting for different Gen-AI use cases
- You will be responsible for prompting and building use-case pipelines
- Perform evaluation of all Gen-AI features and use-case pipelines
Position: AI ML Engineer
Location: Chennai (Preference) and Bangalore
Minimum Qualification: Bachelor's degree in computer science, Software Engineering, Data Science, or a related field.
Experience: 4-6 years
CTC: 16.5 - 17 LPA
Employment Type: Full Time
Key Responsibilities:
- Own the entire prompt life cycle: prompt design, prompt template creation, and prompt tuning/optimization for various Gen-AI base models
- Design and develop prompts suiting project needs
- Lead and manage team of prompt engineers
- Stakeholder management across business and domains as required for the projects
- Evaluating base models and benchmarking performance
- Implement prompt guardrails to prevent attacks like prompt injection, jailbreaking, and prompt leaking (see the sketch after this list)
- Develop, deploy and maintain auto prompt solutions
- Design and implement minimum design standards for every use case involving prompt engineering
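By way of illustration, a naive sketch of a prompt template plus a regex-based injection guardrail; the blocked phrases and template are assumptions, not a production-grade defense:

import re

PROMPT_TEMPLATE = (
    "You are a helpful assistant for {domain}.\n"
    "Answer only using the provided context.\n"
    "Context: {context}\n"
    "Question: {question}"
)

# Hypothetical patterns covering only the most obvious injection attempts
BLOCKLIST = [r"ignore (all|previous) instructions",
             r"reveal (the|your) system prompt"]

def is_suspicious(user_input: str) -> bool:
    """Flag obvious prompt-injection attempts with regex heuristics."""
    return any(re.search(p, user_input, re.IGNORECASE) for p in BLOCKLIST)

def build_prompt(domain: str, context: str, question: str) -> str:
    if is_suspicious(question):
        raise ValueError("Input rejected by prompt guardrail")
    return PROMPT_TEMPLATE.format(domain=domain, context=context, question=question)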
Skills and Qualifications
- Strong proficiency with Python, the Django framework, and regular expressions (regex)
- Good understanding of the machine learning frameworks PyTorch and TensorFlow
- Knowledge of Generative AI and RAG pipelines
- Good grasp of microservice design patterns and developing scalable applications
- Ability to build and consume REST APIs
- Fine-tune and optimize code for better performance
- Strong understanding of OOP and design thinking
- Understanding the nature of asynchronous programming and its quirks and workarounds
- Good understanding of server-side templating languages
- Understanding accessibility and security compliance, user authentication and authorization between multiple systems, servers, and environments
- Integration of APIs, multiple data sources and databases into one system
- Good knowledge of API gateways and proxies, such as WSO2, Kong, nginx, and Apache HTTP Server.
- Understanding fundamental design principles behind a scalable and distributed application
- Good working knowledge on Microservices architecture, behaviour, dependencies, scalability etc.
- Experience in deploying on Cloud platform like Azure or AWS
- Familiarity and working experience with DevOps tools like Azure DevOps, Ansible, Jenkins, Terraform



Job Title - Senior Backend Engineer
About Tazapay
Tazapay is a cross-border payment service provider. It offers local collections via local payment methods, virtual accounts, and cards in over 70 markets. Merchants do not need to create local entities anywhere; Tazapay provides the additional compliance framework to take care of local regulations and requirements. This results in decreased transaction costs, FX transparency, and higher authorization rates.
They are licensed and backed by leading investors. www.tazapay.com
What’s exciting waiting for You?
This is an amazing opportunity to join a fantastic crew before the rocket-ship launch. It will be a story you carry with you through your life: the unique experience of building something from the ground up and the satisfaction of seeing your product used and paid for by thousands of customers. You will be part of a growth story, be it in Sales, Software Development, Marketing, HR, Accounting or elsewhere.
We believe in a culture of openness, innovation & great memories together.
Are you ready for the ride?
Find what interesting things you could do with us.
About the Backend Engineer role
Responsibilities (not exhaustive)
- Design, write and deliver highly scalable, reliable and fault tolerant systems with minimal guidance
- Participate in code and design reviews to maintain our high development standards
- Partner with the product management team to define and execute the feature roadmap
- Translate business requirements into scalable and extensible design
- Proactively manage stakeholder communication related to deliverables, risks, changes and dependencies
- Coordinate with cross functional teams (Mobile, DevOps, Data, UX, QA etc.) on planning and execution
- Continuously improve code quality, product execution, and customer delight
- Willingness to learn new languages and methodologies
- An enormous sense of ownership
- Engage in service capacity and demand planning, software performance analysis, tuning and optimization
The Ideal Candidate
Education
- Degree in Computer Science or equivalent with 5+ years of experience in commercial software development in large distributed systems
Experience
- Hands-on experience in designing, developing, testing and deploying large-scale applications in Golang, Ruby, Python, .NET Core or Java
- Deep knowledge of Linux as a production environment
- Strong knowledge of data structures, algorithms, distributed systems, and asynchronous architectures
- Expert in at least 1 of the following languages: Golang, Python, Ruby, Java, C, C++
- Proficient in OOP, including design patterns.
- Ability to design and implement low latency RESTful services
- Hands-on coder who has built backend services that handle high volume traffic.
- Strong understanding of system performance and scaling
- Possess excellent communication, sharp analytical abilities with proven design skills, able to think critically of the current system in terms of growth and stability
- Data modeling experience in both Relational and NoSQL databases
- Continuously refactor applications to ensure high-quality design
- Ability to plan, prioritize, estimate and execute releases with good degree of predictability
- Ability to scope, review and refine user stories for technical completeness and to alleviate dependency risks
- Passion for learning new things, solving challenging problems
- Ability to get stuff done!
Nice to have
- Familiarity with Golang ecosystem
- Familiarity with running web services at scale; understanding of systems internals and networking are a plus
- Be familiar with HTTP/HTTPS communication protocols.
Abilities and Traits
- Ability to work under pressure and meet deadlines
- Ability to provide exceptional attention to details of the product.
- Ability to focus for extended periods of repetitious activity.
- Ability to think ahead and anticipate problems, issues and solutions
- Work well as a team player and help the team members to resolve issues
- Be committed to quality and be structured in approach
- Excellent and demonstrable concept formulation, logical and analytical skills
- Excellent planning, organizational, and prioritization skills
Location - Chennai - India
Join our team and let's groove together to the rhythm of innovation and opportunity!
Your Buddy
Tazapay
Technical Skills:
- Ability to understand and translate business requirements into design.
- Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
- Experience in creating ETL jobs using Python/PySpark.
- Proficiency in creating AWS Lambda functions for event-based jobs.
- Knowledge of automating ETL processes using AWS Step Functions.
- Competence in building data warehouses and loading data into them.
Responsibilities:
- Understand business requirements and translate them into design.
- Assess AWS infrastructure needs for development work.
- Develop ETL jobs using Python/PySpark to meet requirements (see the sketch after this list).
- Implement AWS Lambda for event-based tasks.
- Automate ETL processes using AWS Step Functions.
- Build data warehouses and manage data loading.
- Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
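A hedged sketch of that PySpark ETL pattern: read raw data from S3, transform it, and write curated output back; the bucket names and columns are placeholder assumptions:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = spark.read.json("s3://example-raw-bucket/orders/")  # assumed source
daily_totals = (
    orders.filter(F.col("status") == "COMPLETE")
          .groupBy(F.to_date("created_at").alias("order_date"))
          .agg(F.sum("amount").alias("total_amount"))
)
daily_totals.write.mode("overwrite").parquet(
    "s3://example-curated-bucket/daily_totals/")             # assumed target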

Role Summary
As a Data Engineer, you will be an integral part of our Data Engineering team supporting an event-driven, serverless data engineering pipeline on the AWS cloud, responsible for assisting in the end-to-end analysis, development, and maintenance of data pipelines and systems (DataOps). You will work closely with fellow data engineers and production support to ensure the availability and reliability of data for analytics and business intelligence purposes.
Requirements:
· Around 4 years of working experience in data warehousing / BI systems.
· Strong hands-on experience with Snowflake AND strong programming skills in Python (see the sketch after this list)
· Strong hands-on SQL skills
· Knowledge of cloud databases such as Snowflake, Redshift, Google BigQuery, RDS, etc.
· Knowledge of dbt for cloud databases
· AWS services such as SNS, SQS, ECS, Docker, Kinesis & Lambda functions
· Solid understanding of ETL processes and data warehousing concepts
· Familiarity with version control systems (e.g., Git/Bitbucket) and collaborative development practices in an agile framework
· Experience with Scrum methodologies
· Infrastructure build tools such as CFT / Terraform are a plus.
· Knowledge of Denodo, data cataloguing tools & data quality mechanisms is a plus.
· Strong team player with good communication skills.
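For reference, a minimal sketch of querying Snowflake from Python with the official connector; all connection parameters below are placeholders:

import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder
    user="example_user",         # placeholder
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")
    print(cur.fetchone())
finally:
    conn.close()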
Overview Optisol Business Solutions
OptiSol was named on this year's Best Companies to Work For list by Great Place to Work. We are a team of about 500+ Agile employees with a development center in India and global offices in the US, UK (United Kingdom), Australia, Ireland, Sweden, and Dubai. Over our 16+ year journey we have built about 500+ digital solutions, and we have 200+ happy and satisfied clients across 24 countries.
Benefits, working with Optisol
· Great Learning & Development program
· Flextime, Work-at-Home & Hybrid Options
· A knowledgeable, high-achieving, experienced & fun team.
· Spot Awards & Recognition.
· The chance to be a part of next success story.
· A competitive base salary.
More than just a job, we offer an opportunity to grow. Are you the one who looks to build your future and build your dream? We have the job for you, to make your dream come true.

5-7 years of experience in Data Engineering with solid experience in design, development and implementation of end-to-end data ingestion and data processing system in AWS platform.
2-3 years of experience in AWS Glue, Lambda, Appflow, EventBridge, Python, PySpark, Lake House, S3, Redshift, Postgres, API Gateway, CloudFormation, Kinesis, Athena, KMS, IAM.
Experience in modern data architecture, Lake House, Enterprise Data Lake, Data Warehouse, API interfaces, solution patterns, standards and optimizing data ingestion.
Experience in build of data pipelines from source systems like SAP Concur, Veeva Vault, Azure Cost, various social media platforms or similar source systems.
Expertise in analyzing source data and designing a robust and scalable data ingestion framework and pipelines adhering to client Enterprise Data Architecture guidelines.
Proficient in design and development of solutions for real-time (or near real time) stream data processing as well as batch processing on the AWS platform.
Work closely with business analysts, data architects, data engineers, and data analysts to ensure that the data ingestion solutions meet the needs of the business.
Troubleshoot and provide support for issues related to data quality and data ingestion solutions. This may involve debugging data pipeline processes, optimizing queries, or troubleshooting application performance issues.
Experience working in Agile/Scrum methodologies, CI/CD tools and practices, coding standards, code reviews, source management (GitHub), JIRA, JIRA Xray and Confluence.
Experience or exposure to design and development using Full Stack tools.
Strong analytical and problem-solving skills, excellent communication (written and oral), and interpersonal skills.
Bachelor's or master's degree in computer science or related field.


Position Overview: We are seeking a talented Data Engineer with expertise in Power BI to join our team. The ideal candidate will be responsible for designing and implementing data pipelines, as well as developing insightful visualizations and reports using Power BI. Additionally, the candidate should have strong skills in Python, data analytics, PySpark, and Databricks. This role requires a blend of technical expertise, analytical thinking, and effective communication skills.
Key Responsibilities:
- Design, develop, and maintain data pipelines and architectures using PySpark and Databricks (see the sketch after this list).
- Implement ETL processes to extract, transform, and load data from various sources into data warehouses or data lakes.
- Collaborate with data analysts and business stakeholders to understand data requirements and translate them into actionable insights.
- Develop interactive dashboards, reports, and visualizations using Power BI to communicate key metrics and trends.
- Optimize and tune data pipelines for performance, scalability, and reliability.
- Monitor and troubleshoot data infrastructure to ensure data quality, integrity, and availability.
- Implement security measures and best practices to protect sensitive data.
- Stay updated with emerging technologies and best practices in data engineering and data visualization.
- Document processes, workflows, and configurations to maintain a comprehensive knowledge base.
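A hedged sketch of a Databricks-style PySpark step that feeds a BI layer such as Power BI; the table names and columns are assumptions:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales-mart").getOrCreate()

sales = spark.read.table("raw.sales")  # assumed catalog table
mart = (sales.withColumn("month", F.date_trunc("month", "sold_at"))
             .groupBy("month", "region")
             .agg(F.sum("revenue").alias("revenue")))
mart.write.mode("overwrite").saveAsTable("mart.monthly_sales")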
Requirements:
- Bachelor’s degree in Computer Science, Engineering, or related field. (Master’s degree preferred)
- Proven experience as a Data Engineer with expertise in Power BI, Python, PySpark, and Databricks.
- Strong proficiency in Power BI, including data modeling, DAX calculations, and creating interactive reports and dashboards.
- Solid understanding of data analytics concepts and techniques.
- Experience working with Big Data technologies such as Hadoop, Spark, or Kafka.
- Proficiency in programming languages such as Python and SQL.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud.
- Excellent analytical and problem-solving skills with attention to detail.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
- Ability to work independently and manage multiple tasks simultaneously in a fast-paced environment.
Preferred Qualifications:
- Advanced degree in Computer Science, Engineering, or related field.
- Certifications in Power BI or related technologies.
- Experience with data visualization tools other than Power BI (e.g., Tableau, QlikView).
- Knowledge of machine learning concepts and frameworks.

Role: Python-Django Developer
Location: Noida, India
Description:
- Develop web applications using Python and Django (see the sketch after this list).
- Write clean and maintainable code following best practices and coding standards.
- Collaborate with other developers and stakeholders to design and implement new features.
- Participate in code reviews and maintain code quality.
- Troubleshoot and debug issues as they arise.
- Optimize applications for maximum speed and scalability.
- Stay up-to-date with emerging trends and technologies in web development.
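For context, a minimal sketch of a Django model and a typical ORM query of the kind this role involves; the Article model and its fields are illustrative assumptions:

from django.db import models

class Article(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title

# Typical ORM usage inside a view (assuming the model above):
# recent = Article.objects.order_by("-published")[:10]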
Requirements:
- Bachelor's or Master's degree in Computer Science, Computer Engineering or a related field.
- 4+ years of experience in web development using Python and Django.
- Strong knowledge of object-oriented programming and design patterns.
- Experience with front-end technologies such as HTML, CSS, and JavaScript.
- Understanding of RESTful web services.
- Familiarity with database technologies such as PostgreSQL or MySQL.
- Experience with version control systems such as Git.
- Ability to work in a team environment and communicate effectively with team members.
- Strong problem-solving and analytical skills.

Backend - Software Development Engineer III
Experience - 7+ yrs
About Wekan Enterprise Solutions
Wekan Enterprise Solutions is a leading Technology Consulting company and a strategic investment partner of MongoDB. We help companies drive innovation in the cloud by adopting modern technology solutions that help them achieve their performance and availability requirements. With strong capabilities around Mobile, IoT and Cloud environments, we have an extensive track record helping Fortune 500 companies modernize their most critical legacy and on-premise applications, migrating them to the cloud and leveraging the most cutting-edge technologies.
Job Description
We are looking for passionate software engineers eager to be a part of our growth journey. The right candidate needs to be interested in working in high-paced and challenging environments leading technical teams, designing system architecture and reviewing peer code. Interested in constantly upskilling, learning new technologies and expanding their domain knowledge to new industries. This candidate needs to be a team player and should be looking to help build a culture of excellence. Do you have what it takes?
You will be working on complex data migrations, modernizing legacy applications and building new applications on the cloud for large enterprise and/or growth-stage startups. You will have the opportunity to contribute directly to mission-critical projects, interacting with business stakeholders, customers' technical teams and MongoDB Solutions Architects.
Location - Chennai or Bangalore
- Relevant experience of 7+ years building high-performance back-end applications, with at least 3 projects delivered using the required technologies
- Good problem solving skills
- Strong mentoring capabilities
- Good understanding of software development life cycle
- Strong experience in system design and architecture
- Strong focus on quality of work delivered
- Excellent verbal and written communication skills
Required Technical Skills
- Extensive hands-on experience building high-performance web back-ends using Node.js and JavaScript/TypeScript
- Strong experience with the Express.js framework
- Working experience with Python web app development or Python scripting
- Implementation experience in monolithic and microservices architecture
- Hands-on experience with data modeling on MongoDB and any other Relational or NoSQL databases
- Experience integrating with any 3rd party services such as cloud SDKs (AWS, Azure) authentication, etc.
- Hands-on experience with Kafka, RabbitMQ or any similar technologies.
- Exposure to unit testing with frameworks such as Mocha, Chai, Jest or others
- Strong experience writing and maintaining clear documentation
Good to have skills:
- Experience working with common services in any of the major cloud providers - AWS or GCP or Azure
- Technical certifications in AWS / Azure / GCP / MongoDB or other relevant technologies



Key Tasks & Accountability:
- Collaborate with development teams and product managers to create innovative software solutions.
- Able to develop entire architecture, responsive design, user interaction, and user experience.
- The ability to use databases, proxies, APIs, version control systems, and third-party applications.
- Keep track of new development-related tools, frameworks, methods, and architectures.
- The developer is in charge of creating APIs depending on the architecture of the production application.
- Keeping up with the latest advancements in programming languages and server apps.
Skills:
- Comfortable around both front-end and back-end coding languages, development frameworks and third-party libraries.
- Knowledge of React, Redux and API Integration.
- Experience with backend technology like NodeJs, Microservices, MVC Framework and Data Base connection.
- Knowledge of SQL/NoSQL databases such as MySQL and MongoDB.
- Knowledge of cloud such as AWS.
- Team player with a knack for visual design and utility.

Role : Web Scraping Engineer
Experience : 2 to 3 Years
Job Location : Chennai
About OJ Commerce:
OJ Commerce (OJC), a rapidly expanding and profitable online retailer, is headquartered in Florida, USA, with a fully-functional office in Chennai, India. We deliver exceptional value to our customers by harnessing cutting-edge technology, fostering innovation, and establishing strategic brand partnerships to enable a seamless, enjoyable shopping experience featuring high-quality products at unbeatable prices. Our advanced, data-driven system streamlines operations with minimal human intervention.
Our extensive product portfolio encompasses over a million SKUs and more than 2,500 brands across eight primary categories. With a robust presence on major platforms such as Amazon, Walmart, Wayfair, Home Depot, and eBay, we directly serve consumers in the United States.
As we continue to forge new partner relationships, our flagship website, www.ojcommerce.com, has rapidly emerged as a top-performing e-commerce channel, catering to millions of customers annually.
Job Summary:
We are seeking a Web Scraping Engineer and Data Extraction Specialist who will play a crucial role in our data acquisition and management processes. The ideal candidate will be proficient in developing and maintaining efficient web crawlers capable of extracting data from large websites and storing it in a database. Strong expertise in Python, web crawling, and data extraction, along with familiarity with popular crawling tools and modules, is essential. Additionally, the candidate should demonstrate the ability to effectively utilize API tools for testing and retrieving data from various sources. Join our team and contribute to our data-driven success!
Responsibilities:
- Develop and maintain web crawlers in Python (see the sketch after this list).
- Crawl large websites and extract data.
- Store data in a database.
- Analyze and report on data.
- Work with other engineers to develop and improve our web crawling infrastructure.
- Stay up to date on the latest crawling tools and techniques.
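A minimal sketch of that crawl-extract-store loop using requests, BeautifulSoup, and SQLite; the target URL and CSS selectors are hypothetical:

import sqlite3

import requests
from bs4 import BeautifulSoup

conn = sqlite3.connect("products.db")
conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, price TEXT)")

resp = requests.get("https://example.com/catalog", timeout=30)  # placeholder URL
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

for card in soup.select(".product-card"):  # hypothetical selector
    name = card.select_one(".title").get_text(strip=True)
    price = card.select_one(".price").get_text(strip=True)
    conn.execute("INSERT INTO products VALUES (?, ?)", (name, price))

conn.commit()
conn.close()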
Required Skills and Qualifications:
- Bachelor's degree in computer science or a related field.
- 2-3 years of experience with Python and web crawling.
- Familiarity with tools/modules such as Scrapy, Selenium, Requests, Beautiful Soup, etc.
- API tools such as Postman or equivalent.
- Working knowledge of SQL.
- Experience with web crawling and data extraction.
- Strong problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Excellent communication and documentation skills.
What we Offer
• Competitive salary
• Medical Benefits/Accident Cover
• Flexi Office Working Hours
• Fast paced start up


Function: Software Engineering → Backend Development
- Python
- Flask
Requirements:
- Should be a go-getter, ready to shoulder more responsibilities, and show enthusiasm and interest in the work.
- Excellent core Python skills, including threading, dictionaries, OOP concepts, data structures, and web services.
- Should have work experience with the following stacks/libraries: Flask (see the sketch after this list)
- Familiarity with some ORM (Object Relational Mapper) libraries
- Able to integrate multiple data sources and databases into one system
- Understanding of the threading limitations of Python and multi-process architecture
- Familiarity with event-driven programming in Python
- Basic understanding of front-end technologies, such as Angular, JavaScript, HTML5 and CSS3
- Writing reusable, testable, and efficient code
- Design and implementation of low-latency, high-availability, and performant applications
- Understanding of accessibility and security compliance
- Experience with RDBMS (MySQL), NoSQL databases (MongoDB, HDFS, Hive, etc.), or in-memory caching technologies such as Ehcache is preferable.
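As an aside, a hedged sketch pairing Flask with an ORM (Flask-SQLAlchemy); the User model and its fields are illustrative assumptions:

from flask import Flask, jsonify
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.db"
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)

@app.route("/users")
def list_users():
    return jsonify([{"id": u.id, "name": u.name} for u in User.query.all()])

with app.app_context():
    db.create_all()  # create tables for the sketch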



- A Natural Language Processing (NLP) expert with strong computer science fundamentals and experience in working with deep learning frameworks. You will be working at the cutting edge of NLP and Machine Learning.
Roles and Responsibilities
- Work as part of a distributed team to research, build and deploy Machine Learning models for NLP.
- Mentor and coach other team members
- Evaluate the performance of NLP models and ideate on how they can be improved
- Support internal and external NLP-facing APIs
- Keep up to date on current research around NLP, Machine Learning and Deep Learning
Mandatory Requirements
- Any graduation with at least 2 years of demonstrated experience as a Data Scientist.
Behavioral Skills
- Strong analytical and problem-solving capabilities.
- Proven ability to multi-task and deliver results within tight time frames
- Must have strong verbal and written communication skills
- Strong listening skills and eagerness to learn
- Strong attention to detail and the ability to work efficiently in a team as well as individually
Technical Skills
Hands-on experience with
- NLP
- Deep Learning
- Machine Learning
- Python
- BERT (see the sketch after this list)
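For illustration, a minimal sketch of running a BERT-family model through the Hugging Face transformers pipeline API; the checkpoint is a public model chosen here as an assumption:

from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("The model deployment went smoothly."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]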
Preferred Requirements
- Experience in Computer Vision is preferred


Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune /Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have knowledge of creating forms using Django; Crispy Forms experience is a plus.
- Must have leadership experience
- Should have a good understanding of function-based and class-based views (see the sketch after this list).
- Should have a good understanding of authentication (JWT and token authentication)
- Django – at least one senior with deep Django experience. The other 1 or 2 can be mid-to-senior Python or Django
- Front end – Must have React/Angular and CSS experience
- Database – Ideally SQL, but the most senior should have solid DB experience
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
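For context, a hedged sketch contrasting a function-based and a class-based Django view; the handlers and data below are illustrative assumptions:

from django.http import JsonResponse
from django.views import View

def health_check(request):
    """Function-based view: a simple callable."""
    return JsonResponse({"status": "ok"})

class ItemListView(View):
    """Class-based view: HTTP verbs map to methods."""
    def get(self, request):
        items = [{"id": 1, "name": "sample"}]  # stand-in for an ORM query
        return JsonResponse({"items": items})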
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server-side rendered HTML with jQuery is older tech, but it is what we are OK with for internal tools. This is a good combination of a late-adopter agile stack integrated within an enterprise. Potentially we can push toward React for some discrete projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven deployment/setup on AWS and GCP.
- Load balancing, more advanced services, task queues, etc.

AWS Glue Developer
Work Experience: 6 to 8 Years
Work Location: Noida, Bangalore, Chennai & Hyderabad
Must Have Skills: AWS Glue, DMS, SQL, Python, PySpark, Data integrations and Data Ops,
Job Reference ID:BT/F21/IND
Job Description:
Design, build and configure applications to meet business process and application requirements.
Responsibilities:
- 7 years of work experience with ETL, data modelling, and data architecture
- Proficient in ETL optimization, designing, coding, and tuning big data processes using PySpark
- Extensive experience building data platforms on AWS using core AWS services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions; orchestration using Airflow.
Technical Experience:
- Hands-on experience developing a data platform and its components: data lake, cloud data warehouse, APIs, batch and streaming data pipelines
- Experience building data pipelines and applications to stream and process large datasets at low latency.
➢ Enhancements, new development, defect resolution and production support of Big data ETL development using AWS native services.
➢ Create data pipeline architecture by designing and implementing data ingestion solutions.
➢ Integrate data sets using AWS services such as Glue, Lambda functions/ Airflow.
➢ Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena.
➢ Author ETL processes using Python and PySpark (see the sketch after this list).
➢ Build Redshift Spectrum direct transformations and data modelling using data in S3.
➢ ETL process monitoring using CloudWatch events.
➢ You will be working in collaboration with other teams. Good communication is a must.
➢ Must have experience using AWS service APIs, the AWS CLI, and SDKs
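As a point of reference, a hedged skeleton of an AWS Glue PySpark job of the kind described above; the catalog database, table, and S3 path are placeholder assumptions:

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, drop rows with null keys, write to S3
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders")
cleaned = source.toDF().dropna(subset=["order_id"])
cleaned.write.mode("overwrite").parquet("s3://example-curated/orders/")

job.commit()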
Professional Attributes:
➢ Experience operating very large data warehouses or data lakes
➢ Expert-level skills in writing and optimizing SQL
➢ Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technology.
➢ Must have 6+ years of big data ETL experience using Python, S3, Lambda, Dynamo DB, Athena, Glue in AWS environment.
➢ Expertise in S3, RDS, Redshift, Kinesis, EC2 clusters highly desired.
Qualification:
➢ Degree in Computer Science, Computer Engineering or equivalent.
Salary: Commensurate with experience and demonstrated competence
Greetings!!!!
We are looking for a data engineer for one of our premium clients for their Chennai and Tirunelveli locations.
Required Education/Experience
● Bachelor’s degree in computer Science or related field
● 5-7 years’ experience in the following:
● Snowflake, Databricks management,
● Python and AWS Lambda
● Scala and/or Java
● Data integration service, SQL and Extract Transform Load (ELT)
● Azure or AWS for development and deployment
● Jira or similar tool during SDLC
● Experience managing codebase using Code repository in Git/GitHub or Bitbucket
● Experience working with a data warehouse.
● Familiarity with structured and semi-structured data formats including JSON, Avro, ORC, Parquet, or XML
● Exposure to working in an agile work environment


About the job
Whirldata Inc. is an AI/Data Sciences/App Dev company established in 2017 to provide management and technology consulting services to small and medium enterprises across the globe. We are headquartered in California and our team works out of Chennai. The specific focus is on
- Helping clients identify areas where Data Sciences and AI-based approaches can increase revenues, decrease costs or enhance customer engagement experience
- Help clients translate business needs into process flows for exploratory and cutting-edge applications
- Provide appropriate and targeted solutions that can achieve the planned objective
Whirldata's management team consists of individuals with a combined 45+ years of management and technology consulting experience, spanning multiple verticals such as finance, energy, retail, manufacturing and supply chain/logistics. Please look up our website and go through the blogs/videos/articles to find out more about us.
Working Philosophy
Whirldata works on the principle that larger business goals come first and any technology-intensive solution needs to support those goals. Hence all engagements start as a management consulting exercise, and solution building follows a thorough understanding of business needs. At Whirldata, we put our minds together and mix technology, art & math to deliver viable, scalable and affordable business solutions. We are passionate about what we do because we know that our work has the power to improve businesses. You can join our team working at the forefront of new technology, solving the challenges that impact both the front-end and back-end architectures, and ultimately delivering amazing global user experiences.
Who we are looking for:
Full Stack Engineer (Position based in Chennai)
The following criteria are mandatory requirements and we strongly encourage that you apply only if you meet all these criteria:
1. Minimum 3 years of work experience
2. Proven capability to understand clients' business needs
3. At least one demonstrable stint of architecting a database from scratch
4. Multiple programming language capabilities a must. In-depth knowledge of at least one programming language
5. Willingness to wear multiple hats depending on our business needs will be a priority
6. At least 2 years of hands-on experience in front-end development
7. Knowledge of current tools, platforms and languages an absolute must
8. Hands-on experience in cloud technologies and project management capabilities will be considered favourably
What do you get in return
- AI, Data Sciences and App dev require individuals who are both business and tech-savvy. We will turn you into one and make you a rockstar!
- You will get to work in an environment where your effort helps the customer and you will get to see it
- We will provide on-the-job training that will make you a doer and not a talker on data sciences, AI and app dev
- Opportunities to shine and pave your own way forward. We are good at identifying your talents and providing a platform where you will get immense satisfaction from demonstrating the same.
- Of course - we will pay you well too!
If you are interested - please apply with a small note with your thoughts on why you find this opportunity exciting and why you are willing to consider a move.


Are you interested in joining the team behind Amazon’s newest innovation? Come help us work on world class software for our customers!
The Amazon Kindle Reader and Shopping Support Engineering team provides production engineering support and is also responsible for providing multifaceted services to the Kindle digital product family of development teams, and for working with production operations teams on software product release coordination and deployment. This job requires you to hit the ground running; your ability to learn quickly and to work on disparate and overlapping tasks will define your success.
Job responsibilities
- Provide support of incoming tickets, including extensive troubleshooting tasks, with responsibilities covering multiple products, features and services
- Work on operations and maintenance driven coding projects, primarily in Java and C++
- Software deployment support in staging and production environments
- Develop tools to aid operations and maintenance
- System and Support status reporting
- Ownership of one or more Digital products or components
- Customer notification and workflow coordination and follow-up to maintain service level agreements
- Work with support team for handing-off or taking over active support issues and creating a team specific knowledge base and skill set
BASIC QUALIFICATIONS
- 4+ years of software development, or 4+ years of technical support experience
- Experience troubleshooting and debugging technical systems
- Experience in Unix
- Experience scripting in modern programming languages
- Experience in agile/scrum or related collaborative workflow
- Experience troubleshooting and documenting findings
PREFERRED QUALIFICATIONS
- Knowledge of distributed applications/enterprise applications
- Knowledge of UNIX/Linux operating system
- Experience analyzing and troubleshooting RESTful web API calls
- Exposure to iOS (SWIFT) and Android (Native) App support & development


A software developer's job description may vary depending on the organization and specific project, but generally, it includes:
- Bachelor's degree in computer science, software engineering, or a related field (sometimes, relevant experience can substitute for formal education).
- Proficiency in one or more programming languages and related technologies.
- Strong problem-solving skills and attention to detail.
- Knowledge of software development methodologies (e.g., Agile, Scrum).
- Familiarity with software development tools, IDEs, and frameworks.
- Excellent communication skills for effective collaboration with team members and stakeholders.
- Ability to work independently and in a team.
- Continuous learning to stay updated with the latest technology trends.


Key Responsibilities:
1. Design and Development: Lead the design and development of web applications using Python and Flask, ensuring code quality, scalability, and performance.
2. Architecture: Collaborate with the architecture team to design and implement robust, maintainable, and scalable software solutions.
3. API Development: Develop RESTful APIs using Flask to support front-end applications and external integrations.
4. Database Management: Design and optimize database schemas, write efficient SQL queries, and work with databases like PostgreSQL, MySQL, or NoSQL solutions.
5. Testing and Debugging: Write unit tests and perform code reviews to maintain code quality. Debug and resolve complex issues as they arise.
6. Security: Implement security best practices, including data encryption, authentication, and authorization mechanisms.
7. Performance Optimization: Identify and address performance bottlenecks in the application, making improvements for speed and efficiency.
8. Documentation: Maintain clear and comprehensive documentation for code, APIs, and development processes.
9. Collaboration: Collaborate with cross-functional teams, including front-end developers, product managers, and QA engineers, to deliver high-quality software.
10. Mentorship: Provide guidance and mentorship to junior developers, sharing your knowledge and best practices.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience (4-5 years) as a Python developer, with a strong emphasis on Flask.
- Solid understanding of web development principles, RESTful APIs, and best practices.
- Proficiency in database design and SQL, as well as experience with database systems like PostgreSQL or MySQL.
- Familiarity with front-end technologies (HTML, CSS, JavaScript) and related frameworks is a plus.
- Strong problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Excellent communication skills and the ability to work effectively in a team.
- Knowledge of containerization and deployment tools (e.g., Docker, Kubernetes) is a plus.
Analytics Job Description
We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will partner closely with leaders across the organization, working together to understand the how and why of people, team and company challenges, workflows and culture. The team is responsible for delivering data and insights that drive decision-making, execution, and investments for our product initiatives.
You will work cross-functionally with product, marketing, sales, engineering, finance, and our customer-facing teams, enabling them with data and narratives about the customer journey. You'll also work closely with other data teams, such as data engineering and product analytics, to ensure we are creating a strong data culture at Blend that enables our cross-functional partners to be more data-informed.
Role: Data Engineer
Please find below the JD for the Data Engineer role.
Location: Guindy, Chennai
How you’ll contribute:
• Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals
• Develop deep analytical insights to inform and influence product roadmaps and business decisions and help improve the consumer experience
• Work closely with GTM and supporting operations teams to author and develop core data sets that empower analyses
• Deeply understand the business and proactively spot risks and opportunities
• Develop dashboards and define metrics that drive key business decisions
• Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch, and Workato
• Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies that fit
• Work with our engineering teams to continually make our data pipelines and tooling more resilient
Who you are:
• Bachelor's degree or equivalent required from an accredited institution with a quantitative focus such as Economics, Operations Research, Statistics, or Computer Science, OR 1-3 years of experience as a Data Analyst, Data Engineer, or Data Scientist
• Must have strong SQL and data modeling skills, with experience applying those skills to thoughtfully create data models in a warehouse environment
• A proven track record of using analysis to drive key decisions and influence change
• Strong storyteller with the ability to communicate effectively with managers and executives
• Demonstrated ability to define metrics for product areas, understand the right questions to ask, push back on stakeholders in the face of ambiguous, complex problems, and work with diverse teams with different goals
• A passion for documentation
• A solution-oriented growth mindset. You'll need to be a self-starter and thrive in a dynamic environment
• A bias towards communication and collaboration with business and technical stakeholders
• Quantitative rigor and systems thinking
• Prior startup experience is preferred, but not required
• Interest or experience in machine learning techniques (such as clustering, decision trees, and segmentation)
• Familiarity with a scientific computing language, such as Python, for data wrangling and statistical analysis
• Experience with a SQL-focused data transformation framework such as dbt
• Experience with a Business Intelligence tool such as Mode/Tableau
Mandatory Skillset:
- Very strong in SQL
- Spark OR PySpark OR Python
- Shell scripting
Design, implement, and improve the analytics platform
Implement and simplify self-service data query and analysis capabilities of the BI platform
Develop and improve the current BI architecture, emphasizing data security, data quality and timeliness, scalability, and extensibility
Deploy and use various big data technologies and run pilots to design low-latency data architectures at scale
Collaborate with business analysts, data scientists, product managers, software development engineers, and other BI teams to develop, implement, and validate KPIs, statistical analyses, data profiling, prediction, forecasting, clustering, and machine learning algorithms
Educational
At Ganit we are building an elite team; ergo, we are seeking candidates who possess the following backgrounds:
- 7+ years relevant experience
- Expert-level skills writing and optimizing complex SQL
- Knowledge of data warehousing concepts
- Experience in data mining, profiling, and analysis
- Experience with complex data modelling, ETL design, and using large databases in a business environment
- Proficiency with the Linux command line and systems administration
- Experience with languages like Python/Java/Scala
- Experience with Big Data technologies such as Hive/Spark
- Proven ability to develop unconventional solutions; sees opportunities to innovate and leads the way
- Good experience working in cloud platforms like AWS, GCP & Azure, having worked on projects involving creation of a data lake or data warehouse
- Excellent verbal and written communication
- Proven interpersonal skills and the ability to convey key insights from complex analyses in summarized business terms; ability to effectively communicate with multiple teams
Good to have:
- AWS/GCP/Azure Data Engineer Certification

Title: Platform Engineer
Location: Chennai
Work Mode: Hybrid (Remote and Chennai Office)
Experience: 4+ years
Budget: 16 - 18 LPA
Responsibilities:
- Parse data using Python and create dashboards in Tableau.
- Use Jenkins for Airflow pipeline creation and CI/CD maintenance.
- Migrate DataStage jobs to Snowflake and optimize performance.
- Work with HDFS, Hive, Kafka, and basic Spark.
- Develop Python scripts for data parsing, quality checks, and visualization.
- Conduct unit testing and web application testing.
- Implement Apache Airflow and handle production migration (a minimal DAG sketch follows this list).
- Apply data warehousing techniques for data cleansing and dimensional modeling.
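For context on the Airflow work above, here is a minimal DAG sketch, assuming Airflow 2.4+; the dag_id, task names, and callables are hypothetical stand-ins for real extract/load steps.

# Minimal Airflow DAG; dag_id, task names, and callables are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    print("extracting orders...")  # stand-in for a real extraction step

def load_to_snowflake():
    print("loading to Snowflake...")  # stand-in for a real Snowflake load

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # use schedule_interval on Airflow versions before 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)
    extract >> load  # run the load only after extraction succeeds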
Requirements:
- 4+ years of experience as a Platform Engineer.
- Strong Python skills, knowledge of Tableau.
- Experience with Jenkins, Snowflake, HDFS, Hive, and Kafka.
- Proficient in Unix Shell Scripting and SQL.
- Familiarity with ETL tools like DataStage and DMExpress.
- Understanding of Apache Airflow.
- Strong problem-solving and communication skills.
Note: Only candidates willing to work in Chennai and available to join immediately will be considered.
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies across education, media, e-commerce, retail, information technology, manufacturing, non-profit, government, and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities:
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and build/implement systems, services, and tooling that improve the uptime, reliability, and maintainability of our backend infrastructure and help us meet our internal SLOs and customer-facing SLAs.
- Manage and patch servers running Unix-based operating systems like Ubuntu Linux.
- Write automation scripts and build infrastructure tools using Python/Ruby/Bash/Golang.
- Implement consistent observability, deployment and IaC setups
- Patch production systems to fix security/performance issues
- Actively respond to escalations/incidents in the production environment from customers or the support team
- Mentor other Infrastructure engineers, review their work and continuously ship improvements to production infrastructure.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Participate in infrastructure security audits
Requirements:
- At least 5 years of experience in handling/building Production environments in AWS.
- At least 2 years of programming experience in building API/backend services for customer-facing applications in production.
- Demonstrable knowledge of TCP/IP, HTTP and DNS fundamentals.
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts using any scripting language such as Python, Ruby, or Bash.
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK (a minimal CDK sketch follows this list)
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points if you have experience with Nginx, Postgres, Redis, and Mongo in production.
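To ground the IaC requirement above, here is a minimal AWS CDK sketch in Python, assuming CDK v2 (aws-cdk-lib) is installed; the stack and construct names are hypothetical.

# Minimal AWS CDK v2 sketch; stack and construct names are hypothetical.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class AppNetworkStack(Stack):
    """Defines a small VPC as code, so network changes are reviewed like any diff."""
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # A VPC spanning two availability zones with default public/private subnets
        ec2.Vpc(self, "AppVpc", max_azs=2)

app = App()
AppNetworkStack(app, "AppNetworkStack")
app.synth()  # emits a CloudFormation template under cdk.out/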