11+ Data processing Jobs in India

About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by a proprietary cloud-based technology platform and proprietary business and consumer databases.
Data Axle India is recognized as a Great Place to Work! This prestigious designation is a testament to our collective efforts in fostering an exceptional workplace culture and creating an environment where every team member can thrive.
General Summary:
As a Digital Data Management Architect, you will design, implement, and optimize advanced data management systems that support processing billions of digital transactions, ensuring high availability and accuracy. You will leverage your expertise in developing identity graphs, real-time data processing, and API integration to drive insights and enhance user experiences across digital platforms. Your role is crucial in building scalable and secure data architectures that support real-time analytics, identity resolution, and seamless data flows across multiple systems and applications.
Roles and Responsibilities:
- Data Architecture & System Design:
- Design and implement scalable data architectures capable of processing billions of digital transactions in real-time, ensuring low latency and high availability.
- Architect data models, workflows, and storage solutions to enable seamless real-time data processing, including stream processing and event-driven architectures.
- Identity Graph Development:
- Lead the development and maintenance of a comprehensive identity graph to unify disparate data sources, enabling accurate identity resolution across channels.
- Develop algorithms and data matching techniques to enhance identity linking, while maintaining data accuracy and privacy.
- Real-Time Data Processing & Analytics:
- Implement real-time data ingestion, processing, and analytics pipelines to support immediate data availability and actionable insights.
- Work closely with engineering teams to integrate and optimize real-time data processing frameworks such as Apache Kafka, Apache Flink, or Spark Streaming (a minimal ingestion sketch follows this list).
- API Development & Integration:
- Design and develop real-time APIs that facilitate data access and integration across internal and external platforms, focusing on security, scalability, and performance.
- Collaborate with product and engineering teams to define API specifications, data contracts, and SLAs to meet business and user requirements.
- Data Governance & Security:
- Establish data governance practices to maintain data quality, privacy, and compliance with regulatory standards across all digital transactions and identity graph data.
- Ensure security protocols and access controls are embedded in all data workflows and API integrations to protect sensitive information.
- Collaboration & Stakeholder Engagement:
- Partner with data engineering, analytics, and product teams to align data architecture with business requirements and strategic goals.
- Provide technical guidance and mentorship to junior architects and data engineers, promoting best practices and continuous learning.
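As a rough illustration of the real-time ingestion work described above, here is a minimal sketch of a streaming consumer using the confluent-kafka Python client; the broker address, topic name, consumer group, and event fields are hypothetical placeholders rather than details of the role.

```python
# Minimal streaming-ingestion sketch (hypothetical broker, topic, and fields).
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "digital-transactions",      # hypothetical consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["transactions"])          # hypothetical topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Identity resolution, enrichment, and storage would hang off this point.
        print(event.get("user_id"), event.get("event_type"))
finally:
    consumer.close()
```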
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- 10+ years of experience in data architecture, digital data management, or a related field, with a proven track record in managing billion+ transactions.
- Deep experience with identity resolution techniques and building identity graphs.
- Strong proficiency in real-time data processing technologies (e.g., Kafka, Flink, Spark) and API development (RESTful and/or GraphQL).
- In-depth knowledge of database systems (SQL, NoSQL), data warehousing solutions, and cloud-based platforms (AWS, Azure, or GCP).
- Familiarity with data privacy regulations (e.g., GDPR, CCPA) and data governance best practices.
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.

Description
Come Join Us
Experience.com - We make every experience matter more
Position: Senior GCP Data Engineer
Job Location: Chennai (Base Location) / Remote
Employment Type: Full Time
Summary of Position
A Senior Data Engineer is a professional who specializes in preparing big data infrastructure for analytical or operational use. They develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity, and they collaborate with data scientists and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organisation.
Responsibilities:
- Collaborate with cross-functional teams to define, prioritize, and execute data engineering initiatives aligned with business objectives.
- Design and implement scalable, reliable, and secure data solutions in line with industry best practices and compliance requirements.
- Drive the adoption of cloud-native technologies and architectural patterns to optimize the performance, cost, and reliability of data pipelines and analytics solutions.
- Mentor and lead a team of Data Engineers.
- Demonstrate a drive to learn and master new technologies and techniques.
- Apply strong problem-solving skills with an emphasis on building data-driven or AI-enhanced products.
- Coordinate with ML/AI and engineering teams to understand data requirements.
Experience & Skills:
- 8+ years of strong experience in ETL and ELT of data from various sources into data warehouses
- 8+ years of experience in Python, Pandas, NumPy, and SciPy.
- 5+ years of experience in GCP
- 5+ years of experience in BigQuery, PySpark, and Pub/Sub
- 5+ years of experience working with and creating data architectures.
- Certified in Google Cloud Professional Data Engineer.
- Advanced proficiency in Google Cloud services such as Dataflow, Dataproc, Dataprep, Data Studio, and Cloud Composer.
- Proficient in writing complex Spark (PySpark) user-defined functions (UDFs), Spark SQL, and HiveQL (see the UDF sketch after this list).
- Good understanding of Elasticsearch.
- Experience in assessing and ensuring data quality, data testing, and addressing data quality issues.
- Excellent understanding of Spark architecture and underlying frameworks including storage management.
- Solid background in database design and development, database administration, and software engineering across full life cycles.
- Experience with NoSQL data stores like MongoDB, DocumentDB, and DynamoDB.
- Knowledge of data governance principles and practices, including data lineage, metadata management, and access control mechanisms.
- Experience in implementing and optimizing data security controls, encryption, and compliance measures in GCP environments.
- Ability to troubleshoot complex issues, perform root cause analysis, and implement effective solutions in a timely manner.
- Proficiency in data visualization tools such as Tableau, Looker, or Data Studio to create insightful dashboards and reports for business users.
- Strong communication and interpersonal skills to effectively collaborate with technical and non-technical stakeholders, articulate complex concepts, and drive consensus.
- Experience with agile methodologies and project management tools like Jira or Asana for sprint planning, backlog grooming, and task tracking.
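To make the PySpark UDF expectation above concrete, here is a minimal sketch of defining and applying a Python UDF; the column name and sample rows are invented for illustration.

```python
# Minimal PySpark UDF sketch (column name and sample data are illustrative).
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-sketch").getOrCreate()

@udf(returnType=StringType())
def normalize_email(email):
    # Trim and lower-case an email address; pass missing values through as None.
    return email.strip().lower() if email else None

df = spark.createDataFrame([(" Alice@Example.COM ",), (None,)], ["email"])
df.select(normalize_email("email").alias("email_normalized")).show()
```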
Computer Operator duties and responsibilities
- Identifying and correcting file and system errors.
- Performing data processing operations according to a business production schedule.
- Performing backup procedures to reduce the risk of data loss.
- Maintaining computer equipment and inventory and organizing repairs as needed.

Job Location: Hyderabad / Bangalore / Chennai / Pune / Nagpur
Notice period: Immediate - 15 days
1. Python Developer with Snowflake
Job Description :
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on experience with SQL and the ability to write complex queries.
- Strong understanding of how to connect to Snowflake using Python and handle any type of file (a minimal connection sketch follows this list).
- Development of data analysis and data processing engines using Python.
- Good experience in data transformation using Python.
- Experience in Snowflake data loads using Python.
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning will be an added advantage.
- Good understanding of data warehouse (DWH) concepts.
- Interpret/analyze business requirements and functional specifications.
- Good to have: dbt, Fivetran, and AWS knowledge.
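A minimal sketch of what connecting to Snowflake from Python and loading a file can look like, using the snowflake-connector-python package; the account, credentials, warehouse, database, table, and file path are all placeholders.

```python
# Minimal sketch: connect to Snowflake and load a local CSV (all names are placeholders).
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    cur.execute("CREATE TABLE IF NOT EXISTS EVENTS (ID NUMBER, PAYLOAD VARCHAR)")
    # Stage the file into the table stage, then copy it into the table.
    cur.execute("PUT file:///tmp/events.csv @%EVENTS")
    cur.execute("COPY INTO EVENTS FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    cur.execute("SELECT COUNT(*) FROM EVENTS")
    print(cur.fetchone()[0])
finally:
    cur.close()
    conn.close()
```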

A GCP Data Analyst profile must have the below skill sets:
- Knowledge of programming languages like SQL, Oracle, R, MATLAB, Java, and Python
- Data cleansing, data visualization, data wrangling
- Data modeling and data warehouse concepts
- Ability to adapt to big data platforms like Hadoop and Spark for stream and batch processing
- GCP (Cloud Dataproc, Cloud Dataflow, Cloud Datalab, Cloud Dataprep, BigQuery, Cloud Datastore, Cloud Data Fusion, AutoML, etc.)



Senior Engineer – Artificial Intelligence / Computer Vision
(Business Unit – Autonomous Vehicles & Automotive - AVA)
We are seeking an exceptional, experienced senior engineer with deep expertise in Computer Vision, Neural Networks, 3D Scene Understanding and Sensor Data Processing. The expectation is to lead a growing team of engineers to help them build and deliver customized solutions for our clients. A solid engineering as well as team management background is a must.
About MulticoreWare Inc
MulticoreWare Inc is a software and solutions development company with top-notch talent and skill in a variety of micro-architectures, including multi-thread, multi-core, and heterogeneous hardware platforms. It works in sectors including High Performance Computing (HPC), Media & AI Analytics, Video Solutions, Autonomous Vehicle and Automotive software, all of which are rapidly expanding. The Autonomous Vehicles & Automotive business unit specializes in delivering optimized solutions for sophisticated sensor fusion intelligence and the design of algorithms & implementation of software to be deployed on a variety of automotive grade hardware platforms.
Role Responsibilities
● Lead a team to solve problems in the perception / autonomous-systems space and turn ideas into code and products
● Drive all technical elements of development, such as project requirements definition, design, implementation, unit testing, integration, and software delivery
● Implement cutting-edge AI solutions on embedded platforms and optimize them for performance; design and develop hardware-architecture-aware algorithms
● Contribute to the vision and long-term strategy of the business unit
Required Qualifications (Must Have)
● 3 - 7 years of experience with real world system building, including design, coding (C++/Python) and evaluation/testing (C++/Python)
● Solid experience in 2D / 3D Computer Vision algorithms, Machine Learning and Deep Learning fundamentals – Theory & Practice. Hands-on experience with Deep Learning frameworks like Caffe, TensorFlow or PyTorch
● Expert-level knowledge in any of the areas related to signal/data processing or autonomous/robotics software development (perception, localization, prediction, planning), multi-object tracking, and sensor fusion algorithms, with familiarity with Kalman filters, particle filters, clustering methods, etc. (a minimal Kalman filter sketch follows this list)
● Good project management and execution capabilities, as well as good communication and coordination ability
● Bachelor’s degree in Computer Science, Computer Engineering, Electrical Engineering, or related fields
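For context on the Kalman filter familiarity mentioned above, here is a bare-bones one-dimensional constant-velocity predict/update loop in NumPy; the transition, measurement, and noise matrices are illustrative values only.

```python
# Bare-bones 1-D constant-velocity Kalman filter (illustrative values only).
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial covariance

def step(z):
    """One predict/update cycle for a scalar position measurement z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = np.array([[z]]) - H @ x         # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x.ravel()

for z in [1.0, 1.2, 1.35, 1.5]:
    print(step(z))
```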
Preferred Qualifications (Nice-to-Have)
● GPU architecture and CUDA programming experience, as well as knowledge of AI inference optimization using quantization, compression, or model pruning
● Track record of research excellence with prior publications in top-tier conferences and journals
- Key responsibility is to design and develop a data pipeline for real-time data integration, processing, and model execution (if required), exposing output via MQ / API / NoSQL DB for consumption
- Provide technical expertise to design efficient data ingestion solutions to store and process unstructured data such as documents, audio, images, weblogs, etc.
- Develop API services to provide data as a service
- Prototype solutions for complex data processing problems using AWS cloud-native services
- Implement automated audit and quality assurance checks in the data pipeline
- Document and maintain data lineage from various sources to enable data governance
- Coordinate with BIU, IT, and other stakeholders to provide best-in-class data pipeline solutions, exposing data via APIs, loading into downstream systems, NoSQL databases, etc.
Skills
- Programming experience using Python & SQL
- Extensive working experience in data engineering projects, using AWS Kinesis, AWS S3, DynamoDB, EMR, Lambda, Athena, etc. for event processing (a minimal Lambda/Kinesis sketch follows this list)
- Experience and expertise in implementing complex data pipelines
- Strong familiarity with the AWS toolset for storage and processing; able to recommend the right tools/solutions available to address specific data processing problems
- Hands-on experience in unstructured data processing (audio, images, documents, weblogs, etc.)
- Good analytical skills with the ability to synthesize data to design and deliver meaningful information
- Know-how of any NoSQL DB (DynamoDB, MongoDB, Cosmos DB, etc.) will be an advantage.
- Ability to understand business functionality, processes, and flows
- Good combination of technical and interpersonal skills with strong written and verbal communication; detail-oriented with the ability to work independently
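As a rough sketch of the AWS event-processing stack listed above, here is a minimal Lambda handler that decodes Kinesis records and writes them to DynamoDB with boto3; the table name and record fields are hypothetical.

```python
# Minimal Lambda handler: Kinesis records in, DynamoDB items out (names are hypothetical).
import base64
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("events")   # hypothetical table name

def handler(event, context):
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # A production pipeline would run audit and quality checks here before persisting.
        table.put_item(Item={
            "event_id": record["kinesis"]["sequenceNumber"],
            "payload": json.dumps(payload),
        })
    return {"processed": len(records)}
```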
Functional knowledge
- Real-time Event Processing
- Data Governance & Quality assurance
- Containerized deployment
- Linux
- Unstructured Data Processing
- AWS Toolsets for Storage & Processing
- Data Security
About the Organization
Real Estate Syndicators leverage SyndicationPro to manage billions in real estate assets and thousands of investors. Growing at 17% MoM, SyndicationPro is the #1 platform to automate real estate fundraising and investor relations and close more deals!
What makes SyndicationPro unique is that it is cash flow positive while maintaining a healthy growth rate and is backed by seasoned investors and real estate magnates. We are also a FirstPrinciples.io Venture Studio Company, giving us the product engineering, growth, and operational backbone in India to scale our team.
Our story has been covered by Bloomberg Business, Yahoo Finance, and several other renowned news outlets.
Why this Role
We are scaling our Customer Support & Success organization, and you will have the opportunity to see how the systems and processes are built for a SaaS company that is rapidly scaling. Our Customer Support & Success Team is led by experienced SaaS operators who can provide you the guidance and mentorship to grow quickly. You will also have the opportunity for faster-than-normal promotion cycles.
Roles and Responsibilities
- Work on migrating sensitive customer data from a third-party investment platform or Excel to SyndicationPro with minimal supervision.
- Understand the client’s needs and set up the SyndicationPro platform to meet customer expectations.
- Analyze customer expectations and data to share an expected completion time.
- Manage multiple migrations at the same time to ensure they are completed within the stipulated time.
- Work closely with internal and customer-facing teams to deep dive on a customer migration request and workflow, using our systems to ensure nothing falls to the bottom of the to-do list.
- Review data for deficiencies or errors, correct any incompatibilities, and check the output.
- Keep current on product releases and updates
Desired candidate profile:
- Proven data entry or migration work experience will be an asset
- Experience with MS Office and data programs
- Attention to detail
- Confidentiality
- Organization skills, with an ability to stay focused on assigned tasks
Expectations from the candidate:
- 0-2 years of experience. Prior experience in SaaS environments will be an added advantage
- Resolve processing and migration problems, working with the technical team
- Pay attention to detail to maintain data accuracy
- Self-motivated and willing to excel under strict/short deadlines
- Ability to multitask as needed and manage time effectively
- Must be comfortable working during night shifts.



An experienced and hands-on Technical Architect to lead our Video analytics & Surveillance product
• An ideal candidate would have worked on large-scale video platforms (YouTube, Netflix, Hotstar, etc.) or surveillance software
• As a Technical Architect, you are hands-on and also a top contributor to product development
• Lead teams on time-sensitive projects
Skills Required:
• Expert-level Python programming skills are a MUST
• Hands-on experience with Deep Learning & Machine Learning projects is a MUST
• Must have experience in the design and development of products
• Review code and mentor the team in improving the quality and efficiency of delivery
• Ability to troubleshoot and address complex technical problems.
• Must be a quick learner with the ability to adapt to increasing customer demands
• Hands-on experience designing and deploying large-scale Docker and Kubernetes workloads
• Can lead a technically strong team in sharpening the product further
• Strong design capability with microservices-based architecture and awareness of its pitfalls
• Should have worked on large-scale data processing systems
• Good understanding of DevOps processes
• Familiar with Identity management, Authorization & Authentication frameworks
• Possesses very strong software design, enterprise networking, and advanced problem-solving skills
• Experience writing technical architecture documents
CANDIDATE WILL BE DEPLOYED IN A FINANCIAL CAPTIVE ORGANIZATION @ PUNE (KHARADI)
Below are the job details:
Experience: 10 to 18 years
Mandatory skills:
- Data migration
- Data flow
The ideal candidate for this role will have the below experience and qualifications:
- Experience of building a range of Services in a Cloud Service provider (ideally GCP)
- Hands-on design and development on Google Cloud Platform (GCP) across a wide range of GCP services, including hands-on experience with GCP storage and database technologies.
- Hands-on experience in architecting, designing or implementing solutions on GCP, K8s, and other Google technologies. Security and Compliance, e.g. IAM and cloud compliance/auditing/monitoring tools
- Desired Skills within the GCP stack - Cloud Run, GKE, Serverless, Cloud Functions, Vision API, DLP, Data Flow, Data Fusion
- Prior experience of migrating on-prem applications to cloud environments. Knowledge and hands-on experience with Stackdriver, Pub/Sub, VPCs, subnets, route tables, load balancers, and firewalls, both on-premise and in GCP.
- Integrate, configure, deploy and manage centrally provided common cloud services (e.g. IAM, networking, logging, Operating systems, Containers.)
- Manage SDN in GCP. Knowledge and experience of DevOps technologies around Continuous Integration & Delivery in GCP using Jenkins.
- Hands-on experience with Terraform, Kubernetes, Docker, and Stackdriver.
- Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Groovy, Scala
- Knowledge or experience in DevOps tooling such as Jenkins, Git, Ansible, Splunk, Jira or Confluence, AppD, Docker, Kubernetes
- Act as a consultant and subject matter expert for internal teams to resolve technical deployment obstacles and improve the product’s vision. Ensure compliance with centrally defined security.
- Financial experience is preferred
- Ability to learn new technologies and rapidly prototype newer concepts
- Top-down thinker, excellent communicator, and great problem solver
Experience: 10 to 18 years
Location: Pune
The candidate must have experience in the below:
- GCP Data Platform
- Data Processing: Dataflow, Dataprep, Data Fusion
- Data Storage: BigQuery, Cloud SQL
- Pub/Sub, GCS buckets (a minimal Pub/Sub-to-BigQuery sketch follows this list)
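To illustrate the Pub/Sub and BigQuery pieces of this stack, here is a minimal sketch using the Google Cloud Python clients to pull messages and stream them into a table; the project, subscription, dataset, and table names are placeholders.

```python
# Minimal sketch: pull Pub/Sub messages and stream them into BigQuery (names are placeholders).
import json
from google.cloud import bigquery, pubsub_v1

project = "my-project"
subscriber = pubsub_v1.SubscriberClient()
subscription = subscriber.subscription_path(project, "events-sub")

bq = bigquery.Client(project=project)
table_id = f"{project}.analytics.events"

response = subscriber.pull(request={"subscription": subscription, "max_messages": 10})
rows = [json.loads(m.message.data) for m in response.received_messages]

if rows:
    errors = bq.insert_rows_json(table_id, rows)   # streaming insert
    if not errors:
        ack_ids = [m.ack_id for m in response.received_messages]
        subscriber.acknowledge(request={"subscription": subscription, "ack_ids": ack_ids})
```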

REQUIREMENT:
- Previous experience of working in large scale data engineering
- 4+ years of experience working in data engineering and/or backend technologies with cloud experience (any) is mandatory.
- Previous experience of architecting and designing backends for large-scale data processing.
- Familiarity and experience working with different technologies related to data engineering – different database technologies, Hadoop, Spark, Storm, Hive, etc.
- Hands-on, with the ability to contribute to a key portion of the data engineering backend.
- Self-driven and motivated to deliver exceptional results.
- Familiarity and experience working with the different stages of data engineering – data acquisition, data refining, large-scale data processing, and efficient data storage for business analysis.
- Familiarity and experience working with different DB technologies and how to scale them.
RESPONSIBILITY:
- End-to-end responsibility for coming up with the data engineering architecture and design, followed by development and implementation.
- Build data engineering workflow for large scale data processing.
- Discover opportunities in data acquisition.
- Bring industry best practices for data engineering workflow.
- Develop data set processes for data modelling, mining and production.
- Take on additional technical responsibilities to drive an initiative to completion
- Recommend ways to improve data reliability, efficiency and quality
- Goes out of their way to reduce complexity.
- Humble and outgoing - engineering cheerleaders.