
About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and its proprietary business and consumer databases.
Data Axle India is recognized as a Great Place to Work! This prestigious designation is a testament to our collective efforts in fostering an exceptional workplace culture and creating an environment where every team member can thrive.
General Summary:
As a Digital Data Management Architect, you will design, implement, and optimize advanced data management systems that support processing billions of digital transactions, ensuring high availability and accuracy. You will leverage your expertise in developing identity graphs, real-time data processing, and API integration to drive insights and enhance user experiences across digital platforms. Your role is crucial in building scalable and secure data architectures that support real-time analytics, identity resolution, and seamless data flows across multiple systems and applications.
Roles and Responsibilities:
- Data Architecture & System Design:
- Design and implement scalable data architectures capable of processing billions of digital transactions in real time, ensuring low latency and high availability.
- Architect data models, workflows, and storage solutions to enable seamless real-time data processing, including stream processing and event-driven architectures.
- Identity Graph Development:
- Lead the development and maintenance of a comprehensive identity graph to unify disparate data sources, enabling accurate identity resolution across channels.
- Develop algorithms and data-matching techniques to enhance identity linking while maintaining data accuracy and privacy (a minimal linking sketch follows this list).
- Real-Time Data Processing & Analytics:
- Implement real-time data ingestion, processing, and analytics pipelines to support immediate data availability and actionable insights.
- Work closely with engineering teams to integrate and optimize real-time data processing frameworks such as Apache Kafka, Apache Flink, or Spark Streaming.
- API Development & Integration:
- Design and develop real-time APIs that facilitate data access and integration across internal and external platforms, focusing on security, scalability, and performance.
- Collaborate with product and engineering teams to define API specifications, data contracts, and SLAs to meet business and user requirements.
- Data Governance & Security:
- Establish data governance practices to maintain data quality, privacy, and compliance with regulatory standards across all digital transactions and identity graph data.
- Ensure security protocols and access controls are embedded in all data workflows and API integrations to protect sensitive information.
- Collaboration & Stakeholder Engagement:
- Partner with data engineering, analytics, and product teams to align data architecture with business requirements and strategic goals.
- Provide technical guidance and mentorship to junior architects and data engineers, promoting best practices and continuous learning.
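To ground the identity-graph responsibilities above, here is a minimal, illustrative sketch of deterministic identity linking using a union-find structure. All names and identifiers are hypothetical; this is not Data Axle's implementation, and production identity resolution adds fuzzy matching, confidence scoring, and privacy controls.

```python
from collections import defaultdict


class IdentityGraph:
    """Toy union-find over identifiers (emails, device IDs, cookies)."""

    def __init__(self):
        self._parent = {}

    def _find(self, x):
        # Follow parents to the root, halving paths as we go.
        self._parent.setdefault(x, x)
        while self._parent[x] != x:
            self._parent[x] = self._parent[self._parent[x]]
            x = self._parent[x]
        return x

    def link(self, a, b):
        # Record that identifiers a and b belong to the same person.
        self._parent[self._find(a)] = self._find(b)

    def clusters(self):
        # Group every known identifier under its resolved root.
        groups = defaultdict(set)
        for x in list(self._parent):
            groups[self._find(x)].add(x)
        return list(groups.values())


graph = IdentityGraph()
graph.link("alice@example.com", "device-123")  # seen together in a login event
graph.link("device-123", "cookie-abc")         # seen together in a web session
print(graph.clusters())  # one cluster containing all three identifiers
```

Union-find keeps each link operation near constant time, which is one reason this structure is a common core for identity resolution at billion-record scale.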
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
- 10+ years of experience in data architecture, digital data management, or a related field, with a proven track record in managing billion+ transactions.
- Deep experience with identity resolution techniques and building identity graphs.
- Strong proficiency in real-time data processing technologies (e.g., Kafka, Flink, Spark) and API development (RESTful and/or GraphQL); a minimal consumer sketch follows this list.
- In-depth knowledge of database systems (SQL, NoSQL), data warehousing solutions, and cloud-based platforms (AWS, Azure, or GCP).
- Familiarity with data privacy regulations (e.g., GDPR, CCPA) and data governance best practices.
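As a rough illustration of the streaming stack named above, the sketch below consumes transaction events from a Kafka topic. It assumes the third-party kafka-python package (pip install kafka-python); the broker address, topic, and field names are illustrative.

```python
import json

from kafka import KafkaConsumer  # third-party: kafka-python

consumer = KafkaConsumer(
    "digital-transactions",              # hypothetical topic name
    bootstrap_servers="localhost:9092",  # illustrative broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would enrich the event, resolve identity, and route it
    # onward; here we just read the fields we expect in a transaction record.
    print(event.get("user_id"), event.get("amount"))
```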
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
Enterprise Data Architect - Dataeconomy (25+ Years Experience)
About Dataeconomy:
Dataeconomy is a rapidly growing company at the forefront of Information Technology. We are driven by data and committed to using it to make better decisions, improve our products, and deliver exceptional value to our customers.
Job Summary:
Dataeconomy seeks a seasoned and strategic Enterprise Data Architect to lead the company's data transformation journey. With 25+ years of experience in data architecture and leadership, you will be pivotal in shaping our data infrastructure, governance, and culture. You will leverage your extensive expertise to build a foundation for future growth and innovation, ensuring our data assets are aligned with business objectives and drive measurable value.
Responsibilities:
Strategic Vision and Leadership:
Lead the creation and execution of a long-term data strategy aligned with the company's overall vision and goals.
Champion a data-driven culture across the organization, fostering cross-functional collaboration and data literacy.
Advise senior leadership on strategic data initiatives and their impact on business performance.
Architecture and Modernization:
Evaluate and modernize the existing data architecture, recommending and implementing innovative solutions.
Design and implement a scalable data lake/warehouse architecture for future growth.
Advocate for and adopt cutting-edge data technologies and best practices.
ETL Tool Experience (8+ years):
Extensive experience in designing, developing, and implementing ETL (Extract, Transform, Load) processes using industry-standard tools such as Informatica PowerCenter, IBM DataStage, Microsoft SSIS, or open-source options like Apache Airflow.
Proven ability to build and maintain complex data pipelines that integrate data from diverse sources, transform it into usable formats, and load it into target systems.
Deep understanding of data quality and cleansing techniques to ensure the accuracy and consistency of data across the organization.
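As an illustration of the ETL pattern described above, here is a skeletal Apache Airflow 2.x DAG wiring an extract → transform → load sequence. The task bodies are stubs and the DAG id is hypothetical; it is not a real Dataeconomy pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull rows from a source system (stubbed out here).
    return [{"id": 1, "amount": "42.50"}]


def transform(ti, **context):
    # Cleanse and convert types before loading.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "amount": float(row["amount"])} for row in rows]


def load(ti, **context):
    # Write the cleaned rows to the target system (stubbed out here).
    print(ti.xcom_pull(task_ids="transform"))


with DAG(
    dag_id="example_etl",            # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```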
Data Governance and Quality:
Establish and enforce a comprehensive data governance framework ensuring data integrity, consistency, and security.
Develop and implement data quality standards and processes for continuous data improvement.
Oversee the implementation of master data management and data lineage initiatives.
Collaboration and Mentorship:
Mentor and guide data teams, including architects, engineers, and analysts, on data architecture principles and best practices.
Foster a collaborative environment where data insights are readily shared and acted upon across the organization.
Build strong relationships with business stakeholders to understand and translate their data needs into actionable solutions.
Qualifications:
Education: Master's degree in Computer Science, Information Systems, or a related field; Ph.D. preferred.
Experience: 25+ years of experience in data architecture and design, with 10+ years in a leadership role.
Technical Skills:
Deep understanding of TOGAF, AWS, MDM, EDW, the Hadoop ecosystem (MapReduce, Hive, HBase, Pig, Flume, Sqoop), cloud data platforms (Azure Synapse, Google BigQuery), modern data pipelines, streaming analytics, and data governance frameworks (a short PySpark example follows this list).
Proficiency in programming languages (Java, Python, SQL), scripting languages (Bash, Python), data modelling tools (ER diagramming software), and BI tools.
Extensive expertise in ETL tools (Informatica PowerCenter, IBM DataStage, Microsoft SSIS, Apache Airflow).
Familiarity with emerging data technologies (AI/ML, blockchain), data security and compliance frameworks.
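As a small illustration of the Spark side of the skills above, here is a minimal PySpark batch job; the input path and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("order-rollup").getOrCreate()

# Read a raw extract (hypothetical path and schema).
orders = spark.read.csv("hdfs:///data/orders.csv", header=True, inferSchema=True)

# Aggregate order value per customer - the kind of rollup an EDW feed needs.
rollup = (
    orders.groupBy("customer_id")
          .agg(F.sum("amount").alias("total_amount"))
          .orderBy(F.desc("total_amount"))
)

rollup.show(10)
spark.stop()
```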
Soft Skills:
Outstanding communication, collaboration, and leadership skills.
Strategic thinking and problem-solving abilities with a focus on delivering impactful solutions.
Strong analytical and critical thinking skills.
Ability to influence and inspire teams to achieve goals.
- Design and implement effective database solutions and models to store and retrieve company data.
- Examine and identify database structural necessities by evaluating client operations, applications, and programming.
- Assess database implementation procedures to ensure they comply with internal and external regulations.
- Install and organize information systems to guarantee company functionality.
- Prepare accurate database design and architecture reports for management and executive teams.
- Oversee the migration of data from legacy systems to new solutions (a toy migration sketch follows this list).
- Monitor the system performance by performing regular tests, troubleshooting, and integrating new features.
- Recommend solutions to improve new and existing database systems.
- Educate staff members through training and individual support.
- Offer support by responding to system problems in a timely manner.
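As a toy illustration of the design-and-migrate duties in this list, the sketch below models a target schema and migrates cleansed rows out of a legacy table. It uses Python's built-in sqlite3 so it runs as-is; all table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Design: a normalized target model alongside a stand-in legacy table.
conn.executescript("""
    CREATE TABLE legacy_customers (raw_name TEXT, raw_email TEXT);
    CREATE TABLE customers (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        email TEXT NOT NULL UNIQUE
    );
""")

# Seed the legacy side as a stand-in for an old system's export.
conn.executemany(
    "INSERT INTO legacy_customers VALUES (?, ?)",
    [(" Ada Lovelace ", "ADA@EXAMPLE.COM"), ("Alan Turing", "alan@example.com")],
)

# Migrate: cleanse while copying (trim whitespace, lowercase emails).
conn.execute("""
    INSERT INTO customers (name, email)
    SELECT TRIM(raw_name), LOWER(raw_email) FROM legacy_customers
""")

for row in conn.execute("SELECT id, name, email FROM customers"):
    print(row)
conn.close()
```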

Required Skills:
- Proven work experience as an Enterprise / Data / Analytics Architect - Data Platform in HANA XSA, XS, Data Intelligence and SDI
- Can work on new and existing architecture decisions in HANA XSA, XS, Data Intelligence, and SDI
- Well versed with data architecture principles, software / web application design, API design, UI / UX capabilities, XSA / Cloud Foundry architecture
- In-depth understanding of database structure (HANA in-memory) principles.
- In-depth understanding of ETL solutions and data integration strategy.
- Excellent knowledge of Software and Application design, API, XSA, and microservices concepts
Roles & Responsibilities:
- Advise on and ensure compliance with the defined data architecture principles.
- Identify new technology updates and development tools, including new releases/upgrades/patches, as required.
- Analyze technical risks and advise on risk mitigation strategy.
- Advise on and ensure compliance with existing and newly developed data and reporting standards, including naming conventions.
The time window is ideally AEST (8 am till 5 pm), which means starting at 3:30 am IST. We understand this can be very early for an SME supporting from India; hence, we can consider candidates who can support from at least 7 am IST (starting earlier is possible).
Company Profile:
Easebuzz is a payment solutions (fintech) company that enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We focus on building plug-and-play products, including payment infrastructure, to solve complete business problems. It is a place where everything related to payments, lending, subscriptions, and eKYC happens at the same time.
We have been consistently profitable and are constantly developing innovative new products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a $4M fundraise in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz's corporate culture is tied to the vision of building a workplace with open communication and minimal bureaucracy. An equal-opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.
Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.
Salary: As per company standards.
Designation: Data Engineer
Location: Pune
- Experience with ETL, data modeling, and data architecture.
- Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services (Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue) in combination with third-party tools.
- Experience with an AWS cloud data lake for development of real-time or near-real-time use cases.
- Experience with messaging systems such as Kafka/Kinesis for real-time data ingestion and processing (a minimal producer sketch follows this list).
- Build data pipeline frameworks to automate high-volume and real-time data delivery.
- Create prototypes and proofs of concept for iterative development.
- Experience with NoSQL databases such as DynamoDB, MongoDB, etc.
- Create and maintain optimal data pipeline architecture.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big-data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders, including the executive, product, data, and design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Evangelize a high standard of quality, reliability, and performance for data models and algorithms that can be streamlined into the engineering and science workflows.
- Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.
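As a minimal illustration of the Kinesis ingestion mentioned in this list, the snippet below publishes one event to a stream. It assumes the boto3 package, valid AWS credentials, and an existing stream; the region, stream, and field names are illustrative.

```python
import json

import boto3  # third-party AWS SDK for Python

kinesis = boto3.client("kinesis", region_name="ap-south-1")

event = {"order_id": "o-1001", "amount": 499.0, "currency": "INR"}

# PartitionKey controls shard routing; keying by order id keeps a given
# order's events in sequence on the same shard.
kinesis.put_record(
    StreamName="orders-ingest",  # hypothetical stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["order_id"],
)
```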
Employment Type
Full-time


- 6+ years of recent hands-on Java development
- Developing data pipelines in AWS or Google Cloud
- Java, Python, JavaScript programming languages
- Great understanding of designing for performance, scalability, and reliability of data-intensive applications
- Hadoop MapReduce, Spark, Pig; understanding of database fundamentals and advanced SQL knowledge
- In-depth understanding of object-oriented programming concepts and design patterns
- Ability to communicate clearly to technical and non-technical audiences, verbally and in writing
- Understanding of full software development life cycle, agile development and continuous integration
- Experience in Agile methodologies including Scrum and Kanban
Pingahla is recruiting Business Intelligence consultants/senior consultants who can help us with information management projects (domestic, onshore, and offshore) as developers and team leads. Candidates are expected to have 3-6 years of experience with Informatica PowerCenter/Talend DI/Informatica Cloud and must be very proficient in business intelligence in general. The job is based out of our Pune office.
Responsibilities:
- Manage the customer relationship by serving as the single point of contact before, during and after engagements.
- Architect data management solutions.
- Provide technical leadership to other consultants and/or customer/partner resources.
- Design, develop, test and deploy data integration solutions in accordance with customer’s schedule.
- Supervise and mentor all intermediate and junior level team members.
- Provide regular reports to communicate status both internally and externally.
Qualifications:
A typical profile that would suit this position would have the following background:
- A graduate from a reputed engineering college
- Excellent IQ and analytical skills, with the ability to grasp new concepts and learn new technologies.
- A willingness to work with a small team in a fast-growing environment.
- A good knowledge of Business Intelligence concepts
Mandatory Requirements:
- Knowledge of Business Intelligence
- Good knowledge of at least one of the following data integration tools - Informatica PowerCenter, Talend DI, Informatica Cloud
- Knowledge of SQL
- Excellent English and communication skills
- Intelligent, quick to learn new technologies
- Track record of accomplishment and effectiveness in handling customers and managing complex data management needs