Job Description
About the Job:
🏢 Company: Cornerstone
💼 Role: Data Engineer
📍 Location: Hyderabad
⏳ Experience: 3–6 Years
🔖 Job Type: Full Time
Job Description:
Cornerstone is seeking a skilled and driven Data Engineer to join its Hyderabad office, contributing to the development of scalable, data-driven solutions within its AI-powered workforce platform. As a Data Engineer, you will play a pivotal role in designing, building, and maintaining robust data pipelines that support analytics, reporting, and machine learning initiatives. This role is ideal for professionals passionate about transforming raw data into meaningful insights that empower business decisions across domains such as Finance, Human Resources, and Customer Success.
In this position, you will work with modern data technologies including Snowflake, dbt, Airflow, and Fivetran to create efficient ETL workflows and optimize data infrastructure. You will collaborate closely with data scientists, analysts, and business stakeholders to ensure that data is accurate, accessible, and actionable. The role also involves preparing datasets for both technical and non-technical users, enabling seamless consumption of data across the organization. Your ability to automate workflows and maintain high data quality standards will be crucial to success.
As part of Cornerstone’s innovation-driven culture, you will contribute to deploying machine learning models in production and building scalable data products on cloud platforms such as AWS, Azure, or GCP. You will also focus on ensuring data security, compliance, and performance optimization across systems. This is a high-impact role that offers exposure to large-scale data environments, cutting-edge tools, and global collaboration, making it an excellent opportunity for data professionals looking to grow in a dynamic and forward-thinking organization.
Roles & Responsibilities:
- Design, build, and maintain scalable batch and real-time data pipelines to support business analytics and machine learning use cases. Ensure high performance and reliability of data systems.
- Develop and manage ETL (Extract, Transform, Load) processes to efficiently handle data from multiple sources. Ensure seamless data flow and transformation across systems.
- Automate data workflows, including ingestion, aggregation, and transformation processes, to improve efficiency and reduce manual intervention.
- Optimize and maintain data infrastructure to ensure accurate, consistent, and timely data availability for stakeholders.
- Prepare and transform raw data into structured, consumable datasets for analytics, reporting, and business intelligence purposes.
- Collaborate with data scientists and business teams to deploy machine learning models into production environments. Support model lifecycle and performance monitoring.
- Build and manage data products and pipelines on cloud platforms such as AWS, Azure, or Google Cloud Platform, ensuring scalability and performance.
- Ensure data accuracy, integrity, security, and compliance by implementing quality control measures and governance practices.
- Monitor data systems performance and troubleshoot issues, implementing optimizations to enhance system efficiency.
- Work closely with cross-functional teams to understand data requirements and deliver solutions aligned with business goals.
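The ETL work described above can be sketched, in highly simplified form, with Python's standard library. This is an illustrative toy only: the CSV source, table name, and data-quality rule are made up for the example and do not reflect Cornerstone's actual schema or tooling (a real pipeline would use Snowflake, dbt, and similar platforms).

```python
import csv
import io
import sqlite3

# Sample raw input, standing in for an upstream source (e.g. an export
# from an ingestion tool). Note the missing salary on employee 102.
RAW_CSV = """employee_id,department,salary
101,Finance,1200000
102,HR,
103,Finance,1500000
"""

def extract(text):
    """Extract: parse raw CSV rows into dictionaries."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop incomplete records and cast fields to proper types."""
    clean = []
    for row in rows:
        if not row["salary"]:
            continue  # basic data-quality rule: skip rows missing a salary
        clean.append((int(row["employee_id"]), row["department"], int(row["salary"])))
    return clean

def load(records, conn):
    """Load: write the cleaned records into a warehouse-style table."""
    conn.execute("CREATE TABLE IF NOT EXISTS employees (id INTEGER, dept TEXT, salary INTEGER)")
    conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
avg = conn.execute("SELECT AVG(salary) FROM employees WHERE dept = 'Finance'").fetchone()[0]
print(avg)  # → 1350000.0, the mean of the two valid Finance rows
```

The same extract → transform → load shape scales up when each stage is swapped for a managed service, which is exactly the kind of substitution this role involves.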
Requirements & Eligibility:
- Bachelor’s degree in Computer Science, Engineering, or a related field. Strong academic foundation in data structures and systems is essential.
- Advanced proficiency in SQL and experience with relational database design and optimization. Ability to handle complex queries and large datasets.
- Hands-on experience with modern data tools such as dbt, Snowflake, Airflow, and Fivetran. Familiarity with ETL pipeline development is required.
- Experience working with cloud-based data platforms such as AWS, Azure, or GCP. Understanding of cloud architecture and services is essential.
- Strong programming skills in languages such as Python, Java, C++, or Scala. Ability to write efficient, scalable, and maintainable code.
- Experience with data pipeline orchestration tools like Airflow for workflow automation and scheduling.
- Knowledge of data warehousing concepts and experience with tools like Databricks or Apache Spark for large-scale data processing.
- Familiarity with machine learning workflows and experience deploying models in production environments is a plus.
- Understanding of NoSQL databases such as MongoDB, Cassandra, Redis, or Neo4j is an added advantage.
- Strong problem-solving, communication, and organizational skills, with the ability to work independently and collaboratively in a fast-paced environment.
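The orchestration requirement above centers on dependency-ordered workflows. As a rough sketch of the idea behind tools like Airflow, the standard library's `graphlib` can resolve a task graph into a valid run order (the task names here are purely illustrative, not a real Airflow API):

```python
from graphlib import TopologicalSorter

# A toy pipeline expressed as task -> set of upstream dependencies,
# mimicking the DAG structure an orchestrator like Airflow manages.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# Resolve a run order in which every task follows its dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)  # → ['extract', 'transform', 'load', 'report']
```

Real orchestrators add scheduling, retries, and monitoring on top of this core topological-ordering idea.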
Expected Salary:
The expected salary for a Data Engineer at Cornerstone in Hyderabad typically ranges from ₹12 LPA to ₹25 LPA, depending on experience, expertise in data engineering tools, and proficiency in cloud platforms. Candidates with strong skills in Snowflake, Airflow, and large-scale data processing may command higher compensation, along with additional benefits such as bonuses, health insurance, and career growth opportunities.


