
Data Engineer


Coimbatore, Tamil Nadu, India


About the Role

We are looking for an experienced data engineer to join our growing team of data engineering experts. As a data engineer, you will be responsible for developing, maintaining, and optimizing our data warehouse, data pipelines, and data products. You will support multiple stakeholders, including software developers, database engineers, data analysts, and data scientists, to ensure an optimal data delivery architecture. The ideal candidate possesses strong technical ability to solve complex problems with data, is willing to learn new technologies and tools when necessary, and is comfortable supporting the data needs of multiple teams, stakeholders, and products.
Real-life experience with data lakehouse architectures is a plus (AWS, GCP, Spark, PySpark, Kafka, Airflow, BigQuery, etc.).


Responsibilities

1. Collaborate with the data team to design, develop, and maintain efficient data pipelines using Python and SQL.
2. Assist in transforming and loading data from diverse sources into our data lake and data warehouse.
3. Work with large datasets and ensure data quality, accuracy, and reliability.
4. Dive into the data and pinpoint manual tasks that can be eliminated through automation.
5. Learn to optimize and troubleshoot data pipelines for performance and scalability.
6. Apply SQL skills to write queries for data analysis and reporting.
7. Communicate effectively within the team and contribute innovative ideas.
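As a rough illustration of items 1, 2, and 6 above, a minimal Python-and-SQL pipeline step might look like the following sketch. The table name, columns, and sample records are hypothetical, and an in-memory SQLite database stands in for a real warehouse; a production pipeline would extract from actual source systems and load into a managed warehouse.

```python
import sqlite3

# Hypothetical raw records, standing in for rows extracted from a source system.
raw_orders = [
    ("2024-01-05", "widget", 3, 9.99),
    ("2024-01-05", "gadget", 1, 24.50),
    ("2024-01-06", "widget", 2, 9.99),
]

def load_and_report(records):
    """Load raw rows into a staging table, then build a report with SQL."""
    conn = sqlite3.connect(":memory:")  # stand-in for the real warehouse
    conn.execute(
        "CREATE TABLE orders (order_date TEXT, product TEXT, qty INTEGER, unit_price REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", records)

    # Transform/report step: daily revenue per product, expressed in SQL.
    rows = conn.execute(
        """
        SELECT order_date, product, SUM(qty * unit_price) AS revenue
        FROM orders
        GROUP BY order_date, product
        ORDER BY order_date, product
        """
    ).fetchall()
    conn.close()
    return rows
```

The same extract-load-transform shape carries over when SQLite is replaced by BigQuery or a lakehouse table and the transformation runs in PySpark.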

Big Data Engineering:
1. Data lakehouse design and implementation (AWS or GCP)
2. Create ETL/ELT and analytical pipelines in PySpark
3. Real-time data processing with Kafka and Spark
4. Data pipeline automation and orchestration (Airflow)
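The real-time item above would normally run on Kafka with Spark Structured Streaming; as a toy, framework-free sketch of the underlying idea (consuming events one at a time and maintaining keyed state between updates), with a hypothetical event shape:

```python
from collections import defaultdict

# Hypothetical click events, standing in for messages consumed from a Kafka topic.
events = [
    {"user": "a", "clicks": 1},
    {"user": "b", "clicks": 2},
    {"user": "a", "clicks": 4},
]

def running_totals(stream):
    """Consume events incrementally and keep a per-user running click count,
    the way a streaming job maintains keyed state across micro-batches."""
    state = defaultdict(int)
    for event in stream:
        state[event["user"]] += event["clicks"]
        # Emit the updated total after each event, as a streaming sink would see it.
        yield event["user"], state[event["user"]]
```

In Spark Structured Streaming the same aggregation would be expressed declaratively (e.g. a groupBy over a Kafka source) rather than with an explicit loop.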


Requirements

  • A minimum of 3 years' experience in data-related projects.

  • Experience with Big Data technology (ideally 2 years).

  • In-depth knowledge of Structured Query Language (SQL), MySQL, and PostgreSQL.

  • Expertise in database server specification, performance tuning, analysis, and optimization.

Big data expertise/experience:

  • Master's degree or specialized studies in Big Data technology

  • Spark cluster specification

  • Spark and PySpark programming

  • Pipeline orchestration (Airflow or similar)

  • Real-time data processing with Kafka and Spark

  • Data and ML operations: able to fully automate all data processes (ETL, ELT, DWH updates, analytics, reporting tables, machine learning)

  • Exceptional problem-solving and critical thinking skills.

  • Excellent collaboration and communication skills.

  • Experience with MongoDB and Hadoop components, and knowledge of REST APIs, will be an added advantage.

