
Who can apply
1. Candidates with a minimum of 1 year of experience.
Requirements
1. 1 to 3+ years of experience in data engineering, data architecture, or a related field
2. Strong proficiency in Python, SQL, and scripting for data processing
3. Experience with big data processing frameworks such as Apache Spark, Hadoop, or Flink
4. Hands-on experience with ETL tools such as Apache Airflow, dbt, or Talend
5. Knowledge of cloud platforms (AWS, GCP, or Azure) and their data services (Redshift, BigQuery, Snowflake, etc.)
6. Familiarity with data modeling techniques, database indexing, and query optimization
7. Understanding of real-time data streaming using Kafka, Kinesis, or Pub/Sub
8. Experience with Docker and Kubernetes for deploying data pipelines
9. Strong problem-solving and analytical skills, with a focus on performance optimization
10. Knowledge of data security, governance, and compliance best practices
Annual CTC: ₹10,00,000 - ₹12,00,000