Data Engineer
- Income level not specified
- Astana
Brief Description
DigitalTeam is hiring. We are a worldwide software service provider, and we are currently looking for a Data Engineer to join our team.
Responsibilities
- Design, develop, and maintain efficient, scalable data pipelines for both batch and real-time processing
- Build and manage workflows using Airflow and ensure robust orchestration of ETL processes
- Work extensively with Apache Spark (preferably PySpark) to process large datasets
- Implement and manage data lakehouse solutions using Delta Lake and Databricks
- Integrate and optimize data pipelines into Snowflake or other cloud data warehouses
- Apply medallion architecture principles (Bronze → Silver → Gold layers) to structure and organize data (see the sketch after this list)
- Cleanse, standardize, and transform complex and semi-structured datasets
- Stay up-to-date with best practices in data engineering, infrastructure, and governance
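To give a flavor of the medallion-layer work described above, here is a minimal, hypothetical PySpark + Delta Lake sketch of a Bronze → Silver cleansing step. The paths, column names, and session configuration are illustrative assumptions, not part of this vacancy.

```python
# Illustrative sketch only: a minimal Bronze -> Silver step with PySpark and Delta Lake.
# Paths and column names are hypothetical; delta-spark is assumed to be installed.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("medallion-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Bronze: raw, semi-structured events landed as-is (hypothetical path).
bronze = spark.read.json("/lake/bronze/events/")

# Silver: deduplicated, cleansed, and standardized records.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts"))
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/events/")
```

In a typical setup, a Gold layer would then aggregate the Silver table into business-level views, with Airflow orchestrating each step as a scheduled task.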
Requirements
- 7+ years of hands-on experience as a Data Engineer or in a similar role
- Strong expertise with Apache Spark (preferably PySpark)
- Solid experience using Airflow for orchestration of data workflows
- Proven work with Delta Lake and Databricks environments
- Hands-on experience with Snowflake or other cloud-based data warehouses (e.g., BigQuery, Redshift)
- Deep understanding of data pipelines, including real-time streaming and batch ingestion
- Experience implementing medallion/lakehouse architecture
- Proficient in data modeling, cleansing, and transformation techniques
- Comfortable working with semi-structured and messy datasets
- Strong understanding of data lineage, quality controls, and monitoring best practices
- Excellent communication skills and experience working in distributed and cross-functional teams
Conditions
- Official employment
- Flexible work schedule
- Professional growth and development with a friendly team
- Remote work
- Detailed onboarding