Job description
Your Tasks
- Designing and developing our new data processing pipeline
- Optimizing queries executed in distributed systems by improving query plans and taking data partitioning and compression into account
- Engineering ETL processes across the whole data pipeline using Python (Apache Airflow); a brief sketch of such a pipeline follows this list
- Developing and maintaining clearly defined APIs that enable other developers to consume our micro-services in a self-service model
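For context, Airflow-based ETL work of the kind described above typically revolves around defining DAGs in Python. Below is a minimal sketch of what such a pipeline definition can look like; the DAG id, task names, and data are hypothetical placeholders (not our actual pipeline), and it assumes Apache Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw records from a source system (placeholder data).
    return [{"id": 1, "value": 42}]


def transform(ti):
    # Read the extract output from XCom and reshape it.
    rows = ti.xcom_pull(task_ids="extract")
    return [{"id": r["id"], "value_doubled": r["value"] * 2} for r in rows]


def load(ti):
    # Write the transformed rows to a target store (placeholder).
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="example_etl",           # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # "schedule" requires Airflow >= 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Classic extract -> transform -> load ordering.
    extract_task >> transform_task >> load_task
```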
Your Skills
- Academic degree in computer science or a related field, or
- Initial work experience in a similar position (e.g. an internship)
- Basic SQL skills, including experience with database technologies such as MySQL, PostgreSQL, or MS SQL Server
- Fluency in Python or another object-oriented language
- Initial experience with Hadoop frameworks would be a plus
- Intercultural competence and very good communication skills in English (spoken and written)