Looking to work for a fast-growing, globally operating FinTech company headquartered in Frankfurt?
They develop tailor-made, multi-asset-class index solutions for ETFs and other index-linked investment products for leading global investment banks and asset managers. Flexibility, efficiency and the highest quality are at the heart of their business philosophy.
Requirements:
- Academic degree in computer science or a related field, or equivalent professional experience
- Fluency in Python
- Advanced experience in implementing and optimizing data processing jobs in Hadoop / Spark and relational database systems (RDBMS)
- Very good communication skills in English (verbal and written)
- Previous experience with CI/DevOps tools and platforms (e.g. Jenkins, Chef, Puppet, Ansible) is an advantage
- Intercultural competence, meticulousness and a passion for documentation
- Excellent SQL skills, including experience with various database technologies such as MySQL, PostgreSQL, MS SQL Server or Oracle
- Willingness to mentor and train junior staff
Tasks / project details:
- Maintaining the ETL stack
- Implementing data models and storage solutions for high-volume data in an accessible and scalable manner
- Building a centralized data warehouse that will serve as the single source of all relevant data in the long run
- Engineering ETL processes through the whole data pipeline using Python (Apache Airflow)
- Optimizing queries executed in distributed systems by improving query plans and considering data partitioning and compression
- Acting as an in-house consultant on how to best track, store and access data