Responsibilities
- Data pipelines & integration
- Data Lakehouse ownership
- Monitoring & optimisation
- Cross-functional collaboration
- Staying ahead of industry trends
Requirements
- degree in Computer Science, Engineering, or Information Systems, or bootcamp experience, with at least 5 years of professional experience in data engineering
- proficiency in designing, implementing and optimising ETL processes and orchestrating ETL workflows in production environments
- hands-on experience with Databricks and AWS services such as S3, Kinesis, and Lambda, or with similar technologies
- high proficiency in Python, SQL, and PySpark for data manipulation and processing, with the ability to write clean, maintainable, production-ready code
- excellent problem-solving skills, strong attention to detail, and the ability to work independently and collaboratively
- fluency in English, both spoken and written
Nice to Have
- Experience with GitLab CI/CD pipelines and familiarity with PostgreSQL
Benefits
- Transport subsidy
- Learning budget
- Wellness and gym
- Workation
- Bi-weekly team lunches
Additional Information
- Please note: this is not a remote-only position. We offer a flexible hybrid model here in Hamburg, Germany, working from home on Mondays & Fridays and coming to the office on Tuesdays, Wednesdays & Thursdays!
