Role Summary
Join us as a freelance Senior Data Pipeline Engineer during a critical growth phase. You'll play a central role in shaping and maintaining scalable data infrastructure, focusing on real-world AdTech applications. This is a hands-on position with full ownership of pipeline development, optimization, and production operations.
Key Responsibilities
- Design, implement, and maintain end-to-end data pipelines using Apache Spark
- Lead the development and deployment of machine learning models for audience and demographic analysis
- Optimize Spark workloads through profiling, benchmarking, and performance tuning
- Integrate and advocate for modern tooling, including LLM-powered coding assistants and automated testing frameworks
- Strengthen orchestration and monitoring practices across the data platform
- Collaborate with team members by documenting and sharing effective workflows
- Ensure systems remain maintainable and adaptable to future technical demands
What You Bring
- Degree in Computer Science, Data Engineering, or a related field, plus 5+ years of relevant experience
- Deep expertise in Python and Spark (PySpark or Scala)
- Proven experience delivering robust, production-level data pipelines with strong validation mechanisms
- Hands-on background managing Spark environments and workflow schedulers such as Airflow or Dagster
Nice to Have
- Experience with the full ML lifecycle and deploying models into production
- Familiarity with audience segmentation, lookalike modeling, or similar AdTech techniques
- Proficiency in Rust, particularly in performance-sensitive contexts
Work Environment
This role supports flexible arrangements: fully remote within ±4 hours of Central European Time, or on-site in Zurich or Berlin. You'll work alongside a distributed European team, contributing directly to platform evolution with room for technical leadership.
Compensation & Benefits
- Competitive freelance rates
- Opportunity to lead initiatives, not just execute tasks
- Collaborative, technically strong team culture
