About the Role
You will build and optimize robust data infrastructure for secure data processing, ensuring high performance, reliability, and compliance with privacy standards across distributed systems.
Responsibilities
- Design and implement scalable data processing workflows
- Ensure data integrity and consistency across pipeline stages
- Collaborate with data scientists and security experts
- Optimize data transfer and transformation performance
- Maintain and improve existing data infrastructure
- Troubleshoot and resolve pipeline failures promptly
- Integrate new data sources into existing systems
- Support secure data handling practices
- Monitor system performance and data quality
- Document architecture and operational procedures
- Contribute to system reliability and uptime
- Work with containerized environments and orchestration tools
- Implement automated testing for data workflows
- Ensure compliance with data protection standards
- Participate in code and design reviews
Nice to Have
- Experience with confidential computing technologies
- Familiarity with zero-knowledge proofs or cryptographic protocols
- Knowledge of healthcare or financial data regulations
- Contributions to open-source data projects
- Experience in agile or remote-first teams
Compensation
Competitive, based on experience and location
Work Arrangement
Remote, with options for Zurich or Berlin office presence
Team
Cross-functional engineering and data science teams
Why This Role Matters
- You will help shape infrastructure that enables secure data collaboration across organizations
- Your work directly impacts how sensitive data is processed without compromising privacy
Tech Stack
Python, Apache Airflow, Kubernetes, Docker, AWS, GCP, Protobuf, gRPC
This position is not available on a freelance basis.