About the Role
The role involves building and maintaining core components of the data platform, optimizing data flow between systems, and ensuring data reliability across the organization.
Responsibilities
- Design and implement scalable data processing systems
- Develop and maintain ETL pipelines for large-scale data ingestion
- Ensure data accuracy, consistency, and availability across platforms
- Collaborate with data scientists and analysts to understand requirements
- Improve data observability and monitoring capabilities
- Optimize data storage and query performance
- Support real-time and batch data processing workflows
- Contribute to architecture decisions for data infrastructure
- Maintain data security and compliance standards
- Troubleshoot and resolve data pipeline issues
- Evaluate and integrate new data technologies
- Document system designs and operational procedures
- Participate in code reviews and ensure code quality
- Mentor junior engineers and share technical knowledge
- Drive automation in data operations and deployment processes
Nice to Have
- Experience with Apache Airflow or similar workflow tools
- Knowledge of data governance and metadata management
- Familiarity with machine learning data pipelines
- Contributions to open-source data projects
- Experience in high-growth technology environments
Compensation
Competitive salary and benefits package
Work Arrangement
Hybrid work model with flexibility for remote and office-based work
Team
Collaborative engineering team focused on scalable data systems
About the Team
The data platform team operates at the core of the organization’s data strategy, enabling data-driven decision-making across departments. Engineers work closely with stakeholders to deliver resilient, efficient, and scalable data solutions.
What We Value
Technical excellence, ownership, collaboration, and continuous learning are central to our engineering culture. We prioritize sustainable development practices and inclusive team dynamics.