What You'll Do
Take ownership of building and evolving the data infrastructure behind large-scale data products. You'll work directly with cloud-native tools to design systems that ingest, process, and deliver insights from rapidly growing datasets at rates of thousands of events per second.
Partner closely with product and engineering stakeholders to shape the long-term vision for our data platforms. You'll implement robust pipelines on modern orchestration and processing frameworks, ensuring data accuracy, observability, and operational efficiency.
Champion best practices in automation, testing, version control, and CI/CD to maintain high reliability across systems. Your work will directly influence the scalability and performance of data solutions that serve critical business functions.
Requirements
- Minimum of 5 years in data engineering roles with a focus on scalable data systems
- Proven background in data warehouse and data lake architectures
- Hands-on experience with AWS or GCP cloud platforms
- Strong proficiency in Python and SQL for developing and maintaining automated pipelines
- Familiarity with data processing tools such as Spark, Athena, or Pandas
- Working knowledge of orchestration platforms, particularly Airflow
- Understanding of relational database design and cloud data warehouses like Snowflake or Redshift
Benefits
You'll work in a technically driven environment that values autonomy, evidence-based decision-making, and open collaboration. The team operates with minimal hierarchy, allowing engineers to lead initiatives and contribute directly to strategic outcomes. There's strong support for continuous learning, especially in emerging data technologies and methodologies.
