About the Role
The engineer will build and optimize data infrastructure for reliability, scalability, and performance, enabling data-driven decision-making across teams.
Responsibilities
- Design and implement robust data pipelines for ingestion and processing
- Ensure data accuracy, consistency, and availability across systems
- Collaborate with data scientists and analysts to support analytics needs
- Optimize data storage solutions for cost and performance
- Maintain data platform reliability and uptime
- Monitor system performance and troubleshoot issues
- Develop tools for data observability and monitoring
- Support real-time data streaming architectures
- Improve data security and access controls
- Work with distributed systems and cloud infrastructure
- Automate operational workflows for efficiency
- Document data platform architecture and processes
- Integrate third-party data sources securely
- Scale infrastructure to meet growing data demands
- Ensure compliance with data governance policies
Nice to Have
- Experience with real-time streaming platforms
- Knowledge of machine learning pipeline requirements
- Familiarity with data governance frameworks
- Contributions to open-source data projects
- Experience in fintech or fraud detection domains
Compensation
Competitive salary and equity package
Work Arrangement
Hybrid or remote options available
Team
Collaborative engineering team focused on data infrastructure
Our Data Stack
- We use Apache Kafka for event streaming, BigQuery for analytics, and Terraform for infrastructure as code
- Our platform runs on Google Cloud Platform with Kubernetes orchestration
Impact You’ll Make
- You will directly influence the scalability and reliability of our core data infrastructure
- Your work will enable faster insights and better model training for machine learning teams