About the Role
You will design and maintain robust data infrastructure that enables efficient data ingestion, transformation, and access across the organization.
Responsibilities
- Develop and scale data pipelines for high-volume, low-latency data streams
- Design storage solutions optimized for time-series and structured data
- Ensure data accuracy, consistency, and reliability across systems
- Collaborate with research and engineering teams to support data-driven models
- Optimize data workflows for performance and cost efficiency
- Implement monitoring and alerting for data pipeline health
- Support data governance and metadata management practices
- Troubleshoot and resolve data quality issues
- Evaluate and integrate new data technologies and tools
- Document system architecture and data flow designs
Nice to Have
- Experience with real-time data processing systems
- Background in financial data or quantitative research environments
- Familiarity with time-series databases
- Knowledge of data lineage and provenance tracking
- Experience with containerization and orchestration tools
- Contributions to open-source data projects
- Understanding of data privacy and access controls
- Exposure to machine learning workflows and feature stores
Compensation
Competitive salary and equity package
Work Arrangement
On-site with flexibility
Team
Collaborative team of engineers and researchers focused on data systems and machine learning
Why This Role Matters
- The work directly enables faster iteration on predictive models by ensuring timely access to high-quality data.
- Engineers in this role bridge the gap between raw data sources and advanced analytics systems.
Growth and Impact
- Opportunities to lead design decisions and mentor junior engineers.
- Visible impact on core infrastructure used by cross-functional teams.