About the Role
In this role, you will lead data engineering initiatives in a high-growth environment: building data pipelines, ensuring data quality, and guiding the team's best practices.
Responsibilities
- Lead the design and implementation of data architecture
- Develop and maintain scalable data pipelines
- Ensure data accuracy, reliability, and accessibility
- Collaborate with analytics and product teams
- Optimize data storage and query performance
- Mentor engineers and promote technical excellence
- Evaluate and integrate new data technologies
- Support data governance and compliance standards
- Drive automation of data workflows
- Troubleshoot production data issues
- Define and enforce engineering best practices
- Participate in capacity planning for data systems
- Monitor system performance and reliability
- Lead incident response for data platform outages
- Document architecture and system changes
- Work cross-functionally to understand data needs
- Improve data lineage and observability
- Implement monitoring and alerting systems
- Support migration to modern data stack components
- Ensure secure access to sensitive data
- Contribute to data modeling standards
- Evaluate cost-efficiency of data operations
- Lead technical reviews and system design discussions
- Promote reusability and modular data components
- Drive adoption of data quality frameworks
Nice to Have
- Master’s degree in a technical field
- Experience leading data teams
- Contributions to open-source data projects
- Familiarity with machine learning pipelines
- Experience with data mesh architectures
- Knowledge of Kubernetes and containerization
- Background in fintech or high-scale systems
- Certifications in cloud data technologies
Compensation
Competitive salary and benefits package
Work Arrangement
Remote with flexible hours
Team
Collaborative team focused on scalable data infrastructure
Our Tech Stack
- We use AWS for cloud infrastructure
- Data pipelines are built with Apache Airflow
- Snowflake is our primary data warehouse
- We stream data using Apache Kafka
- Terraform manages infrastructure as code
- We monitor with Datadog and Prometheus
- Python and SQL are core development languages
- We leverage dbt for transformation workflows
- GitLab is used for version control and CI/CD
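To give a concrete (purely illustrative) flavor of the data-quality work on this stack, here is a minimal Python sketch of a row-level validation gate that a pipeline task might run before loading records into the warehouse. All names (`validate_rows`, `REQUIRED_FIELDS`, the sample fields) are hypothetical examples, not actual company code.

```python
# Illustrative sketch only: a minimal data-quality gate of the kind a
# pipeline task might run before loading rows downstream.
# All names here are hypothetical, not part of any real codebase.

REQUIRED_FIELDS = {"event_id", "user_id", "ts"}

def validate_rows(rows):
    """Split rows into (valid, rejected) based on required, non-empty fields."""
    valid, rejected = [], []
    for row in rows:
        missing = REQUIRED_FIELDS - row.keys()
        if missing or any(row[f] in (None, "") for f in REQUIRED_FIELDS):
            rejected.append(row)
        else:
            valid.append(row)
    return valid, rejected

if __name__ == "__main__":
    sample = [
        {"event_id": "e1", "user_id": "u1", "ts": "2024-01-01T00:00:00Z"},
        {"event_id": "e2", "user_id": None, "ts": "2024-01-01T00:01:00Z"},
    ]
    ok, bad = validate_rows(sample)
    print(len(ok), len(bad))  # prints: 1 1
```

In practice a check like this would typically live inside an Airflow task or a dbt test rather than a standalone script; the sketch just shows the shape of the problem.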
Growth Opportunities
- Opportunities to shape data strategy
- Lead initiatives across multiple teams
- Mentor engineers across departments
- Influence tooling and platform decisions
- Present technical designs to leadership
- Grow into executive technical roles
- Contribute to company-wide data culture
- Lead adoption of emerging technologies
This position is open to qualified candidates.