About the Role
This role focuses on developing and maintaining data infrastructure, translating business requirements into reliable data solutions, and enabling self-service analytics through well-documented datasets and reporting tools.
Responsibilities
Data modeling and pipelines
- Develop and optimize data models to support analytics and reporting needs
- Build and maintain ETL pipelines for reliable data ingestion and transformation
- Design and implement reusable data transformation logic
- Integrate new data sources into the analytics ecosystem
- Refactor legacy data workflows for scalability and clarity
- Monitor pipeline performance and ensure uptime
Data quality and governance
- Ensure data quality through testing, monitoring, and validation processes
- Troubleshoot data issues and resolve root causes efficiently
- Ensure data consistency across reporting platforms
- Participate in data governance and compliance initiatives
Collaboration and analytics support
- Collaborate with data scientists and analysts to define data requirements
- Work with stakeholders to understand reporting and metric needs
- Collaborate on defining key performance indicators
- Support business intelligence tools with structured, accessible datasets
- Support dashboard development with clean, reliable data
- Improve data accessibility and reduce query latency
Engineering practice and documentation
- Use SQL extensively for data manipulation and analysis
- Apply software engineering principles to data workflows
- Maintain version control for data transformation code
- Participate in agile development cycles
- Document data models, pipelines, and workflows for team reference
- Contribute to the improvement of data documentation standards
- Evaluate and recommend data tooling improvements
- Assist in onboarding team members to data systems
- Maintain awareness of analytics best practices
Nice to Have
- Experience with dbt in a production environment
- Familiarity with cloud infrastructure providers
- Background in software engineering or computer science
- Experience with data observability tools
- Knowledge of data lineage and metadata management
- Exposure to machine learning workflows
- Previous work in regulated industries
- Contributions to open-source data projects
- Experience mentoring junior team members
- Understanding of privacy-preserving data practices
Compensation
Competitive salary based on experience and location
Work Arrangement
Hybrid work model with flexible scheduling
Team
Collaborative data team working closely with analytics, engineering, and product groups
Our Data Stack
- We use Snowflake as our primary data warehouse
- dbt is central to our transformation layer
- Looker is our main business intelligence platform
- Data ingestion is managed through Airflow and Fivetran
- We host on Google Cloud Platform
- GitLab is used for version control and CI/CD
- We enforce code reviews and testing standards
- Data documentation is maintained in dbt and internal wikis
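To make the stack above concrete, here is a minimal sketch of what a model in our dbt transformation layer could look like. The model path, source, and column names (`raw.orders`, `order_total`, and so on) are illustrative placeholders, not actual tables in our warehouse.

```sql
-- models/marts/fct_daily_orders.sql
-- Illustrative example only: the source and column names below are
-- placeholders, not real tables from our warehouse.
-- Aggregates raw order events into a daily reporting table.

select
    order_date,
    count(*)         as order_count,
    sum(order_total) as gross_revenue
from {{ source('raw', 'orders') }}
where order_status != 'cancelled'
group by order_date
```

Models like this are materialized in Snowflake by dbt, covered by schema tests (for example `not_null` and `unique`), and exposed to Looker, with Airflow and Fivetran handling orchestration and ingestion upstream.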
Growth Opportunities
- Opportunities to lead data design initiatives
- Mentorship from senior data professionals
- Access to training and conference budgets
- Pathways to technical or team leadership roles
- Cross-functional project involvement
- Regular feedback and performance reviews
- Internal mobility across data disciplines