Job Title: Data Engineer
Company: Arbor
Role Overview
The Data Engineer will own the data infrastructure that powers Arbor's intelligence layer, serving as the connective tissue between production systems and business-critical insights. This high-impact, high-ownership role involves building and maintaining end-to-end data pipelines, transforming complex energy and customer data into reliable assets, and enabling data-driven decisions across pricing, marketplace performance, and customer outcomes.
Responsibilities
- Build and maintain data pipeline infrastructure, spanning ingestion via Fivetran and custom pipelines from GCP production systems into Snowflake
- Own the dbt transformation layer end-to-end, modeling energy market data, customer lifecycle events, marketplace results, and utility rate feeds
- Create clean, reliable, and well-documented data assets
- Partner with engineering, operations, and leadership to deliver analytics that inform business decisions including rate monitoring, supplier pricing trends, customer switching patterns, and marketplace performance
- Build and maintain dashboards in Hex to surface actionable intelligence for non-technical stakeholders
- Help define data contracts and schema standards to ensure trustworthiness of the Snowflake environment as the company scales
- Play a meaningful role in shaping how AI is used in data workflows, including automating data quality monitoring, accelerating development with AI-assisted SQL and Python, and surfacing anomalies in complex regulatory data feeds
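To make the "automating data quality monitoring" and "surfacing anomalies" responsibilities concrete, here is a minimal, illustrative sketch of the kind of check a data engineer in this role might write. It uses only the Python standard library; the function name, threshold, and sample values are hypothetical, not part of Arbor's actual tooling.

```python
import statistics

def flag_anomalies(rates, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean.

    `rates` is a list of numeric readings (e.g. daily utility rates).
    Returns a list of (index, value) pairs considered anomalous.
    """
    mean = statistics.mean(rates)
    stdev = statistics.stdev(rates)
    if stdev == 0:
        return []  # a constant series has no outliers
    return [
        (i, r) for i, r in enumerate(rates)
        if abs(r - mean) / stdev > threshold
    ]

# Example: a stable rate series with one implausible spike
rates = [0.12, 0.13, 0.12, 0.11, 0.13, 0.12, 9.99, 0.12]
anomalies = flag_anomalies(rates, threshold=2.0)  # flags the 9.99 reading
```

In practice a check like this would run inside a dbt test or a scheduled pipeline step against Snowflake data rather than on an in-memory list.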
Requirements
- 3–6+ years of experience in a data engineering or analytics engineering role, ideally at a high-growth startup or in a domain involving complex, real-time data (energy, fintech, marketplace, or similar)
- Strong dbt fundamentals, with a focus on designing maintainable models and treating testing, documentation, and downstream consumers as core responsibilities
- Solid SQL skills
- Proficiency in Python for pipeline development and data quality tooling
- Hands-on experience with Snowflake, including schema design, query performance, and understanding cost/performance tradeoffs of data structure and access
- Comfort with GCP data services (BigQuery, Cloud Storage, Pub/Sub)
- Experience building dashboards for business stakeholders in tools like Hex or Looker, with an understanding of the difference between a dashboard that gets used and one that doesn't
- Genuine curiosity about electricity pricing, competitive markets, and the grid
- An AI-native approach to personal productivity, using AI tools to accelerate development
Tech Stack
dbt, Snowflake, Fivetran, GCP, BigQuery, Cloud Storage, Pub/Sub, Hex, SQL, Python
Benefits
- Competitive salary
- Meaningful equity
- Benefits package
Location
Hybrid. Flexible on location, but the team values regular in-person collaboration.
Company Culture
- AI-first development mindset
- Move fast
- High ownership
- Expectation to bring initiative in using AI tools for productivity