Responsibilities
- Write and ship a lot of code.
- Work directly with analysts and stakeholders to refine requirements, nail down logic, and debug and validate produced data sets to ensure they meet the underlying business needs.
- Drive architectural decisions and thoughtfully balance trade-offs in system design.
- Lead cross-functional data initiatives.
- Champion high standards in data quality, security, and discoverability.
- Translate complex technical challenges into clear solutions.
- Design and develop high-performance data pipelines, adhering to the SDLC, including CI/CD and other best practices.
- Identify and drive opportunities to optimize and/or scale existing parts of our stack.
- Utilize tooling and automation to improve developer efficiency.
- Monitor data systems to ensure quality and availability while seeking to drive down costs.
- Contribute to team processes and community.
- Mentor other Data Engineers.
- Be part of our innovation and transformation story.
Requirements
- Strong experience with data warehousing, data lakes, ELT processes, and enterprise data platforms such as Snowflake (preferred), Redshift, or BigQuery.
- Experience with building performant data pipelines across disparate systems.
- Experience with cloud platforms such as AWS (preferred), GCP, or Azure.
- Mastery of SQL and experience with Python.
- Ability to work independently and collaboratively.
- Willingness and ability to be part of an on-call rotation, occasionally working non-business hours to address critical system alerts and maintain service uptime.
- An open mind and willingness to be flexible.
Nice to Have
- Experience with the items below is a plus; a desire to learn is a must.
- Experience with marketing data is preferred, but not required.
Additional Information
- Ability to travel up to 5 days per quarter for Together Weeks, team gatherings, and other events, when applicable.