As a Senior Data Engineer, you will play a key role in shaping data platforms that power AI-driven insights across industries. You'll work independently within a small, agile team to translate business challenges into robust, scalable data architectures. Your focus will be on designing and deploying end-to-end data solutions that support both traditional analytics and emerging AI applications.
Key Responsibilities
- Design and implement cloud-based data platforms using modern tools like Snowflake, Databricks, and dbt to support AI-ready analytics
- Build and maintain ELT/ETL pipelines that process structured, semi-structured, and unstructured data at scale
- Develop data models emphasizing semantic consistency, metrics layers, and knowledge graphs for AI consumption
- Write production-grade code in SQL, Python, and Spark, following software engineering best practices including version control and CI/CD
- Collaborate with data science and analytics teams to integrate machine learning features and inference pipelines
- Apply AI-assisted engineering techniques for data profiling, transformation, documentation, and lineage tracking
- Help shape internal patterns, accelerators, and reusable architectures that advance our data engineering practice
- Guide clients by translating business needs into technical data strategies using clear, effective communication
What We’re Looking For
- Degree in Computer Science, Engineering, Mathematics, or equivalent practical experience
- At least 3 years of hands-on experience with relational databases, data modeling, and query languages
- Proven track record building and maintaining production data pipelines
- Strong coding skills in Python or similar languages, plus familiarity with Git and CI/CD workflows
- Ability to work independently and lead technical delivery from design through deployment
- Clear communicator who can explain technical concepts to non-technical stakeholders
- Passion for modern data trends, including LLM-powered analytics, vector-based retrieval, and metadata-driven systems
Preferred Experience
- Familiarity with dbt Core or dbt Cloud
- Experience using AI tools like GitHub Copilot, Claude, or Snowflake Cortex for data engineering tasks
- Background in DevOps practices and cloud platforms (AWS, Azure, GCP)
- Hands-on work with Spark, containerization (Docker, Kubernetes), or cloud data warehouses
- Consulting experience or client-facing project delivery
Work Environment
This is a fully remote role with team members across the US and UK. Regional offices in Atlanta and London are available for those who prefer occasional in-person collaboration. You’ll have opportunities to earn certifications, work with cutting-edge data technologies, and contribute to evolving best practices in AI-forward data engineering.