Role Overview
This position is for a seasoned data engineer to lead the architecture and implementation of scalable data solutions. You will be central to developing high-performance data pipelines, ETL workflows, and integration systems that support enterprise-level operations.
Key Responsibilities
- Design, develop, and maintain complex data pipelines to ensure reliable data flow across systems
- Build and optimize ETL processes for efficiency, scalability, and data integrity
- Manage end-to-end data integration across cloud and on-premises platforms
- Collaborate with cross-functional teams to define data requirements and implement solutions
- Monitor system performance and apply tuning strategies to enhance query speed and reduce costs
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related technical field
- At least 10 years of hands-on experience in data engineering roles
- Five or more years of professional experience with Microsoft SQL Server (MSSQL)
- Two or more years of commercial experience using a major cloud provider, with AWS preferred
- Experience serving as a DBA or in a database management capacity within development teams
- Demonstrated expertise in building and maintaining complex data pipelines and integrations
- Strong command of Snowflake, including architecture, optimization, and best practices
- Proficiency in SQL and Python for data transformation and automation
- Proven ability to diagnose and resolve intricate data engineering issues
- Excellent communication skills and a collaborative mindset
Preferred Qualifications
- Master’s degree in a relevant field
- Recognized data engineering certifications
Technology Environment
Core tools and platforms include Snowflake, Microsoft SQL Server (MSSQL), SQL, Python, and AWS. You should be comfortable navigating this ecosystem and optimizing workflows within it.