Data Engineer Wanted: Build Powerful Data Pipelines That Drive Real Impact
We're on the hunt for a sharp Data Engineer to supercharge our data infrastructure. In this role, you'll be the architect behind robust, scalable data pipelines that power our advanced analytics and real-time processing capabilities.
Responsibilities
- Design and build secure, scalable data pipelines (ETL/ELT) using modern engineering tools and best practices.
- Implement both real-time and batch data processing solutions leveraging Kafka and Databricks.
- Build serverless data workflows and automation with Azure Functions.
- Partner closely with developers, testers, architects, and business teams to translate complex requirements into elegant solutions.
- Champion data quality, observability, security, and governance across our entire ecosystem.
- Help shape our cloud data platform's technical direction and architectural strategy.
Requirements
- 4+ years navigating the data engineering landscape.
- Strong Python programming skills and advanced SQL expertise.
- Proven track record with Apache Kafka in streaming or messaging architectures.
- Deep experience with Databricks and building robust Spark pipelines.
- Azure Functions expertise and comfort in the Azure ecosystem.
- Top-tier stakeholder communication skills – can seamlessly collaborate across teams.
- Bachelor's or Master's in Computer Science, Engineering, or a related technical discipline.
- Self-motivated and detail-oriented, thriving in fast-paced environments.
What we offer
- Flexible work arrangements.
- 20 fully paid personal recreation days annually.
- A collaborative team that actually supports each other.
- Truly competitive salary.
- A people-first HR and management approach.