Fluent, LLC is seeking a Senior Data & Automation Engineer to leverage Databricks and Spark expertise in building enterprise-grade data products. You will transform logical data models into optimized physical implementations and elevate standards for code quality, observability, and architecture design.
What You'll Do
- Design, build, and support scalable real-time and batch data pipelines using PySpark and Spark Structured Streaming on Databricks.
- Implement process automation and end-to-end workflows following Bronze → Silver → Gold architecture using Delta Lake best practices.
- Build event-driven ingestion with Kafka and integrate it into automated pipelines.
- Orchestrate workflows using Databricks Workflows/Jobs and CI/CD automation.
- Implement strong monitoring, observability, and alerting for reliability and performance.
- Collaborate cross-functionally in agile sprints with Product, Analytics, and Data Science teams.
- Translate enterprise logical data models into optimized physical and performance-tuned implementations.
- Write modular, version-controlled code in Git; contribute to code reviews and enforce quality standards.
- Implement robust logging, error handling, and data quality validation across automation layers.
- Use relevant AWS services (S3, IAM, Secrets Manager) and apply DevOps practices.
- Promote best practices through documentation, knowledge sharing, tech talks, and training.
What We're Looking For
- 5+ years of professional experience in data engineering, including Spark (PySpark) and SQL.
- 3+ years of hands-on experience building pipelines on Databricks (Workflows, Notebooks, Delta Lake).
- Deep understanding of Apache Spark distributed processing concepts and optimization.
- Strong experience with streaming architectures and Kafka.
- Familiarity with Databricks monitoring and observability tooling.
- Understanding of Lakehouse architecture, Unity Catalog, and governance principles.
- Proven proficiency in Git-based CI/CD workflows and automated deployment.
- Strong troubleshooting, optimization, and performance tuning skills.
- Experience designing and building large-scale, automated data pipelines.
Nice to Have
- Experience with schema management (Schema Registry) and data validation frameworks (Great Expectations, Deequ).
- Exposure to real-time ML systems and feature pipelines.
- Prior experience in startup or small agile teams.
- Familiarity with test-driven development in data engineering contexts.
Technical Stack
- PySpark, Spark Structured Streaming, Databricks, Delta Lake
- Kafka, SQL
- AWS (S3, IAM, Secrets Manager)
- Git, CI/CD
Team & Environment
This role partners closely with Data Architects, Data Scientists, and Product Managers.
Benefits & Compensation
- Competitive compensation: $90,000 to $100,000 CAD + Bonus
- Ample career and professional growth opportunities
- New Headquarters with an open floor plan to drive collaboration
- Health, dental, and vision insurance
- Pre-tax savings plans and transit/parking programs
- 401(k) with competitive employer match
- Volunteer and philanthropic activities throughout the year
- Educational and social events
Work Mode
This is a local role based in Ontario.
Fluent participates in the E-Verify Program and follows all federal regulations including those set forth by The Office of Special Counsel for Immigration-Related Unfair Employment Practices (OSC).