Cint is seeking a Senior BI & Data Architect for a six-month fixed-term contract to lead the technical evolution of their data platform. This hands-on role involves designing and implementing a Databricks Lakehouse, migrating legacy SQL into governed pipelines, and building a semantic layer for self-service analytics in Omni.
What You'll Do
- Design and implement the Unity Catalog structure — Catalogs, Schemas, and Volumes — to create a governed, secure, and well-documented data environment that serves as a Single Source of Truth across the organization.
- Lead the migration of complex business logic from legacy systems into a unified Databricks Lakehouse, refactoring tightly coupled SQL into modular, maintainable, and performant code.
- Architect our internal transformation framework using Databricks-native and open-source tooling (Delta Live Tables or custom Python/SQL Spark pipelines), building scalable pipelines without reliance on third-party managed SaaS platforms.
- Serve as the resident query performance expert — analyze Spark execution plans and Spark UI to diagnose bottlenecks, reduce data skew, and optimize join strategies on large-scale datasets.
- Govern our Databricks compute footprint through strategic application of Z-Ordering, Liquid Clustering, partition design, and Serverless SQL Warehouse configurations to maximize performance per dollar.
- Build and maintain CI/CD pipelines (GitHub Actions or equivalent) to automate testing, validation, and deployment of data models.
- Architect the semantic layer in Omni — designing data models built for self-service reporting with sub-second dashboard latency.
- Occasionally take on the BI Developer role, building executive-level dashboards that surface clear, actionable narratives from complex datasets.
- Partner with cross-functional stakeholders across Finance, Sales, Product, Marketing, and Trust & Safety to translate business questions into scalable data solutions.
- Translate performance and cost metrics into clear recommendations for senior leadership, balancing engineering rigor with business impact.
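To make the governance and layout-optimization duties above concrete, here is a minimal Databricks SQL sketch. The catalog, schema, table, and group names are hypothetical placeholders, not Cint's actual environment:

```sql
-- Hypothetical three-level Unity Catalog namespace: catalog.schema.table
CREATE CATALOG IF NOT EXISTS analytics
  COMMENT 'Governed single source of truth for BI';

CREATE SCHEMA IF NOT EXISTS analytics.finance
  COMMENT 'Curated finance marts';

-- Liquid Clustering is declared at table creation (CLUSTER BY),
-- replacing rigid partition schemes
CREATE TABLE IF NOT EXISTS analytics.finance.revenue (
  event_date DATE,
  account_id STRING,
  amount     DECIMAL(18, 2)
)
CLUSTER BY (event_date, account_id);

-- For existing tables, Z-Ordering colocates related rows within files
-- so queries filtering on these columns scan less data
OPTIMIZE analytics.finance.legacy_revenue
ZORDER BY (account_id);

-- Governance: grant read access at the schema level
GRANT SELECT ON SCHEMA analytics.finance TO `bi-analysts`;
```

Note that Z-Ordering and Liquid Clustering are alternatives for a given table, not complements, which is why the sketch applies them to different tables.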
What We're Looking For
- Experience in designing and implementing Unity Catalog structures (Catalogs, Schemas, Volumes) in Databricks.
- Proven ability to lead migration of complex business logic from legacy systems into modern data platforms like Databricks Lakehouse.
- Strong expertise in refactoring tightly coupled SQL into modular, maintainable, and performant code.
- Hands-on experience building data transformation frameworks with Delta Live Tables or custom Python/SQL Spark pipelines.
- Deep understanding of Spark performance optimization including analysis of execution plans, Spark UI, data skew reduction, and join strategy optimization.
- Experience governing Databricks compute resources using Z-Ordering, Liquid Clustering, partitioning strategies, and Serverless SQL Warehouses.
- Experience building and maintaining CI/CD pipelines using GitHub Actions or equivalent tools for data model deployment.
- Ability to architect semantic layers for self-service BI tools with emphasis on low-latency reporting (e.g., sub-second dashboard response).
- Experience building executive-level dashboards that communicate actionable insights from complex data.
- Proven ability to collaborate with stakeholders across Finance, Sales, Product, Marketing, and Trust & Safety to turn business questions into data solutions.
- Skill in translating technical performance and cost metrics into strategic recommendations for senior leadership.
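As one illustration of the join-strategy optimization skills listed above, a broadcast hint can replace a shuffle join with a map-side join when one side is small. Table and column names below are hypothetical:

```sql
-- Hypothetical tables: facts.events is large, dims.country is small.
-- The BROADCAST hint ships the small table to every executor instead
-- of shuffling the large side; confirm the chosen strategy with
-- EXPLAIN and the Spark UI's SQL tab.
EXPLAIN FORMATTED
SELECT /*+ BROADCAST(c) */
       e.event_id,
       c.country_name
FROM facts.events e
JOIN dims.country c
  ON e.country_code = c.country_code;
```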
Technical Stack
Databricks Lakehouse, Unity Catalog, Delta Live Tables, Python, SQL, Apache Spark, Omni, GitHub Actions, Z-Ordering, Liquid Clustering, Serverless SQL Warehouse, CI/CD pipelines
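The CI/CD portion of this stack could take a shape along these lines. This is a hedged sketch only; the workflow name, paths, and tool choices (sqlfluff, ruff, pytest) are illustrative assumptions, not an existing Cint pipeline:

```yaml
# .github/workflows/data-models.yml (hypothetical)
name: validate-data-models

on:
  pull_request:
    paths:
      - "pipelines/**"
      - "models/**"

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Install dev dependencies
        run: pip install -r requirements-dev.txt
      - name: Lint SQL and Python
        run: |
          sqlfluff lint models/
          ruff check pipelines/
      - name: Run unit tests
        run: pytest tests/
```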