Responsibilities
- Developing and maintaining ETL/ELT pipelines with SQL and Spark/PySpark on Microsoft Fabric and/or Databricks
- Implementing data cleansing, validation, and transformation logic
- Building and maintaining medallion/lakehouse architectures using Microsoft Fabric or Delta Lake
- Integrating data from multiple enterprise systems
- Preparing analytics-ready datasets for BI reporting and downstream use cases
- Optimizing performance of pipelines, transformations, and data models
- Supporting data platform modernization and migration from legacy ETL solutions
- Working in Agile teams with architects, analysts, and customer stakeholders
- Contributing to engineering best practices, reusable patterns, and knowledge sharing
Requirements
- 2+ years of experience in Data Engineering or similar roles
- Strong programming skills in SQL and Python/PySpark
- Familiarity with Microsoft Fabric, Databricks, or a similar data platform
- Experience with Azure or AWS data services
- Experience building ETL/ELT pipelines and data integration solutions
- Understanding of data lakes, data warehouses, and data modeling
- Experience with Git and modern development practices
- Ability to work in Agile teams and a consulting environment
- Excellent spoken and written English language skills
Nice to Have
- Hands-on experience with Microsoft Fabric, Databricks, Delta Lake, or Snowflake
- Experience with data platform modernization or ETL migration projects
- Experience with Power BI or other analytics tools
- Familiarity with Delta Lake or similar storage formats
- Experience working in international environments
- Azure or Databricks certifications