About the Role
Design and implement scalable data solutions on Azure, building data pipelines, transformation workflows, and cloud infrastructure for enterprise clients.
Responsibilities
- Develop and maintain data pipelines using Azure Data Factory
- Design and optimize data storage solutions in Azure Blob Storage and Azure Data Lake
- Implement ETL processes for large-scale data ingestion and transformation
- Collaborate with analytics teams to support data modeling and warehouse structures
- Monitor data workflows for performance, reliability, and error handling
- Troubleshoot and resolve data integration issues across systems
- Ensure data quality and consistency across pipelines
- Support the deployment of data solutions in production environments
- Write and maintain technical documentation for data architecture and processes
- Integrate data from multiple sources including APIs, databases, and files
- Apply data security standards within Azure environments
- Work with stakeholders to understand reporting and analytics requirements
- Optimize query performance for data access layers
- Use version control for data pipeline code management
- Participate in code reviews and technical design sessions
- Automate routine data operations and monitoring tasks
- Support compliance with data governance policies
- Stay current with Azure platform updates and new features
- Assist in migrating on-premises data systems to Azure cloud
- Collaborate with DevOps for CI/CD integration of data workflows
- Implement monitoring and alerting for data pipeline health
- Use Azure Databricks and Azure Synapse Analytics for advanced data processing
- Design metadata management practices for data traceability
- Apply infrastructure-as-code principles using ARM templates or Bicep
- Contribute to data strategy and long-term platform planning
Nice to Have
- Microsoft Certified: Azure Data Engineer Associate certification
- Experience with Apache Spark in Azure Databricks
- Knowledge of Delta Lake architecture
- Exposure to real-time data streaming with Azure Event Hubs
- Experience using Azure Functions for data workflows
- Familiarity with Power BI data modeling
- Hands-on work with Azure Kubernetes Service
- Background in financial or healthcare data domains
- Experience with data lineage tools
- Knowledge of Terraform for Azure infrastructure
Compensation
Competitive salary based on experience, paid in local currency
Work Arrangement
Fully remote, Latin America time zones
Team
Collaborative engineering team delivering cloud data solutions for international clients
Application Process
- Submit resume and cover letter through company career portal
- Initial screening followed by technical assessment
- Two rounds of virtual interviews with engineering leads
- Final review and offer decision within two weeks of the final interview
Technology Stack
- Azure Data Factory
- Azure Databricks
- Azure Synapse Analytics
- Azure Blob Storage
- Azure Data Lake Storage
- Azure SQL Database
- Azure Monitor
- Azure DevOps
- Git
- Python
- PowerShell
- ARM Templates
- Bicep
- Databricks SQL
- REST APIs
Location
No office location; the role is fully remote and open to candidates in Latin America