About the Role
This role centers on developing and managing data pipelines, ensuring data accuracy, and supporting analytics needs across the organization through robust data architecture and engineering practices.
Responsibilities
- Design and implement data pipelines for efficient data flow
- Build and maintain scalable data storage solutions
- Ensure data quality and integrity across systems
- Collaborate with analysts and engineers on data needs
- Optimize data processing workflows for performance
- Support business intelligence and reporting tools
- Integrate data from multiple source systems
- Monitor data pipeline health and resolve issues
- Document data models and system architecture
- Apply data governance and security standards
- Work with cloud-based data platforms
- Troubleshoot data discrepancies and errors
- Improve data accessibility for end users
- Automate routine data engineering tasks
- Participate in agile development cycles
Nice to Have
- Experience with Apache Spark or similar frameworks
- Knowledge of streaming data platforms like Kafka
- Familiarity with data orchestration tools such as Airflow
- Experience in agile project environments
- Understanding of machine learning pipelines
Benefits
- Health and wellness coverage
- Retirement savings plan
- Paid time off and holidays
- Professional development opportunities
- Flexible work scheduling
- Employee assistance program
- Inclusive and diverse workplace culture
Compensation
Competitive salary based on experience
Work Arrangement
Hybrid work model with flexible remote options
Team
Collaborative team focused on data solutions and digital transformation
Our Approach
We focus on delivering data solutions that align with client goals using modern technologies and best practices in software engineering and data management.
Growth Opportunities
Engineers are encouraged to grow their skills through mentorship, training, and involvement in diverse projects across industries.