About the Role
The role involves designing and maintaining foundational data platforms, enabling robust analytics across the organization through efficient data pipelines and architecture.
Responsibilities
- Design and implement scalable data storage solutions
- Develop and manage ETL workflows for large datasets
- Ensure data accuracy, consistency, and accessibility
- Collaborate with analytics teams to define data requirements
- Optimize data models for performance and usability
- Support governance and data quality initiatives
- Maintain documentation for data systems and processes
- Troubleshoot and resolve data pipeline issues
- Work with cloud-based data platforms and services
- Integrate data from multiple source systems
- Build automated testing for data integrity
- Monitor system performance and data latency
- Implement security standards for data access
- Drive improvements in data reliability and efficiency
- Partner with stakeholders to understand reporting needs
Nice to Have
- Experience in regulated industries
- Knowledge of data privacy standards
- Familiarity with containerization and orchestration
- Background in machine learning data pipelines
- Experience with data cataloging tools
- Understanding of event-driven architectures
- Exposure to infrastructure as code
- Contributions to open-source data projects
- Advanced degree in computer science or related field
Compensation
Competitive salary and benefits package
Work Arrangement
Hybrid work model with flexibility
Team
Collaborative data team focused on scalable solutions
Why This Role Matters
This position is central to building the organization's long-term data capabilities, directly influencing how data is collected, stored, and used across teams.
Growth and Development
Mentorship and project ownership create opportunities to lead technical initiatives and grow into architecture or team leadership roles.