About the Role
This role involves developing and managing data pipelines, ensuring data accuracy and accessibility, and working closely with analytics teams to support business intelligence goals on a modern, cloud-based data architecture.
Responsibilities
- Design and implement data storage solutions using Snowflake
- Develop and maintain ETL pipelines for data integration
- Optimize data workflows for performance and scalability
- Ensure data quality and consistency across systems
- Collaborate with analytics and product teams to understand data needs
- Support data governance and security standards
- Troubleshoot and resolve data-related issues
- Document data architecture and engineering processes
- Monitor data pipeline health and performance
- Participate in code reviews and technical design sessions
- Work with cloud infrastructure for data platform stability
- Implement automation for data processing tasks
- Assist in integrating third-party data sources
- Maintain metadata and data lineage records
- Contribute to agile project cycles and sprint planning
- Stay current with data engineering best practices
- Support testing and validation of data models
- Improve data accessibility for end users
- Collaborate on data warehouse design improvements
- Ensure compliance with data privacy policies
Nice to Have
- Experience with dbt (data build tool)
- Knowledge of data mesh or data fabric architectures
- Exposure to real-time data streaming technologies
- Certifications in cloud or data engineering platforms
- Prior work in higher education or research administration systems
Compensation
Competitive salary and benefits package
Work Arrangement
Hybrid work model with flexible in-office and remote options
About the Team
The team focuses on delivering robust data infrastructure to support institutional research and administrative systems, emphasizing scalability, security, and ease of access for stakeholders across departments.
Technology Stack
Primary tools include Snowflake, dbt, Airflow, AWS, Git, and SQL; the environment supports modern data engineering practices with an emphasis on automation and reproducibility.