About the Role
In this role, you will develop and maintain data pipelines, integrate systems across the stack, and ensure data is accurate and accessible for analytics.
Responsibilities
- Design and implement scalable data pipelines for ingestion and transformation
- Develop full-stack features with an emphasis on backend data services
- Collaborate with cross-functional teams to define data requirements
- Optimize database performance and query efficiency
- Ensure data consistency across distributed systems
- Build APIs to support data access and integration
- Maintain data quality and integrity through validation processes
- Support real-time and batch data processing workflows
- Monitor system performance and troubleshoot issues
- Document data architectures and engineering processes
- Implement security measures for data protection
- Work with cloud-based data storage and compute platforms
- Contribute to end-to-end testing of data systems
- Participate in code reviews and technical design discussions
- Improve data observability and monitoring tools
Nice to Have
- Experience with big data technologies such as Spark or Flink
- Familiarity with data warehousing solutions
- Knowledge of machine learning pipelines
- Exposure to DevOps practices in data environments
- Contributions to open-source data projects
Compensation
Competitive salary and benefits package
Work Arrangement
Hybrid work model with flexible scheduling
Team
Collaborative environment focused on building scalable data solutions
Technology Stack
- Primary languages include Python and JavaScript
- Backend services built with Node.js and Django
- Cloud infrastructure on Google Cloud Platform
- Data orchestration using Apache Airflow
- Monitoring via Prometheus and Grafana
Team Structure
- Engineers work in agile pods of 5–7 members
- Dedicated product managers and QA resources
- Regular sprint planning and retrospectives