Responsibilities
- Build and maintain an ecosystem where engineers can safely and efficiently develop, debug, and operate their services running on GCP and Kubernetes, using Dataflow, Dataproc, Python, and Go
- Ensure services have a high level of observability, enabling the team to provide a quality service to customers
- Ensure services can scale vertically and horizontally based on current load, using operational and telemetry data (OTel, Prometheus, VictoriaMetrics)
- Ensure the team has sufficient insight into service health (Grafana, alerting, PagerDuty)
- Help the team fulfill security requirements for ISO and SOC 2 audits by enforcing security principles such as key distribution and rotation, service-level authentication & authorisation, data encryption in transit, data isolation, resource limits, quality of service, and audit logs (mainly via Envoy proxies)
- Contribute to tooling so that tools are in place for debugging, troubleshooting, and performance testing
- Automate manual and semi-manual deployment and instance-setup steps
- Provide hands-on L3 support and incident resolution
- Ensure CI pipelines include linters, security scans, and code-smell detection, enabling engineers to produce quality MRs
Work Arrangement
Hybrid
Team
Structure: The Data Pipeline team is a backend-focused engineering team built on strong DevOps principles. The team is growing and currently has one SRE/DevOps engineer; this role will onboard a second to form an effective duo.
Additional Information
- Flexible working hours to accommodate your working style
- Remote-first environment
- 24/7 on-call rotation required