About the Role
You will design and maintain scalable backend services that ingest data at large scale, ensuring reliability, performance, and smooth integration across platforms.
Responsibilities
- Develop and optimize backend services for ingesting large volumes of data
- Ensure data pipelines are resilient, scalable, and fault-tolerant
- Collaborate with data engineers to define ingestion protocols
- Monitor system performance and troubleshoot production issues
- Implement robust error handling and retry mechanisms
- Work with distributed systems to ensure data consistency
- Design APIs for internal and external data exchange
- Improve data processing efficiency and reduce latency
- Support schema evolution and versioning strategies
- Integrate with third-party data sources securely
- Maintain comprehensive logging and observability
- Write clean, testable, and well-documented code
- Participate in code reviews and system design discussions
- Ensure compliance with data privacy standards
- Optimize database queries and storage patterns
- Contribute to disaster recovery planning
- Automate operational workflows and deployment pipelines
- Evaluate new technologies for data processing
- Support on-call rotations for critical systems
- Mentor junior engineers on best practices
- Drive improvements in system monitoring and alerting
- Collaborate with product teams to understand data needs
- Ensure backward compatibility during system upgrades
- Conduct root cause analysis for data delivery failures
- Meet service-level agreements (SLAs) for uptime and latency
Nice to Have
- Experience with large-scale data ingestion systems
- Prior work with real-time streaming platforms
- Familiarity with data warehouse architectures
- Knowledge of change data capture techniques
- Experience with Avro or Parquet formats
- Background in observability and tracing
- Contributions to open-source data projects
- Understanding of GDPR or similar regulations
- Experience with Terraform or infrastructure as code
- Experience working in fast-paced startup environments
Compensation
Competitive salary with equity and benefits
Work Arrangement
Hybrid remote with team hubs
Team
Collaborative engineering team focused on data infrastructure
Tech Stack
- Primary language: Go
- Cloud infrastructure: AWS
- Message brokers: Kafka
- Databases: PostgreSQL, Redis
- Container orchestration: Kubernetes
- CI/CD: GitHub Actions
- Monitoring: Prometheus, Grafana
- Infrastructure as code: Terraform
- Logging: ELK stack (Elasticsearch, Logstash, Kibana)
Culture & Values
- Emphasis on ownership and accountability
- Data-driven decision-making
- Transparent communication across teams
- Continuous learning and improvement
- Respect for work-life balance
- Inclusive and collaborative environment
- Focus on long-term system sustainability
This position is open to qualified candidates.