About the Role
Build and optimize core components of a distributed search engine used by thousands of customers worldwide, with a focus on scalability, reliability, and performance.
Responsibilities
- Develop and maintain core search algorithms and indexing systems
- Improve query processing speed and result relevance
- Collaborate with product teams to integrate search features
- Monitor system performance and troubleshoot production issues
- Design scalable backend services handling large data volumes
- Optimize resource usage across distributed systems
- Contribute to fault-tolerant infrastructure design
- Implement automated testing for search functionality
- Enhance data ingestion pipelines for real-time updates
- Support observability through logging and metrics
- Refine search ranking logic based on user behavior
- Work with large-scale data storage solutions
- Participate in code reviews and system design discussions
- Ensure system reliability under heavy traffic loads
- Assist in defining technical roadmaps for search features
- Integrate feedback from customer usage patterns
- Maintain documentation for internal systems
- Collaborate on security best practices for data handling
- Support deployment of new search capabilities
- Evaluate performance trade-offs in system design
- Contribute to disaster recovery planning
- Improve developer tooling for search development
- Work with geographically distributed engineering teams
- Drive improvements in system uptime and response times
- Participate in on-call rotations for critical systems
Nice to Have
- Experience with Elasticsearch or similar search platforms
- Background in full-stack development
- Knowledge of machine learning applications in search
- Experience with high-traffic production environments
- Contributions to open-source search projects
- Familiarity with natural language processing
- Experience with A/B testing frameworks
- Understanding of relevance evaluation metrics
- Knowledge of vector search or embedding models
- Experience with data sharding and replication
- Background in low-latency system design
- Familiarity with caching strategies
- Experience with query optimization
- Knowledge of Unicode and text processing
- Understanding of multilingual search challenges
Compensation
Competitive salary based on experience and location
Work Arrangement
Hybrid work model with office and remote flexibility
Team
Part of a global engineering team focused on core search infrastructure
Tech Stack
- C++ and Rust for core search components
- Kubernetes for container orchestration
- gRPC for inter-service communication
- Prometheus and Grafana for monitoring
- Apache Kafka for event streaming
- ZooKeeper for coordination services
- Bazel for build automation
- Docker for containerization
- Google Cloud Platform for infrastructure
- Protobuf for data serialization
Impact
- Systems serve billions of queries daily
- Median query latency under 50 ms
- Support for thousands of customer implementations
- Real-time indexing with sub-second propagation
- High availability across global regions
- Scalable architecture handling traffic spikes
- Consistent relevance improvements
- Efficient resource utilization at scale
- Secure handling of sensitive data
- Reliable disaster recovery processes
Open to qualified candidates in select regions