Odyssey is hiring a Member of Technical Staff, ML Performance. Your focus will be ensuring our models deliver exceptional speed, reliability, and scalability in both training and inference. You will optimize efficiency to minimize costs and build the infrastructure needed to support massive scale within a year.
What You'll Do
- Optimize models for real-time use by hundreds of thousands of users.
- Design and implement distributed training strategies to reduce training time and resource consumption on large GPU clusters.
- Partner with our elite team of ML researchers and engineers to ensure model architectures are highly performant from conception.
- Develop sophisticated tools to identify performance bottlenecks and stability issues in both training and serving environments.
- Pioneer new approaches, frameworks, and system designs that improve performance across our model development and inference infrastructure.
- Have significant autonomy in technical decisions.
- Use the latest-generation GPUs.
What We're Looking For
- 8+ years of software engineering experience, with significant work in ML performance.
- Deep understanding of modern machine learning architectures and strong instincts for performance optimization, particularly in distributed training and inference.
- Track record of owning projects end to end.
- A problem-solving mindset and the ability to pick up new skills as needed.
- Proficiency with PyTorch (or TensorFlow/JAX) and Triton, as well as the NVIDIA GPU ecosystem and optimization stack.
- A metrics-driven approach to performance work.
Technical Stack
- PyTorch (or TF/JAX)
- Triton
- NVIDIA GPU ecosystems and optimization stacks
Team & Environment
You will work closely with our team of ML researchers and engineers.
Odyssey is an equal opportunity employer.