Nvidia is hiring a Senior AI-HPC Cluster Engineer - MLOps to provide leadership on large-scale HPC system management and develop the ecosystem around GPU-accelerated computing. You will build innovative tooling to accelerate researchers' velocity and support their workloads.
What You'll Do
- Provide leadership and strategic mentorship on the management of large-scale HPC systems, including the deployment of compute, networking, and storage.
- Develop and improve our ecosystem around GPU-accelerated computing including developing scalable automation solutions.
- Build and nurture relationships with customers and partner teams to support the clusters and address evolving user needs.
- Support our researchers in running their workloads, including performance analysis and optimization.
- Conduct root cause analysis and recommend corrective actions; proactively find and fix issues before they impact users.
- Build innovative tooling to accelerate researchers' velocity, troubleshooting, and software performance at scale.
What We're Looking For
- Bachelor's degree in Computer Science, Electrical Engineering, or a related field, or equivalent experience.
- Minimum of 6 years of experience building and operating large-scale compute infrastructure.
- Experience with AI/HPC job schedulers and orchestrators, such as Slurm, K8s or LSF.
- Applied experience with AI/HPC workflows that use MPI and NCCL.
- Proficiency with Linux, including CentOS/RHEL and/or Ubuntu distributions.
- A solid understanding of container technologies like Enroot, Docker and Podman.
- Proficiency in at least one scripting language (e.g., Python, Bash) and at least one compiled language (e.g., Golang, Rust, C, C++).
- Experience analyzing and tuning performance for a variety of AI/HPC workloads.
- Excellent problem-solving skills to analyze complex systems, identify bottlenecks, and implement scalable solutions.
- Excellent communication and teamwork skills, with the ability to work effectively with diverse teams and individuals.
- Passion for continual learning and staying current with new technologies and effective approaches in the HPC and AI/ML infrastructure fields.
Nice to Have
- Experience with NVIDIA GPUs, CUDA Programming, NCCL and MLPerf benchmarking.
- Experience with Machine Learning and Deep Learning concepts, algorithms and models.
- Familiarity with High-Speed Networking pertaining to HPC including InfiniBand, RDMA, RoCE and Amazon EFA.
- Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads.
- Experience working with deep learning frameworks including PyTorch, MegatronLM and TensorFlow.
- Familiarity with metrics collection and visualization at scale with Prometheus, OpenSearch and Grafana.
Technical Stack
- Job Schedulers: Slurm, K8s, LSF
- HPC Workflows: MPI, NCCL
- Operating Systems: Linux (CentOS/RHEL, Ubuntu)
- Containers: Enroot, Docker, Podman
- Programming Languages: Python, Bash, Golang, Rust, C, C++
- NVIDIA Ecosystem: NVIDIA GPUs, CUDA, MLPerf
- Networking: InfiniBand, RDMA, RoCE, Amazon EFA
- Storage: Lustre, GPFS
- ML Frameworks: PyTorch, MegatronLM, TensorFlow
- Monitoring: Prometheus, OpenSearch, Grafana
Benefits & Compensation
- Equity
- Benefits
- Compensation: $184,000 – $287,500 USD (Level 4); $224,000 – $356,500 USD (Level 5). Equity eligible.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.



