NVIDIA is hiring a Senior Solutions Architect, HPC and AI to support customers and partners across Europe. In this role, you will be instrumental in deploying, debugging, and optimizing training and inference workloads on large-scale GPU clusters. You'll focus on solving complex challenges at the intersection of High Performance Computing and AI, including scaling workloads and contributing to Europe's Sovereign AI initiative.
What You'll Do
- Collaborate with NVIDIA’s training framework developers and product teams to stay ahead of the latest features and help partners adopt them effectively.
- Assist with deploying, debugging, and improving the efficiency of AI workloads on large-scale NVIDIA platforms.
- Benchmark new framework features, analyze performance, and share actionable insights with customers and internal teams.
- Work directly with external customers to solve cluster performance and stability issues, identify bottlenecks, and implement solutions.
- Build expertise and guide customers in scaling workloads efficiently and reliably on the latest generation of NVIDIA GPUs.
- Contribute to Europe’s Sovereign AI initiative by helping customers implement advanced resiliency features within AI training pipelines.
What We're Looking For
- BS, MS, or PhD in Computer Science, Electrical/Computer Engineering, Physics, Mathematics, or a related engineering field, or equivalent practical experience.
- 8+ years of experience in accelerated computing technologies at cluster scale, ideally including work with NVIDIA platforms.
- Strong programming skills in at least one of the following languages: C, C++, or Python.
- Practical experience identifying and resolving bottlenecks in large-scale training workloads or parallel applications.
- Hands-on experience in profiling and debugging large parallel applications.
- Solid understanding of CPU and GPU architectures, CUDA, parallel filesystems, and high-speed interconnects.
- Experience working with large compute clusters, including an understanding of their internal scheduling and resource-management mechanisms (e.g., SLURM or cloud-based clusters).
- In-depth knowledge of training pipelines and frameworks, including their internals and performance characteristics.
Nice to Have
- Experience debugging training pipelines running on thousands of GPUs in a production environment.
- Hands-on experience with performance profiling and optimization using tools such as Nsight Systems and Nsight Compute, plus a good understanding of NCCL, MPI, and other low-level communication libraries.
- Ability to debug stability issues across the entire stack: parallel application, training frameworks, runtime libraries, schedulers, and hardware.
- Solid understanding of the internals of LLM frameworks such as PyTorch, Megatron-LM, or NeMo and how they exercise compute layers (CPUs, GPUs, network, and storage), or familiarity with inference tools such as vLLM, Dynamo, TensorRT-LLM, RedHat Inference Server, or SGLang.
Technical Stack
- C, C++, Python, CUDA, SLURM, Nsight Systems, Nsight Compute, NCCL, MPI
- PyTorch, Megatron-LM, NeMo, vLLM, Dynamo, TensorRT-LLM, RedHat Inference Server, SGLang
Work Mode
This role follows a local-country work mode, based in Europe.


