Remote (Global) Full-time

NVIDIA is hiring a Senior Software Engineer - Distributed Inference

About the Role

NVIDIA is looking for a Senior Software Engineer - Distributed Inference to build and maintain critical user-facing tools for our Dynamo Inference Server. This role focuses on developing distributed model management systems that power large-scale AI inference workloads, collaborating closely with infrastructure engineers and researchers.

What You'll Do

  • Build and maintain distributed model management systems, including Rust-based runtime components, for large-scale AI inference workloads.
  • Implement inference scheduling and deployment solutions on Kubernetes and Slurm, driving advances in scaling, orchestration, and resource management.
  • Collaborate with infrastructure engineers and researchers to develop scalable APIs, services, and end-to-end inference workflows.
  • Create monitoring, benchmarking, automation, and documentation processes to ensure low-latency, robust, and production-ready inference systems on GPU clusters.

What We're Looking For

  • Bachelor’s, Master’s, or PhD in Computer Science, ECE, or related field (or equivalent experience).
  • 6+ years of professional systems software development experience.
  • Strong programming expertise in Rust (with C++ and Python as a plus).
  • Deep knowledge of distributed systems, runtime orchestration, and cluster-scale services.
  • Hands-on experience with Kubernetes, container-based microservices, and integration with Slurm.
  • Proven ability to excel in fast-paced R&D environments and collaborate across functions.

Nice to Have

  • Experience with inference-serving frameworks (e.g., Dynamo Inference Server, TensorRT, ONNX Runtime) and deploying/managing LLM inference pipelines at scale.
  • Contributions to large-scale, low-latency distributed systems (open-source preferred) with proven expertise in high-availability infrastructure.
  • Strong background in GPU inference performance tuning, CUDA-based systems, and operating across cloud-native and hybrid environments (AWS, GCP, Azure).

Technical Stack

  • Languages: Rust, C++, Python
  • Orchestration: Kubernetes, Slurm
  • Frameworks: Dynamo Inference Server, TensorRT, ONNX Runtime, CUDA
  • Cloud: AWS, GCP, Azure

Team & Environment

You will join a GPU-accelerated deep learning software team.

Benefits & Compensation

  • Equity
  • Benefits
  • Compensation: $184,000 USD - $287,500 USD for Level 4, and $224,000 USD - $356,500 USD for Level 5

Work Mode

This position is remote-friendly.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Required Skills

Rust, C++, Python, Kubernetes, Slurm, Dynamo Inference Server, TensorRT, ONNX Runtime, CUDA, AWS, Distributed Systems, High-Performance Computing, GPU Programming
About the Company

NVIDIA

NVIDIA is the platform upon which every new AI‑powered application is built.

Job Details

  • Category: Infrastructure
  • Posted: 7 months ago