NVIDIA, a company dedicated to amplifying human imagination and intelligence, is seeking a GPU Computing Engineer for our Autonomous Driving team in Shanghai. In this role, you will analyze complex Deep Learning models and investigate critical stability and performance issues within TensorRT, directly impacting the future of autonomous systems.
What You'll Do
- Analyze Deep Learning models and investigate TensorRT stability and performance issues reported by customers or internal teams.
- Collaborate on CUDA and TensorRT development with an internationally distributed team spanning the US, APAC, and India.
- Distill feature requirements and FAQs from your analysis and development work into technical documentation.
What We're Looking For
- Bachelor's degree or equivalent experience in Computer Science, Electrical Engineering, or a related field.
- 3-5+ years of relevant professional experience.
- Strong programming skills in C, C++, and Python.
- Knowledge of popular inference networks and layers.
- Experience working with deep learning frameworks like Torch and PyTorch.
- Strong written and verbal communication skills in both English and Mandarin.
- Ability to work effectively in a diverse team environment and with cross-site peers.
- Strong customer communication skills and the motivation to provide highly responsive support as needed.
Nice to Have
- A Master's degree.
- Expert-level proficiency in PyTorch.
Technical Stack
- CUDA
- TensorRT
- C
- C++
- Python
- Torch
- PyTorch
Team & Environment
You will work with an internationally distributed team with remote members located in the US, APAC, and India.
Work Mode
This position is based locally in Shanghai.
NVIDIA is an equal opportunity employer.