Binance is seeking an Algorithm Engineer with a specialized focus on Large Language Model (LLM) Safety. This role is dedicated to pioneering AI safety protocols, developing safety algorithms, and implementing evaluation frameworks to ensure the responsible and secure deployment of LLMs within our ecosystem.
What You'll Do
- Research, design, and implement safety-first algorithms for large language models.
- Develop and refine frameworks to evaluate and mitigate potential risks in AI-generated content and behavior.
- Collaborate with cross-functional AI teams to integrate safety measures throughout the model development lifecycle.
- Analyze model outputs to identify safety vulnerabilities and propose algorithmic improvements.
What We're Looking For
- Proven experience in algorithm development, particularly in natural language processing and large language models.
- Strong background in AI ethics, safety methodologies, and adversarial testing.
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
- Ability to work independently in a remote, globally distributed team environment.
Work Mode
This is a remote position. Candidates can be based in Taiwan (Taipei), Thailand (Bangkok), Australia (Brisbane, Melbourne, Sydney), Indonesia (Jakarta), Hong Kong, Mexico (Mexico City), New Zealand (Auckland, Wellington), Philippines (Manila), or Poland (Krakow, Warsaw).
Binance is an equal opportunity employer.