The AI Security Institute is hiring a Research Engineer for its Societal Impacts team. In this role, you will work with research scientists to design, implement, and run experiments that address critical questions about the effects of frontier AI on society. The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action, positioned within the UK government with direct lines to the Prime Minister's office.
What You'll Do
- Design, implement, and run experiments on the societal effects of AI.
- Work on projects such as implementing human-AI interaction studies with multiple participants.
- Build pipelines to scrape and analyze publicly available AI agent implementations.
- Design and run model evaluations targeting societal risks arising from AI model behavior.
What We're Looking For
- Experience writing scalable, maintainable Python code.
- Experience in one or more of: data engineering (data collection, cleaning, processing, visualization), ML engineering (training/evaluating models with PyTorch or similar), or full-stack web development.
- Knowledge of machine learning sufficient to understand recent papers in the field.
- Strong verbal communication and interpersonal skills, and experience working on a collaborative research team.
- Demonstrable interest in and understanding of the societal impacts of AI.
Nice to Have
- Experience building and maintaining complex data products.
- Experience designing and implementing experiments in human-AI interaction.
- Experience training or evaluating frontier AI models.
- Research experience related to the societal impacts of AI systems.
- A specialization in social or political science, economics, cognitive science, criminology, security studies, AI safety, or another relevant field.
Technical Stack
- Python
- PyTorch
Team & Environment
You will be part of the Societal Impacts team within the AI Security Institute.