Research, design, and implement innovative security methods to protect AI agents from evolving threats such as prompt injection, context poisoning, and adversarial behaviors, working closely with engineering teams to turn research into production-grade security controls.
Responsibilities
- Investigate emerging threats targeting agentic systems, including prompt injection, context poisoning, adversarial content embedding, and exploitation of agent reasoning and planning functions.
- Design scalable workflows that secure interactions between AI agents and web-based systems.
- Create new detection and mitigation strategies to identify malicious prompts, unsafe contextual inputs, and adversarial actions in LLM-driven agents.
- Integrate security controls into agentic runtime environments to ensure safe processing and use of external data.
- Collaborate with applied engineers to deploy research-based security solutions in production, balancing effectiveness with agent performance.
- Model potential threats by continuously analyzing the evolving AI threat landscape and anticipating risks as agent capabilities advance.
- Develop defensive capabilities within browser surrogates to detect and block sophisticated context poisoning and injection attacks in web content.
Requirements
- Bachelor of Science in Computer Science or substantial experience in large-scale cloud engineering; a relevant master's or doctoral degree is a strong plus.
- Minimum of three years in applied AI with demonstrated success deploying high-scale AI systems in production; direct experience with production agentic systems is a significant advantage.
- Expert proficiency in Python programming.
- Extensive experience with Kubernetes and cloud-native orchestration technologies.
- Strong skills in advanced data modeling and version control practices.
- Substantial background in cybersecurity or browser-related technologies is highly preferred.
- In-depth knowledge of prompt engineering methods and their potential for exploitation in agentic environments.
- Ability to navigate ambiguous technical challenges, test novel approaches, and iterate toward effective security outcomes.
Nice to Have
- Practical experience with orchestration frameworks such as LangChain or AutoGen, and/or standardized communication protocols like MCP.
- Experience developing immutable event streams and high-speed data pipelines for real-time traffic analysis.
- Familiarity with web rendering processes and programmatic manipulation of the DOM or Accessibility Tree to improve security.
- A security-first approach with a focus on building systems that are auditable, traceable, and resilient to failure.
Tech Stack
Python, Kubernetes (k8s), Cloud-native orchestration, Version control, Data modeling, LangChain, AutoGen, MCP, DOM manipulation, Accessibility Tree
Benefits
- Opportunities to take initiative and implement new ideas
- A hand in building a legacy within a growing company
- Collaborative, inclusive, and fun culture
Team
400 employees (growing); collaborative engineering and research teams spanning security and AI domains
Values
- Stay Aligned
- Get It Done
- Customer Empathy
- Think Creatively
- Help Each Other Out
Additional Information
- All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or disability.
- Agencies must have a valid services agreement and be assigned by the Talent team to submit resumes via Ashby (ATS).
- Resumes submitted outside of this process will be considered the sole property of the company.
- No fees or payments will be issued for candidates submitted outside the official agency policy.