Responsibilities
- Build and maintain security gates in continuous integration/continuous delivery (CI/CD) pipelines, integrating AI tooling to improve vulnerability detection and streamline alert triage.
- Conduct periodic internal penetration tests across web, mobile, and AI-integrated applications; manage external audit engagements and drive remediation of findings.
- Assess the security posture of AI-powered features, focusing on threats such as prompt injection, training data poisoning, and insecure handling of model output.
- Facilitate threat modeling workshops with technical leads and software engineers to identify attack surfaces in both conventional systems and large language model (LLM) architectures.
- Run recurring security assessments, analyze findings from automated scanners and penetration tests, and work with development teams to prioritize remediation by severity.
- Develop organizational policies and best practices for the safe adoption of AI coding assistants and third-party AI APIs.
- Perform in-depth manual and automated code review to verify compliance with security requirements, including validation of AI-generated code.
- Serve as a security advisor to product teams, providing guidance on the OWASP Top 10, the OWASP Top 10 for LLM Applications, and secure software development practices.
- Regularly review system availability and performance metrics to ensure operational reliability and support capacity planning.