About the Role
This position involves researching and building privacy-enhancing methods for AI models, focusing on techniques that protect sensitive data while preserving model performance and utility.
Responsibilities
- Design and implement privacy-preserving machine learning techniques
- Collaborate with research teams to integrate safeguards into AI models
- Evaluate the effectiveness of privacy mechanisms in real-world scenarios
- Conduct experiments to measure data leakage risks in trained models
- Develop tools for detecting and mitigating privacy vulnerabilities
- Publish findings in academic or technical venues
- Stay current with advancements in differential privacy and cryptographic methods
- Assess trade-offs between model accuracy and privacy guarantees
- Work on anonymization techniques for training datasets
- Improve model interpretability to support privacy audits
- Contribute to open-source privacy projects when applicable
- Support internal reviews of data handling practices
- Prototype new methods for secure model training
- Analyze legal and regulatory requirements related to data privacy
- Help define best practices for private AI development
- Collaborate across disciplines including ethics, policy, and engineering
- Document research processes and technical decisions
- Participate in peer review of proposed privacy solutions
- Refine metrics for measuring privacy risk
- Assist in red-teaming exercises focused on data exposure
- Explore federated learning approaches for decentralized training
- Investigate model inversion and membership inference attacks
- Optimize privacy-preserving algorithms for scalability
- Support deployment of privacy features in production systems
- Engage with external research communities on privacy topics
Compensation
Competitive salary and benefits package
Work Arrangement
Hybrid or remote options available
Team
Part of the technical research and safeguards team focused on AI safety and privacy
Research Focus
The role emphasizes advancing the state of the art in AI privacy through original research, experimentation, and practical implementation. Work includes developing novel techniques to prevent unauthorized data access and to ensure models do not expose sensitive training information.
Impact
Research outcomes are intended to directly inform the design of safer AI systems, contributing to responsible deployment at scale. Success is measured by both technical innovation and real-world applicability of privacy safeguards.
Visa sponsorship available for qualified candidates