
Research Engineer, Privacy
OpenAI
The Research Engineer, Privacy works on OpenAI's Privacy Engineering Team, building privacy into AI systems through privacy-preserving technologies and responsible data use. The position is based in San Francisco, and relocation assistance is offered.
Qualifications
- Hands-on research or production experience with privacy-enhancing technologies (PETs).
- Fluency in modern deep-learning stacks such as PyTorch or JAX.
- Ability to convert cutting-edge research papers into reliable, well-tested code.
- Experience in stress-testing models for private data leakage and explaining complex concepts to non-experts.
- A track record of publishing or implementing novel privacy or security work.
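To make the PETs qualification concrete, here is a minimal sketch of one of the simplest privacy-enhancing techniques, the Laplace mechanism for differential privacy. The function name and parameters are illustrative, not part of any OpenAI codebase; it assumes NumPy.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng=None) -> float:
    """Release a noisy statistic satisfying epsilon-differential privacy.

    sensitivity: max change in the statistic from adding/removing one record.
    epsilon: privacy budget; smaller epsilon means stronger privacy, more noise.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon  # noise scale grows as the budget shrinks
    return true_value + rng.laplace(loc=0.0, scale=scale)
```

For example, releasing a count of 10 with sensitivity 1 and epsilon 0.1 adds Laplace noise with scale 10, masking any individual's contribution.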
Responsibilities
- Design and prototype privacy-preserving machine-learning algorithms such as differential privacy and federated learning.
- Measure and strengthen model robustness against privacy attacks like membership inference and model inversion.
- Develop internal libraries and documentation to make privacy techniques accessible to engineering and research teams.
- Lead investigations into privacy-performance trade-offs of large models and publish insights for model-training decisions.
- Define privacy standards, threat models, and audit procedures for the ML lifecycle.
- Collaborate with Security, Policy, Product, and Legal teams to implement regulatory requirements.
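As a toy illustration of the membership-inference stress-testing mentioned above, the sketch below implements the classic loss-thresholding attack (flag a sample as a training-set member if the model's loss on it falls below a threshold). The function and data are hypothetical examples, not a production audit tool.

```python
import numpy as np

def loss_threshold_attack(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Predict membership: models tend to have lower loss on training data,
    so samples with loss below the threshold are flagged as members."""
    return losses < threshold

# Toy data: training members typically show lower loss than held-out samples.
member_losses = np.array([0.10, 0.20, 0.15])
nonmember_losses = np.array([1.20, 0.90, 1.50])

member_preds = loss_threshold_attack(member_losses, threshold=0.5)
nonmember_preds = loss_threshold_attack(nonmember_losses, threshold=0.5)
```

An attack success rate well above 50% on such a split is one signal that a model is leaking information about its training data.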



