
Technical Lead, Safety Research
OpenAI
The Technical Lead for Safety Research at OpenAI will spearhead initiatives to enhance AI safety and alignment, focusing on developing strategies to mitigate risks arising from AI misalignment and model errors. This role involves setting research directions, collaborating with cross-functional teams, and conducting advanced research on AI safety topics. The position is based in San Francisco, CA, and follows a hybrid work model.
Qualifications
- Strong track record of practical research on safety and alignment, ideally in AI and LLMs.
- Experience leading large research efforts in the field.
- Ability to set north-star goals and milestones for research directions.
- Experience in developing evaluations to track progress in safety research.
- Strong collaboration skills to work across safety research and related teams.
Responsibilities
- Set research directions and strategies to enhance AI safety, alignment, and robustness.
- Coordinate and collaborate with cross-functional teams to ensure AI meets safety standards.
- Evaluate and understand the safety of AI models and systems, identifying risks and proposing mitigation strategies.
- Conduct state-of-the-art research on AI safety topics such as reinforcement learning from human feedback (RLHF), adversarial training, and robustness.
- Implement new methods in OpenAI’s core model training and launch safety improvements in products.