
Senior Researcher — Safety Systems, Misalignment Research
OpenAI
The Senior Researcher role on the Safety Systems team at OpenAI focuses on misalignment research to support the safe deployment of AGI. The position involves designing worst-case demonstrations, conducting adversarial evaluations, and building automated tools for red-teaming and stress testing. The researcher will collaborate with teams across the company and publish findings that inform safety strategies and practices.
Qualifications
- Passion for red-teaming and AI safety.
- Experience in designing and executing adversarial evaluations.
- Strong understanding of AGI alignment risks and safety measures.
- Ability to conduct rigorous research and publish findings.
- Experience collaborating with cross-functional teams.
Responsibilities
- Design and implement worst-case demonstrations to illustrate AGI alignment risks.
- Develop adversarial and system-level evaluations based on demonstrations.
- Create automated tools and infrastructure for red-teaming and stress testing.
- Conduct research on failure modes of alignment techniques and propose improvements.
- Publish influential papers to shift safety strategy or industry practices.
- Collaborate with engineering, research, policy, and legal teams to integrate findings into product safeguards.
- Mentor engineers and researchers to promote a culture of rigorous safety work.
