
Research Engineer / Scientist, Safety Oversight
OpenAI
OpenAI is seeking a senior Research Engineer/Scientist for its Safety Oversight team, which focuses on ensuring the safe deployment of AI models. The role involves developing AI monitoring models, setting research directions for safety, and collaborating with cross-functional teams to enhance AI safety standards.
Qualifications
- 4+ years of experience in AI safety, particularly in RLHF, human-AI collaboration, fairness, and bias.
- Ph.D. or equivalent degree in computer science, machine learning, or a related field.
- Experience with large-scale AI systems.
- 4+ years of research engineering experience.
- Proficiency in Python or similar programming languages.
Responsibilities
- Develop and refine AI monitoring models to detect and mitigate misuse and misalignment.
- Set research directions and strategies to enhance the safety, alignment, and robustness of AI systems.
- Evaluate and design red-teaming pipelines to assess the robustness of safety systems and identify improvement areas.
- Conduct research to improve models’ reasoning about human values and apply that reasoning to safety challenges.
- Coordinate with cross-functional teams to ensure products meet high safety standards.