Research Engineer / Scientist, Robustness & Safety Training

OpenAI · San Francisco
OpenAI is seeking a senior researcher for the Safety Systems team, focused on AI safety and robustness. The role involves conducting research on AI safety topics, implementing safety improvements in AI models, and collaborating with cross-functional teams to uphold high safety standards in AI deployment. Candidates should have a strong background in AI safety, a Ph.D. in a relevant field, and experience in safety work for AI model deployment.

Qualifications

  • 4+ years of experience in AI safety, particularly in RLHF, adversarial training, robustness, fairness, and bias.
  • Ph.D. or other degree in computer science, machine learning, or a related field.
  • Experience in safety work for AI model deployment.
  • In-depth understanding of deep learning and its applications in AI safety.
  • Passion for AI safety and commitment to OpenAI’s mission of building safe AGI.

Responsibilities

  • Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, and robustness.
  • Implement new methods in OpenAI’s core model training and launch safety improvements in products.
  • Set research directions and strategies to enhance AI systems' safety, alignment, and robustness.
  • Coordinate and collaborate with cross-functional teams, including Trust & Safety (T&S), legal, policy, and other research teams.
  • Evaluate and understand the safety of models and systems, identifying risks and proposing mitigation strategies.