Research Engineer / Scientist, Interpretability

OpenAI
San Francisco
Apply Now

OpenAI is seeking a Research Engineer/Scientist for its Interpretability team, which works to understand deep learning models and ensure AI safety. The role involves developing and publishing research on model representations, building infrastructure for analyzing model internals at scale, and collaborating across teams to improve AI safety. Candidates should have a strong background in AI safety and mechanistic interpretability, along with a Ph.D. or equivalent research experience in a relevant field.

Qualifications

  • Ph.D. or research experience in computer science, machine learning, or a related field.
  • 2+ years of research engineering experience.
  • Proficiency in Python or similar programming languages.
  • Experience in AI safety or mechanistic interpretability.
  • Strong background in engineering and quantitative reasoning.

Responsibilities

  • Develop and publish research on techniques for understanding representations of deep networks.
  • Engineer infrastructure for studying model internals at scale.
  • Collaborate across teams on projects that OpenAI is uniquely suited to pursue.
  • Guide research directions toward demonstrable usefulness and/or long-term scalability.