
Research Engineer / Research Scientist, Alignment

OpenAI
The Research Engineer / Research Scientist position on OpenAI's Alignment team focuses on ensuring AI systems are safe and aligned with human values. The role involves designing experiments and scalable solutions for AI alignment, integrating human oversight into AI systems, and addressing complex challenges in AI deployment. The position is based in San Francisco with a hybrid work model.
Qualifications
- PhD or equivalent experience in computer science, computational science, data science, cognitive science, or similar fields.
- Strong engineering skills, particularly in designing and optimizing large-scale systems.
- Experience in AI research and alignment methodologies.
- Ability to work collaboratively in a team environment.
- Familiarity with experimental design and evaluation techniques.
Responsibilities
- Develop and evaluate alignment properties that are subjective, context-dependent, and hard to measure.
- Design evaluations to reliably measure risks and alignment with human intent and values.
- Build tools and evaluations to study and test model robustness in different situations.
- Design experiments to understand how alignment scales with compute, data, context lengths, and adversary resources.
- Design and evaluate new Human-AI interaction paradigms and scalable oversight methods.
- Train models to be calibrated on correctness and risk.
- Design novel approaches for using AI in alignment research.
