Research Engineer / Scientist, Alignment Science, London

Anthropic · London, UK
Full Time · machine-learning, ai, python, +5 more
The Research Engineer / Scientist position on Anthropic's Alignment Science team focuses on developing safe and beneficial AI systems. The role involves building and running machine learning experiments to understand AI behavior and collaborating with teams across the company to address AI safety challenges.

Qualifications

  • Experience in machine learning and AI research
  • Strong programming skills, preferably in Python
  • Familiarity with AI safety and alignment concepts
  • Ability to work collaboratively in a team environment
  • Interest in the ethical implications of AI systems

Responsibilities

  • Build and run machine learning experiments to understand AI behavior
  • Contribute to exploratory research on AI safety
  • Collaborate with teams including Interpretability, Fine-Tuning, and Frontier Red Team
  • Develop methods for AI control in adversarial scenarios
  • Create alignment stress-testing methodologies