OpenAI AI Risk Manager Job | Safety & Ethics Focus

OpenAI is currently seeking a new Head of Preparedness, a role responsible for anticipating and mitigating the risks posed by the rapid development of artificial intelligence.

TL;DR

  • OpenAI is recruiting a new Head of Preparedness.
  • A strategic position addressing risks related to AI and ChatGPT.
  • Recent turnover within the safety teams.

Governance in flux at OpenAI

The artificial intelligence giant OpenAI is, once again, at a turning point in its risk management strategy. After a particularly turbulent year, the company led by Sam Altman is actively looking for a new Head of Preparedness.

This position, considered strategic, aims to anticipate potential misbehavior by its models, at a time when the company has been shaken by several controversies over ChatGPT's impact on mental health, some of which have even led to wrongful death lawsuits.

Safety and anticipation: a central challenge

The outlines of the role could not be clearer: the future hire will oversee the entire technical strategy for the "Preparedness Framework," the architecture designed to monitor and supervise emerging AI capabilities likely to create "major risks." According to the job description, the position carries annual compensation of $555,000, not including equity. But the task promises to be demanding: "This is a stressful position, you will be thrown into the deep end immediately," warns Sam Altman, who relayed the offer on X.

Internal movements and instability within teams

If this announcement is attracting so much attention, it is also because OpenAI's safety teams have experienced many upheavals. The former Head of Preparedness, Alexander Madry, left the position in July 2024. His succession was entrusted to two figures within the company: Joaquin Quinonero Candela, then Lilian Weng. This stability was short-lived, however: only a few months later, Lilian Weng left the ship, and in July 2025 Joaquin Quinonero Candela announced his own departure from the preparedness division to join the recruiting department.

The urgency of a clear course in the face of AI risks

Faced with these successive changes, the question of a clear guideline is becoming pressing. The issues raised by the rapid development of generative AI models, whether psychological impacts or misuse, are now forcing OpenAI to strengthen its capacity for anticipation and oversight. The company admits it glimpsed the first weak signals about mental health consequences as early as 2025: "The potential impact on mental health was already noticeable in 2025, among other very real challenges arising with advances in models," Sam Altman recently pointed out.

Among the priorities of the future Head of Preparedness are:

  • Ensuring rigorous monitoring of new technological risks.
  • Leading a proactive internal policy against potential abuses.

It remains to be seen whether this recruitment will finally bring lasting stability to safety management at OpenAI, as the ecosystem watches its every move.
