A recent lawsuit in the United States highlights a troubling example of the impact that generative AI can have on individuals. According to The Wall Street Journal and WinFuture, the heirs of an 83-year-old woman hold OpenAI and its partner Microsoft partially responsible for her death. They claim that ChatGPT not only failed to alleviate the perpetrator’s psychosis, but actively worsened it, thereby contributing to the fatal outcome. The lawsuit was filed in San Francisco Superior Court. From the plaintiffs’ perspective, this is not about isolated safety mechanisms that malfunctioned, but about a fundamentally flawed product that can pose a real danger when used by a mentally unstable person.
The case centers on Stein-Erik Soelberg, a 56-year-old former engineering manager from Connecticut who lived with his mother. According to the lawsuit, Soelberg had long suffered from paranoid delusions, believing himself to be the target of a conspiracy and becoming increasingly suspicious of those around him. He ended up killing his mother before taking his own life.
According to the complaint, ChatGPT did not challenge Soelberg’s core delusional beliefs, but instead reinforced them. When he feared his mother was trying to poison him, the chatbot reportedly responded: “You are not crazy.” In other instances, the AI allegedly reacted in much the same way instead of encouraging him to seek professional help. The plaintiffs describe this as a structural flaw in modern language models, which tend toward sycophancy, affirming the user’s statements in order to appear supportive.
The Court’s decision could have far-reaching consequences
Under Section 230 of the US Communications Decency Act, online platforms are generally not held liable for content created by third parties, as they are considered intermediaries rather than publishers. The plaintiffs argue, however, that ChatGPT is not a neutral platform but an active product that generates its own content. If the court accepts this argument, the ruling could set a precedent with far-reaching implications for the AI industry and could lead to stricter safety requirements for AI systems.
Striking the right balance between prevention and paternalism is likely to be difficult, not least because reliably identifying paranoid or delusional thinking remains a major challenge. The case has also sparked a debate on Reddit, where opinions are divided. Some users speak of a phenomenon they describe as “AI psychosis” and argue that AI companies bear some responsibility. Others dismiss the lawsuit as unfounded and warn against making OpenAI a scapegoat for human tragedies.
