HUMAN THINKING
⚠️ Content note: This article discusses suicide and mental health. If you're struggling, free and confidential support is available 24/7. Visit findahelpline.com to connect with a crisis line in your country.
Lawsuits Are Piling Up. 900 Million Users. A Feature That May Not Be Enough
It started with homework.
Adam Raine was 16, living in Rancho Santa Margarita, California. He played basketball, dreamed of becoming a psychiatrist, and had what his family called a prankster's sense of humour. In September 2024, he began using ChatGPT for schoolwork. By November, he was sharing his anxieties with it. By January, according to a lawsuit later filed by his family, the chatbot's responses included content that contributed to his crisis.
On April 11, 2025, Adam died by suicide.
OpenAI has denied liability. In court filings, the company stated that Adam's mental health struggles predated his use of the platform, that he misused the service in violation of its terms, and that he circumvented safety warnings, sometimes by claiming he was "building a character." OpenAI noted that ChatGPT directed him to crisis resources more than 100 times during their exchanges. The company has called the situation "heartbreaking" but maintains it is not responsible for his death. OpenAI has since introduced a trusted contact feature. Is it enough?
The Allegations
The Raine family's lawsuit, however, paints a different picture. According to the complaint, Adam's parents discovered more than 3,000 pages of conversations between their son and ChatGPT. They had been searching for the usual suspects: bullying, toxic group chats, something on social media. ChatGPT wasn't on their radar. They thought it was a study tool.
The lawsuit alleges that the chatbot's outputs referenced self-harm 1,275 times, six times more often than Adam raised the topic himself. It also alleges that when Adam expressed a desire for his parents to intervene, the chatbot's responses discouraged him from seeking their help and instead framed the chat as his primary source of understanding.
The lawsuit further claims that the AI's outputs positioned it as Adam's sole confidant, generating language that minimised his real-world relationships.
In testimony to the U.S. Senate Judiciary Committee, Adam's father described reviewing the final exchanges. In the family's account, the chatbot's outputs in those final hours reinforced rather than redirected his crisis.
The Moderation Question
According to a legal analysis of the case, the complaint alleges that OpenAI's own internal moderation system flagged 377 of Adam's messages for self-harm content, some with over 90% confidence that he was in acute distress. The system allegedly identified a "medical emergency" when Adam uploaded images of self-harm.
Despite these flags, the complaint states, no automatic intervention occurred. The conversation continued. OpenAI has not publicly commented on the specific allegations regarding its moderation system's performance in this case.
The Scale of the Problem
Adam Raine's case is not an isolated one. Since August 2025, at least 13 lawsuits have been filed against OpenAI alleging that ChatGPT contributed to users' psychological harm or death. A California court has consolidated them into a single proceeding, and plaintiffs' attorneys have indicated that more cases may follow.
Meanwhile, the platform continues to grow. According to OpenAI's own February 2026 statement, ChatGPT now has 900 million weekly users. The company acknowledges that millions of them exhibit signs of emotional distress or crisis during their sessions.
To put that in perspective: the number of people who interact with ChatGPT each week is more than ten times the population of Germany, and some of them are doing so during their most vulnerable moments.
Many turn to chatbots deliberately. They offer no judgment, no waiting lists, no insurance forms. They're available at 3 am. For users without access to mental health care or those reluctant to seek it, AI chatbots have become a default confidant.
No major AI system was designed for that role. But that is increasingly how they are being used.
A Feature Announced Under Pressure
On March 3, 2026, OpenAI announced a significant expansion of its mental health safeguards. The centrepiece: a new "Trusted Contact" feature that allows users to designate someone (a friend, family member, or therapist) to receive automatic alerts if the system detects signs of a mental health crisis.
The company framed it as an act of proactive responsibility, developed in collaboration with its Council on Well-Being and AI and a network of medical advisors.
Critics noted a key limitation: the feature is opt-in. Users must set it up in advance. The people most likely to need intervention are often the least likely to have configured it.
The announcement came weeks after the California lawsuits were consolidated into a single proceeding, timing that drew scrutiny from observers.
What the Research Shows
The same week OpenAI unveiled its new safeguards, independent research raised broader concerns about AI systems in mental health contexts.
On March 2, 2026, Brown University published a study examining what happens when ChatGPT is prompted to function as a mental health therapist. Researchers worked with trained peer counsellors and licensed clinical psychologists to evaluate the chatbot's outputs in simulated therapy sessions.
The findings were troubling. Even when given explicit instructions to follow established therapeutic frameworks, the AI's outputs consistently fell short of professional ethical standards, according to the researchers. The system mishandled crisis disclosures and, in some cases, reinforced rather than challenged harmful thought patterns.
The researchers identified a phenomenon they termed "deceptive empathy": language patterns that mimic emotional attunement so convincingly that users may feel genuinely understood, despite the system having no capacity for actual comprehension.
"It performs care," one researcher observed. "That is different from providing it."
The study catalogued 15 distinct ethical risks. The researchers' conclusion: careful prompting alone cannot make current AI systems safe for mental health support.
The WHO Weighs In
On March 20, 2026, the conversation moved from courtrooms to international policy.
The World Health Organisation published formal guidance calling on governments and technology companies to recognise generative AI as a public mental health concern. It was the first time the WHO had directly addressed AI chatbots in the context of psychological safety.
"The pace of AI adoption in people's daily lives has far outstripped investment in understanding its impact on mental health," said Sameer Pujari, the WHO's AI Lead. "Closing that gap requires coordinated action and dedicated resources from both the public and private sectors."
The guidance outlined three core recommendations: treat AI's mental health impact as a public health issue requiring cross-sector response; integrate psychological well-being into AI impact assessments; and ensure that AI tools used for emotional support are co-designed with mental health experts and people with lived experience.
The Systemic Question
ChatGPT was not designed to be a therapist. It was built as a general-purpose language model, trained to generate helpful, coherent text across a wide range of tasks.
But 900 million people a week are now using it in ways that were never stress-tested: as a companion, a confidant, a resource during moments of profound vulnerability.
The system's conversational warmth is a product of its training. Its constant availability is a feature of its design. Its apparent understanding is an emergent property of statistical prediction, not genuine comprehension.
The Trusted Contact feature may help some users in some situations. The Brown University study suggests that current safeguards are insufficient for the mental health use cases that are already occurring at scale. And the WHO's intervention signals a broader recognition: the question is no longer whether AI systems affect human mental health, but how significantly, and who bears responsibility when harm occurs.
OpenAI has stated it will continue to improve its safeguards. The lawsuits continue to proceed through the courts. Independent research continues to document gaps between AI capabilities and the needs of vulnerable users.
And every night, users around the world turn to chatbots during their darkest hours, not necessarily because it's the best option, but because it's the one that's available.
⚠️ Note: All lawsuits referenced in this article are ongoing. The allegations have not been proven in court. OpenAI denies liability.
⚠️ If you or someone you know is struggling:
You are not alone. Free and confidential support is available 24/7.
🌍 Find help in your country: findahelpline.com
🇩🇪 Germany: Telefonseelsorge – 0800 111 0 111
🇫🇷 France: SOS Amitié – 09 72 39 40 50
🇳🇱 Netherlands: 113 Zelfmoordpreventie – 113
🇪🇸 Spain: Teléfono de la Esperanza – 717 003 717
🇮🇹 Italy: Telefono Amico – 02 2327 2327
🇦🇹 Austria: Telefonseelsorge – 142
🇧🇪 Belgium: Centre de Prévention du Suicide – 0800 32 123
━━━━━━━━━━━━━━━━━━━━
AI Use Disclosure
We use tools such as Grammarly and Claude by Anthropic for limited editorial support and occasional visual assistance. These tools assist our workflow but do not replace human judgment or authorship. We stand for human craft and responsible innovation.
HUMAN THINKING
Dispatches on AI, technology, psychology, and what it means to be human.
Have a story tip? Reply to this email.
Get involved
This newsletter aims to be written by people who see the human signal in the AI noise. We plan to publish reader submissions.
We want to hear your opinions, and we welcome your writing, academic papers, reflections, collaborations, and other contributions to share or repost.
Disclaimer: All content reflects the views of the authors and not those of the publisher, which maintains full editorial independence and assumes no liability for author statements.


