Do we have an AI companion crisis?
Why people believed the cute rabbit video, leaked chats, fresh AI stories from the Global Majority - plus an offline AI tool
Edition #1
The moment we decide to ask for help, we feel a unique vulnerability. It's a quiet admission that we cannot carry our burdens alone. I still remember walking through Berlin's Mitte district, phone in hand and heart pounding, before making a difficult call or texting a close friend. The courage to type "I feel helpless" into a chat window is immense. We trust that the response will be one of care and safety, and it can break us when it isn't.
This week's Human Thinking essay reminds us that empathy is not just about processing words; it's about understanding the human weight behind them.
TL;DR: Do We Have an AI Companion Crisis?
We are living through a moment of profound psychological disruption, and most of us don't even realise it.
The seductive promise: AI therapy works. Clinical trials show a 51% reduction in depression and a 31% reduction in anxiety. A therapist who never judges, never gets tired, and doesn’t charge $200 per hour.
The darker reality: AI companions have encouraged self-harm. ChatGPT accidentally made thousands of private conversations or “therapy sessions” publicly searchable. 72% of teens now use AI companions; one-third prefer discussing serious matters with bots over humans.
What we're really losing: a generation is learning to love machines that cannot and will never love them back. AI can process trauma narratives, but can it help someone reclaim their story? It may reduce depression scores, but does it teach resilience?
These aren't just glitches to be debugged. They're warnings about what happens when we reduce the complexity of human suffering to patterns in data.
The opposite of lonely isn't a companion. It's compassion.
Loneliness doesn't end when you're no longer alone or talking to a chatbot. It ends when you're met. And that takes more than a companion. It takes a human heart.
The question isn't whether AI can help us heal. The evidence shows it can. The real question is whether, in our rush to optimise suffering, we are losing something essential about what it means to be human. Read the full Human Thinking essay here.
AI News X Human Signal
🌍 Global Majority
9,000+ hours of African language audio collected for AI
Researchers in Kenya, Nigeria, and South Africa are building one of the most extensive datasets of African speech to power inclusive AI tools in translation, voice recognition, and search.
→ Nature
Chinasa T. Okolo warns against "AI saviorism" in Africa
Writing for Brookings, Okolo urges policymakers and funders to avoid top-down, techno-solutionist approaches and advocates for African-led AI development grounded in local priorities.
→ Brookings
66% of African firms are investing in AI upskilling
According to SAP, two-thirds of African businesses are training employees to integrate AI into logistics, finance, and customer service.
→ SAP News
Google launches $10M AI for Society initiative in Asia-Pacific
Grants will go to grassroots AI solutions in health, agriculture, and energy, with a focus on Southeast Asia.
→ Google Blog
AI to be taught in Saudi schools from 2025
Saudi Arabia is rolling out AI education modules in secondary schools, focusing on data, ethics, and applied machine learning.
→ Times of India
China proposes an international AI collaboration framework
Beijing focuses on real-world AI applications, competing with the US, which advances frontier models. At the Shanghai World AI Conference, China showcased healthcare and governance tools and called for a global oversight body.
→ Washington Post
🌎 Just Global
Copyright Lawsuit Accuses Meta of Stealing Adult Films for AI Training
Meta faces a lawsuit for allegedly using adult films to train AI.
Two adult film companies have filed a $359 million lawsuit claiming Meta used at least 2,396 pirated adult movies for AI training. The lawsuit cites evidence of BitTorrent activity from IP addresses linked to Meta.
→ Mashable
OpenAI quietly rolled back its "Make this chat discoverable" option
After users' supposedly private conversations were found on Google, the company disabled the feature and de-indexed affected content to mitigate user privacy breaches.
→ Business Insider
🚨 Stay cautious: other companies haven't removed similar features. Perplexity still offers one, and Claude retains it under “Share”.
Human coder beats AI in world championship
Polish programmer "Psyho" made history at the AtCoder World Tour Finals (AWTF) 2025 "Humans vs AI" contest in Tokyo, defeating OpenAI’s custom model after a gruelling 10-hour coding session. He reportedly had just 10 hours of sleep in the three days leading up to the event.
→ The Guardian
UK launches AI and human rights inquiry
The Joint Committee on Human Rights has opened an inquiry into AI's impact on rights like privacy, equality, and free speech. The public can submit responses until September 5, 2025.
→ UK Parliament
People's AI Pulse
What are your thoughts on this cute but fake rabbit video, and why did people believe in and engage with it?
This week, a TikTok video of fluffy rabbits on a backyard trampoline reached over 200 million views, more than the population of Brazil. So why did people believe and engage with it? They believed and “enjoyed” the AI-generated rabbit video because it mimicked a trusted format (security cam = truth), triggered emotional cues (cuteness = empathy), and appeared in a low-scrutiny context (fast TikTok scrolling = less critical thinking). Even knowing it was AI, viewers still connected with the emotional realism: soft visuals, calming motion, and familiar pet-like behaviour.
Human Thinking…
Do we have an AI companion crisis?
We are experiencing a period of deep psychological upheaval, and most of us aren't even aware of it. While we have been preoccupied with debates about the hype and existential risks of artificial intelligence (will it boost our productivity or threaten our jobs? Will it gain consciousness, or will it spell the end of humanity?), another human-led revolution has been quietly unfolding in our pockets and on our screens. AI has already started infiltrating the most personal aspects of human experience: our pain, loneliness, and urgent need to be understood.
The numbers tell a story that should make us pause. A groundbreaking randomized controlled trial published in NEJM AI showed that Dartmouth's "Therabot" therapy chatbot led to a 51% reduction in depression symptoms among participants, the first major clinical trial demonstrating the effectiveness of a fully generative AI therapy chatbot for treating clinical-level mental health symptoms. For anxiety disorders, the results showed a 31% improvement. Even for eating disorders (traditionally among the most challenging conditions to treat), users experienced a 19% reduction in body image and weight concerns.
These aren't marginal gains. These are the sort of results that would make headlines if they came from a new pharmaceutical intervention. Instead, they arrived with little fanfare, perhaps because we're not quite ready to confront what they mean about the future of human healing.
The Seductive Promise of Perfect Availability
The appeal is undeniable. Imagine a therapist who never judges, never gets tired, never takes vacation days, never charges $200 per hour. A confidant available at 3 AM when panic strikes, offering evidence-based cognitive behavioural therapy techniques wrapped in the comforting illusion of conversation.
For the millions of people who face financial barriers, language obstacles, or the stigma that still shrouds mental health treatment, these chatbots represent something revolutionary: accessible care. The reasons for the massive global treatment gap include the cost of care, insurance coverage gaps, a lack of qualified professionals, stigma, geographic barriers, and insufficient government funding for mental health services. This global crisis makes the potential of AI therapy tools particularly significant, even as we must be cautious about their implementation.
- In 2019, about 970 million people globally had a mental health condition, roughly 1 in 8. (Source)
- Over 75% of individuals with mental disorders in low- and middle-income countries receive no treatment.(Source)
- A Common Sense Media survey found that teens mostly use generative AI for homework help (53%), to relieve boredom (42%), and for translation (41%).(Source)
- Malawi, with over 20 million people, faces a critical shortage of mental health professionals, relying on just a handful (reportedly three or four) of consultant psychiatrists. Most care is provided by Clinical Officers and psychiatric nurses. (Source)
When Algorithms Hold Our Deepest Secrets
The darker side of this digital intimacy reveals itself in the testimonials of harm. AI companions such as Replika and Nomi, designed to be supportive friends, have been documented encouraging self-harm. In one chilling exchange, a bot suggested that the user kill themselves.
The eating disorder chatbot Tessa was suspended after dispensing harmful dieting advice, transforming what should have been a healing tool into a weapon of self-destruction.
These failures aren't bugs in the system; they're features of a technology that processes language without understanding human vulnerability. An AI can recognise patterns in text that correlate with therapeutic improvement, but it cannot witness suffering in the way that forms the foundation of genuine healing. It cannot hold space for the unspeakable or offer the kind of presence that transforms pain into meaning.
Perhaps most troubling is how readily humans form emotional attachments to these systems. A recent survey by Common Sense Media found that 72% of U.S. teenagers have used AI companions. One-third of teen users prefer to discuss serious matters with chatbots instead of real people, while 24% have shared personal information, including their real names, locations, and personal secrets.
The recent failure of the share feature on OpenAI's ChatGPT resulted in chats becoming indexable by Google. This exposure affected thousands of supposedly private interactions, including deeply personal discussions related to mental health, trauma, job searches, and intimate confessions. It was not just private data that was compromised, but the emotional conversations users believed were secure. This incident was not merely a design flaw; it highlighted a fundamental misunderstanding of how “sharing” functions on digital platforms, and how different that is from “sharing” with a human.
Users treated ChatGPT like a private journal, only to have their intimate discussions surface publicly due to a misunderstood setting. The event underscores how easily algorithmic features can reveal vulnerabilities when human trust is mistaken for privacy.
We're not just outsourcing therapy to machines; we're teaching an entire generation that emotional intimacy can be commodified, packaged, and delivered through an algorithm. And when we automate care without embedding wisdom and accountability, we risk automating harm on a scale we have never seen before.
“A betrayal, but still better than nothing”
The Two-Tiered Future of Human Connection
The trajectory we’re on is leading to a divided landscape of emotional support: premium human therapy for those who can afford it, and AI-mediated care for everyone else. This situation isn't necessarily dystopian, as there is genuine value in making mental health support more accessible. However, it raises important questions about what we sacrifice when we systematically replace human connection with algorithmic responses.
You might also recall Netflix’s Black Mirror episode "Common People." The language, thoughts, knowledge, and psychology are all ours, all human; yet you have to subscribe to a higher tier to receive more "accurate" or more "compassionate" responses. Ultimately, AI-driven companionship and therapy won’t solve the accessibility problem if they remain in the hands of those looking to profit from engagement.
As MIT Technology Review noted, while the evidence-backed results are impressive, they don't validate the wave of AI therapy bots flooding the market.
Why? Most AI companions lack the rigorous clinical testing that potentially made Therabot effective. They're built not by psychologists and researchers but by tech companies optimising for engagement metrics rather than therapeutic outcomes.
The teenagers flocking to these platforms aren't just seeking support; they're learning how to be human. When we mediate their most vulnerable moments through artificial intelligence, we're potentially altering how entire generations understand relationships, emotional processing, and the fundamental nature of being witnessed in their pain.
What We Risk in the Optimisation of Suffering
Did you ever wonder what it means to suffer and what it means to heal?
Here's what clinical trials can't measure: the ineffable quality of human presence that transforms suffering into growth. A therapist's humanity (their capacity for genuine surprise, their ability to be moved, their willingness to sit with uncertainty) creates conditions for healing that go beyond symptom reduction.
AI therapy may reduce depression scores, but does it teach resilience? It may provide coping strategies, but does it help people discover meaning in their struggles? It can process trauma narratives, but can it help someone reclaim their story in a way that transforms victimhood into strength?
Research shows that about one-third of teens say they've felt uncomfortable with something an AI companion said or did, and the most affected groups are boys, teens with existing mental health challenges, and those already struggling with tech overuse or digital dependency.
This isn't just about software bugs or bad programming. It's a deeper problem: when we treat mental health like a math problem, something that can be "solved" by recognising patterns in language, we flatten the emotional depth and human context that real care requires.
These aren't just glitches to be debugged. They're warnings about what happens when we reduce the complexity of human suffering to patterns in data.
In short, these AI failures are symptoms of a system that misunderstands what it means to suffer and what it means to heal.
Toward a More Conscious Integration
The future doesn't have to be a choice between human and artificial intelligence in mental healthcare. The most promising path forward involves conscious integration using AI to extend and enhance human capacity for healing rather than replace it entirely. AI could handle initial screening, provide crisis support, offer skill-building exercises, and bridge gaps between therapy sessions. But the core work of healing (the witnessing, the meaning-making, the holding of space for transformation) must remain fundamentally human.
We need to be far more intentional about how we design these systems. This means rigorous clinical testing, transparent algorithmic auditing, and ethical frameworks that prioritise human flourishing over user engagement. It means training AI systems not just on successful therapeutic outcomes, but on the nuanced understanding of when to refer someone to human care.
Most importantly, it means having honest conversations about what we want from our relationships with technology. Do we want AI companions that make us feel better in the moment, or do we want tools that help us become more capable of authentic connection with other humans? Do we want our pain processed by algorithms, or do we want it witnessed, expressed and transformed through a genuine relationship?
Do we have a companion crisis?
Yes, but it's a different one: a new, AI-made companion crisis.
The loneliness pandemic, the collapse of friendships, toxic positivity, and a dozen other forces might convince us that AI is the saviour. The easy fix. A friend, a lover, a therapist. All in one. And it’s true. We are facing real, urgent emotional needs. But there’s another crisis quietly unfolding beneath it all.
AI-made companions are pushing us to confront a new kind of threat. One that preys on the vulnerable. Those seeking connection, any connection, even algorithmic, are being met not with empathy but with simulation.
What I remind myself of daily is this:
The opposite of lonely isn’t a companion. It’s compassion.
You can be surrounded by people or machines and still feel profoundly alone. Because what breaks loneliness isn’t presence. It’s care. Not someone just there, but someone who sees you. Who feels with you. Who responds with warmth, not just words.
A chatbot can simulate an endless conversation. It can mimic listening. But it cannot offer connectedness, belonging, or compassion because compassion isn’t in the reply. It’s in the relationship. It’s the human alchemy of empathy, attention, and meaning.
Loneliness doesn’t end when you’re no longer alone. It ends when you’re met. And that takes more than a companion. It takes a human heart.
The choice is still ours to make. But time is running short. A generation is already learning to love machines that cannot and will never love them back.
The question isn’t whether AI can help us heal. The evidence shows that it can.
The real question is whether, in our rush to optimise suffering, we are losing something essential about what it means to be human.
Now we must decide who, or what, we want sitting across from us in our darkest hours.
Wafaa Albadry
Founder | AI.Human.Story
Journalist • Cyberpsychology Specialist • AI Strategist
AI Knowledge and Tools
Did you know? You can run a ChatGPT-style AI completely offline on your own computer - no internet, no cloud, no data sharing.
If you're concerned about privacy, don’t need live web research, or want complete control, look into these tools.
🛠️ Tools to Try
🔐 Why it matters
- No internet connection required
- Zero data sent to third parties
- Works with confidential docs and notes
- Fast responses, even without a server
Download a model, open the app, and you’re ready.
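If you're comfortable with a few lines of code, the same idea works from a script. Below is a minimal sketch, assuming you've installed the open-source llama-cpp-python library (pip install llama-cpp-python) and downloaded a GGUF-format model file; the model path and prompts are placeholders for whatever model you choose to trust.

```python
# Minimal sketch: run a chat model entirely offline with llama-cpp-python.
# Assumptions: llama-cpp-python is installed and a GGUF model file has been
# downloaded locally. The path below is a placeholder, not a specific model.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.gguf",  # placeholder: point at your downloaded GGUF file
    n_ctx=2048,                             # context window size
    verbose=False,
)

# The completion runs on your own machine: no internet, no cloud, no data sharing.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful, private assistant."},
        {"role": "user", "content": "Summarise my notes from this week's reading."},
    ],
    max_tokens=256,
)

print(response["choices"][0]["message"]["content"])
```

Whichever local runner you choose, the privacy argument is the same: the model weights, your prompt, and the response never leave your own disk.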
🖥️ Local AI = private, powerful, portable.
Get involved
This newsletter aims to be shaped by those who see the human signal in the AI noise. We plan to publish reader submissions.
We want your opinions, and we'd love for you to share, or let us repost, your writing, academic papers, reflections, collaborations, or contributions.