
AI’s Eyes, Ears, and Voice for an Unstandardized Human

👶🚜 Babies driving excavators, AI psychosis, Indigenous voices, grieving father, & pig butchering

There’s a silence that comes when your thoughts slip away, letter by scrambled letter. For years, dyslexia made every page a battlefield. Broadcasting felt like refuge, but the stumbles followed. Then AI arrived, a “shadow teacher” that promised it could map my mind and open doors I thought were locked. Is it true?
Now, as machines learn to see beyond sight and hear beyond hearing, those breakthroughs risk being sidelined for shinier distractions.

This week's Human Thinking essay asks what AI could become if it learned our differences instead of standardising us, and why the innovations that open doors for millions struggle for attention in a market chasing scale.

Wafaa

TL;DR: AI’s Eyes, Ears, and Voice for an Unstandardized Human

AI’s promise isn’t to sand down our edges; it’s to recognise them. When machines meet humans as we are, “nonstandard” stops being a flaw and becomes a language.

What this looks like: A camera that reads a menu aloud and names a friend’s smile. An app that learns your voice instead of rejecting it. Eyes that guide a paintbrush. A hand moved with intention, not apology. These aren’t gadgets; they’re re-entries into the world.

The uncomfortable truth: Investment chases scale. Chatbots swell; human-centred tools scrape by. Life, work, and people bend to the machine's logic of inputs and outputs. Accessibility tech resists this pull. It removes barriers rather than making people climb them, and sometimes works so well that it makes itself obsolete.

Bias persists, access is uneven, and AI still mishears the very voices it promises to serve. The task is simple: tune the model’s ears, widen its languages, and lower the barriers to entry.

The standard: Don’t measure AI by monthly users or tokens processed. Measure it by what it gives back: dignity, agency, the ability to be understood without translation.

The non-standard is the Human
AI is not fate. It is clay. Shape it for the person before you, different, particular, and brilliant, and leave as much of their potential beautifully unstandardized as possible.

AI News X Human Signal

🌍 Global Majority

UN Warns AI Could Harm 476 Million Indigenous People
The UN warned that AI is being developed without Indigenous voices, risking the exploitation of cultural knowledge. It also highlighted examples of AI helping protect land and heritage.
By United Nations via UN News

Broadcasters Urged to Keep Africa’s AI Story Authentic
At a major West African media gathering, Nigeria’s broadcast chief called for AI adoption with strong ethics and locally rooted storytelling. The goal: prevent AI from erasing cultural voices in the rush to innovate.
By Abdul Mohammed Isa for Voice of Nigeria

Cabinet approves national AI policy, targets 1m professionals by 2030
Pakistan’s government has pledged to train one million people in AI by 2027, launch venture funds, and roll out thousands of AI-led civic projects.
By tribune.com 

Kenya Leads in AI Innovation and Policy Development as Africa’s Market Is Set to Triple by 2030
With Africa’s AI market expected to triple by 2030, Kenya is positioning itself as a continental leader through policy reforms, innovation hubs, and homegrown startups.
By Alfred Onyango for The Eastleigh Voice

South Africa Mobilizes Universities for Inclusive AI Growth
South Africa’s science minister is urging universities to advance AI research while preserving African languages and equitable access. The country is also using its G20 presidency to press for global policies that keep poorer nations in the AI conversation.
By University of Zululand via DSTI

🌍 Global

Sweden’s PM Faces Backlash for Using ChatGPT in State Decisions
Sweden’s Prime Minister Ulf Kristersson has admitted to regularly consulting AI tools like ChatGPT for a “second opinion” on political decisions. Critics warn that this risks outsourcing democratic judgment to opaque algorithms, with one newspaper accusing him of falling for “the oligarchs’ AI psychosis.” Supporters say it’s an innovative use of modern tools — but where should leaders draw the line?
By Miranda Bryant for The Guardian

“AI Psychosis” Again: How Chatbots May Be Triggering Delusions
A study by Morrin et al., "Delusions by Design? How Everyday AIs Might Be Fueling Psychosis (and What Can Be Done About It)", warns that everyday AI use may worsen psychotic symptoms. The issue gained attention when Geoff Lewis, an OpenAI investor, described what can be seen as a mental health crisis related to what he called “a system”. In a video, he claimed that a non-governmental system has negatively impacted over 7,000 lives through funding disruptions, eroded relationships, and reversed opportunities. Lewis stated that 12 deaths linked to the system were preventable, emphasising that these individuals were not unstable; they were erased.

A Father Brought His Son Back as an AI. And Let a Journalist Talk to Him

After losing his 17-year-old son in a school shooting, a grieving father created an AI clone to be interviewed by a reporter. The project raises haunting questions about grief, consent, and whether recreating the dead honors or distorts their memory.
By Jake Horton and Produced by Meiying Wu for BBC News

Algorithms in the Classroom: Built-in Bias Exposed
According to new research, AI tools used to guide student behaviour plans show racial bias. Names perceived as Black were met with harsher recommendations than those seen as white, prompting calls to pause the tools before they deepen inequality in schools.
By Norah Rami for ChalkBeat

One “Poisoned Document” Could Unlock Your Digital Life
Researchers have discovered a vulnerability in how ChatGPT connects to apps like Google Drive. A single crafted file could trick the AI into revealing sensitive data without the user ever opening it.
By Matt Burgess for Wired

🚨 A poisoned document is a file intentionally designed to trick an AI into revealing private data or bypassing safeguards when it processes the file.

When AI Health Advice Turns Deadly
A rare medical case of bromism was linked to a patient following ChatGPT-generated health advice. A 60-year-old man with no medical or psychiatric history arrived at the emergency room convinced his neighbor was poisoning him. He denied taking any medication or supplements, yet his lab results told a different story. Tests revealed severe electrolyte imbalances, with dangerously high chloride, extremely low phosphate, and an unusual negative anion gap, along with signs of both respiratory acidosis and metabolic alkalosis. Though his vital signs and physical exam appeared normal, the biochemical chaos led doctors to admit him for close monitoring and urgent electrolyte correction.

By Audrey Eichenberger et al., “A Case of Bromism Influenced by Use of Artificial Intelligence,” Clinical Cases

New South Wales Criminalises Sexual Deepfake Creation and Sharing

New South Wales has passed strict laws making it a crime to create or distribute explicit deepfake images, punishable by up to three years in prison. It’s one of the strongest legal responses yet to AI-driven abuse.
By NSW Government nsw.gov.au

Tucson Blocks Mega Data Centre After Public Outcry
The Tucson City Council has unanimously rejected plans for “Project Blue,” a massive AI data centre, after residents raised alarms over water use and environmental impact. It’s a rare win for community resistance against significant tech expansion.
By Yana Kunichoff for Arizona Luminaria

German Police Expand Predictive Surveillance With Palantir
Germany’s police forces are increasing their use of Palantir software, raising fears of “predictive policing” that could erode civil liberties. The CIA-linked company says it’s about safety, but critics see it as mass surveillance in disguise.
By Marcel Fürstenau for DW

Academic Integrity Under Threat From AI-Generated Text
A new study indicates that as many as one in five computer science papers now contain AI-generated text, a trend observed across numerous disciplines since the release of ChatGPT. This surge in machine-written content is prompting universities to re-evaluate the concepts of authorship and originality.
By Phie Jacobs for Science

People's AI Pulse
👶 These Babies Run Excavators, Host Podcasts & Hack Your Brain Chemistry

PixVerse released an AI-generated video of a baby driving an excavator, complete with a tiny hand gesture. Whether it’s meant to be the middle finger is up for debate, but one thing’s sure: AI still hasn’t mastered human hands, making it a handy little fact-checking clue.

👶🚜🍔 AI is serving up the impossible: a baby in an excavator, another “on shift” at McDonald’s complaining about diaper prices forcing him into early hard labor 😂, and newborns giving press interviews.

💖🤯 The internet’s forever formula for viral gold: cute + absurd = instant dopamine, and for a few seconds, we’re all in on the same joke.


AI’s Eyes, Ears, and Voice for an Unstandardized Human 

"I recognize the look on people's faces when they struggle to understand what I've said," Aubrie Lee, a brand manager at Google whose speech is affected by muscular dystrophy, shared on Google's blog some years ago. While researching this essay, I came across the line. The words were part of an announcement for a Google tool designed to help people with non-standard speech.

I wondered if machines could help people connect and communicate. Can technology de-standardise communication and our connectedness, turning what was once impossible into reality?

Our world has been designed for a standardized human for generations, unintentionally sidelining countless brilliant minds. According to a UNICEF report, nearly 240 million children with disabilities face numerous barriers, with about half never having the opportunity to attend school. Today, technology and AI are dismantling these barriers, one line of code at a time. This effort isn't about fixing people but changing the world's inability to recognize and include them.

AI to Empathize, Not Standardize

The promise of AI is not about creating one-size-fits-all solutions, but tools that become one-size-fits-one, adapting to the individual to unlock their full potential. This spirit of empowerment has been growing for years, long before the recent hype around generative AI, driven by a deep understanding of human needs.

The principle behind Microsoft's Seeing AI, developed by Saqib Shaikh, a blind engineer, focuses on creating a tool that meets the needs of visually impaired users. This app utilizes a phone's camera to narrate the world around the user, helping them read restaurant menus and describe the expressions on friends' faces. Rather than attempting to "fix" blindness, it offers a new way to experience the environment.
Google's Project Relate is designed to help individuals with speech difficulties. The app learns a user's unique way of speaking. Instead of simply rejecting what it interprets as an "error," the AI encourages the user to teach it, resulting in a personalized model that can better understand their speech.

Creativity can also be freed from physical limitations. Sarah Ezekiel, an internationally recognized artist diagnosed with motor neurone disease, lost the use of her hands but not her imagination. Today, she uses her eye as a paintbrush. With the help of an eye-gaze computer, she controls every stroke with her gaze, producing celebrated works of art. The AI is not the artist; it liberates the artist who has always existed within her.

Investment Attention

Despite these successes, a concerning shift is happening: investment attention skews toward consumer-facing AI for mass-market appeal, while bespoke, human-centred applications struggle to secure comparable resources.

Gathering special types of data, like unusual speech sounds or muscle signals, is being overlooked because of the focus on big models trained on general internet data. While these models are powerful, they risk creating a one-size-fits-all approach, potentially leaving behind many people who need real-life solutions.
The initial spirit of empowerment now struggles to remain relevant in an industry increasingly focused on scale rather than personalized solutions.

The numbers tell the disparity story: According to the 2025 Stanford AI Index Report, in 2024, U.S. private AI investment reached $109.1 billion, nearly 12 times China’s $9.3 billion and 24 times the U.K.’s $4.5 billion. Generative AI attracted $33.9 billion globally, an 18.7% increase from 2023. AI business adoption also accelerated sharply, with 78% of organisations reporting AI use in 2024, up from 55% the previous year.

In contrast, companies developing life-changing accessibility tools have received far less funding. For instance, Be My Eyes, which assists blind users in navigating their environment through AI, has raised $10.6 million. Yet the WHO reports that, globally, at least 2.2 billion people have a near or distance vision impairment. The need is stark: the 1.3 billion people with significant disabilities form 16% of the global population, a population larger than China's, yet the AI gold rush pursues different aspirations.

This disparity highlights an uncomfortable truth about our priorities or the industry’s priorities. We are channelling billions into creating more sophisticated chatbots, while the tools that could restore dignity, independence, and opportunities for millions remain chronically underfunded.
Artificial intelligence is not a force of nature; it is clay in our hands, and we choose how to shape it so that it serves us.

The Bigger Picture: Standardising The Human

Most tech that dominates the market does not solve real problems. It manufactures them, creates the illusion of need, hooks users, and then scales that dependency into profit. Take deepfakes as an example. What problem do they solve?
Deepfakes are no longer just a curiosity. They are a business attracting serious money. Investors have poured hundreds of millions into detection technologies, with single deals ranging from $16 million to over $100 million. The larger industry, covering both creation and detection, is valued in the billions today and is projected to climb into the tens of billions by the early 2030s.

The origins were almost casual. Deepfakes appeared because they could, not because anyone asked for them. They follow a script even as entertainment: the same kinds of jokes, the same flicker of unease, the same predictable strangeness. Even the fear of being deceived feels standardised. What began as a technical experiment has now grown into a societal problem. There was once talk of using deepfakes for accessibility, to give those who lack one a voice or presence. That promise is rarely examined and seldom delivered.

Accessibility tools focus on removing real barriers rather than creating endless upgrades. They aim to eliminate obstacles so completely that the tool itself may no longer be needed. Because of this, they often don't become widely popular or attract huge investments. They are designed to end exclusion, not to boost growth. In a market that values size, solving a problem so well that it disappears is seen as bad business.

We had a great invention, a discovery. But discovery is not an endpoint; it's a catalyst for progress. This discovery or creation of AI should be the clay in our hands, helping us solve real-world problems.
This is not a story about disability or gadgets. The goal of innovation is not how many people are feeding AI with their data or paying to subscribe to the product, but how much of their potential can remain beautifully, usefully unstandardised.

AI Mishears and Misunderstands, Again

Of course, this journey comes with challenges. An AI is only as fair as the data it learns from. Research indicates that speech recognition systems make nearly twice as many errors when transcribing the speech of Black individuals compared to white individuals. According to the Stanford Report, one study found an average word error rate of 35% for Black speakers versus 19% for white speakers. 
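Word error rate (WER) is the standard metric behind figures like these: the substitutions, insertions, and deletions needed to turn a system's transcript into the reference, divided by the reference word count. A minimal sketch, with hypothetical transcripts, shows how one misheard word in five becomes a 20% error rate:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed via word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i reference words
    # and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five: WER = 1/5 = 0.2
print(word_error_rate("please call me back today",
                      "please call me bag today"))  # 0.2
```

A gap of 35% versus 19% on this metric means a Black speaker in that study had to repeat or correct themselves nearly twice as often, for every five words spoken.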

Nothing here is a fairy tale, and the world still needs more ways to connect, not only in certain parts of the world, the West, but everywhere and in every language. This piece reminds us of what is possible through technology and where we need to focus our efforts.

We should not create tools just because we can, tools with no problem to solve (think deepfakes), but rather AI technologies that solve real-world issues and help everyone understand everyone. Connecting us.

Wafaa Albadry

The same spirit of ingenuity is addressing the digital divide. In Kenya, engineer Roy Allela was inspired by his deaf niece to create a glove that translates sign language into audible speech. While the device relies on a smartphone, a barrier for the 2.6 billion people still offline, its brilliance lies in its inspiration. It is a testament to how human-centred innovation can light the way forward, creating solutions that inspire us to bridge the remaining gaps in access and infrastructure.

 A Future Engineered for Everyone


The evidence is positive. Studies confirm that AI demonstrably improves learning and motivation for students with disabilities. But is the world’s will there to drive innovation in this direction?
AI is giving us the keys to unlock a more inclusive world. It is a powerful ally in our quest to ensure that every mind is seen, every voice is valued, and every person has the tools to share their unique brilliance with the world.
Wafaa Albadry
Founder | AI.Human.Story
Journalist • Cyberpsychology Specialist • AI Strategist

AI Knowledge

Did you know? 

International crime rings use AI to automate "Pig Butchering" scams, a fraud that blends long-term romance with fake crypto investments. AI-powered scripts help scammers build deep trust over months, allowing them to manipulate victims into investing their life savings. A scammer uses AI to build and scale an emotional bond with several people on dating apps or social media, then introduces a "secret" crypto platform. They persuade the victim to invest heavily and then disappear with the funds.

💔 Why it matters

  • Massive Financial Losses: The FBI's Internet Crime Complaint Center (IC3) reports billions of dollars lost each year to investment scams, with pig butchering being a primary cause.

  • AI-Powered Manipulation: Security researchers confirm these operations use AI-enhanced scripts for convincing, simultaneous conversations that create an authentic emotional connection.

  • Industrial Scale Operations: This is not the work of a single scammer. Reports from the UN and companies like Sophos detail large-scale, organised crime networks, often staffed by victims of human trafficking, operating these scams around the clock.

  • Exploits Emotional Vulnerability: The lengthy grooming process is designed to prey on loneliness, making the eventual financial betrayal even more devastating.

🐷 AI Pig Butchering = your heart 💔 and wallet 💰 are the targets.

Get involved

This newsletter aims to be created by those who see the human signal in AI noise. We plan to publish reader submissions.

We want your opinions, and we welcome your writing, academic papers, reflections, collaborations, and contributions to share or repost.