Your Privacy or Children's Safety? Why That's the Wrong Question
Understanding the EU "Chat Control" Proposal

Edition #6
As humans, we build things, then watch what we've built get twisted into something unimaginable. Right now, generative AI is being weaponised in the darkest corners of the internet, predators turning cutting-edge tools into instruments of harm against children. The numbers are staggering: a 1,325% surge in AI-generated child sexual abuse material from 2023 to 2024.
The world wants to stop it. But the cure being proposed is not what people expected: watch everyone.
The EU's "Chat Control" proposal would force encrypted messaging platforms to scan your private messages, URLs, pictures, and videos, before they're even encrypted. The promise: catch predators. The price: scan everyone. To find the criminals hiding among us, they want to treat us all like potential criminals.
The vote scheduled for today, 14 October, was cancelled because a coalition of EU governments, led by Germany, refused to support the proposal. But it's not dead. It's evolving, waiting for the next vote, the next crisis, the next moment we're too exhausted to fight back.
This week on AI Human Story, we break down the impossible choice being framed as inevitable: mass surveillance or child safety.
Why are the laws being made right now heading in the wrong direction, and what happens when the laws meant to protect us become the very thing we need protection from?
Every innovation carries a choice, and every law must seek justice, not convenience. The most dangerous laws make us choose between two rights instead of balancing them, and we're running out of time to make the right choice.
How Canva, Perplexity and Notion turn feedback chaos into actionable customer intelligence
Support tickets, reviews, and survey responses pile up faster than you can read.
Enterpret unifies all feedback, auto-tags themes, and ties insights to revenue, CSAT, and NPS, helping product teams find high-impact opportunities.
→ Canva: created VoC dashboards that aligned all teams on top issues.
→ Perplexity: set up an AI agent that caught revenue‑impacting issues, cutting diagnosis time by hours.
→ Notion: generated monthly user insights reports 70% faster.
Stop manually tagging feedback in spreadsheets. Keep all customer interactions in one hub and turn them into clear priorities that drive roadmap, retention, and revenue.
Human Thinking…
The AI Surveillance Machine Europe Almost Built - And How Germany Stopped It (For Now)
Imagine opening your phone tomorrow morning and finding a notification: "AI has flagged your message for review." You sent a beach photo of your kids to grandma, and the AI thought it looked suspicious.
This nearly became reality in Europe.
The Plan: AI That Reads Everything
For three years, the EU has been developing what critics call "the most invasive surveillance proposal in democratic history." The official name - Child Sexual Abuse Regulation (CSAR) - sounds noble. The stated mission: use AI to catch child predators before they harm kids.
But here's what it actually means for every ordinary citizen: every message you send on WhatsApp, every photo you upload to cloud storage, every email you write, all of it scanned by AI algorithms before it even gets encrypted. Not just for suspected criminals. Everyone. All the time.
The technology relies on two types of AI:
First, digital fingerprinting to match known abuse images. This part works reasonably well.
Second, and here's where it gets dystopian: AI classifiers that try to identify new abuse material and detect "grooming behaviour" in text conversations. These machine learning models would analyse your words, assess your photos, and decide whether your content is "permissible." That means the systems behind them can access, and may store, your private messages and media, and who knows, maybe even train future models on them.
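To make the difference between the two approaches concrete, here is a minimal Python sketch. Everything in it, the fingerprint set, the stub classifier, the 0.88 threshold, is a hypothetical stand-in; real deployments use perceptual hashes such as PhotoDNA and proprietary classifiers, not this toy code.

```python
import hashlib

# Illustrative sketch only; all names and values are hypothetical stand-ins.

# Type 1: fingerprint matching against a database of already-verified images.
# Real systems use perceptual hashes (e.g. PhotoDNA) that tolerate resizing
# and re-compression; plain SHA-256 is used here just to keep the sketch short.
KNOWN_FINGERPRINTS: set[str] = set()  # would hold hashes of verified material

def matches_known_material(image_bytes: bytes) -> bool:
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_FINGERPRINTS

# Type 2: a machine-learning classifier that guesses whether brand-new
# content is abusive or "grooming". This stub stands in for a trained model
# that would output a probability between 0 and 1.
def classifier_score(message_text: str) -> float:
    return 0.5  # placeholder; a real model would compute this from the content

def scan_message(message_text: str, threshold: float = 0.88) -> bool:
    # Scores above the threshold trigger a report: a statistical guess about
    # content no human has ever reviewed.
    return classifier_score(message_text) >= threshold
```

The crucial difference: the first approach can only recognise copies of material humans have already verified, while the second passes judgment on content it has never seen before, which is exactly where the accuracy problems begin.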
The AI Accuracy Disaster
Nobody wanted to discuss this problem: even the "reasonably well" working systems produce staggering numbers of false accusations.
A Facebook study of 150 accounts reported to authorities for alleged CSAM, conducted by Meta Research in consultation with the National Centre for Missing and Exploited Children, found that more than 75% were "non-malicious": people sharing images for reasons "such as outrage or poor humour," not out of sexual interest in children.
LinkedIn's transparency report revealed similar problems: of 75 files flagged by its PhotoDNA hash-matching system in the second half of 2021, only 31 were confirmed as actual CSAM upon manual review, a roughly 59% false positive rate, even for known-content detection using digital fingerprints, supposedly the most accurate method available.
And that's just for detecting known material. The European Parliament's own impact assessment concluded that technologies to detect new abuse material and grooming behaviour "are of substantially lower accuracy."
The EU Commission claims 88% accuracy for grooming detection, which sounds good until you consider the scale and what the companies themselves say. Microsoft warned that the claimed 88% figure is unreliable, based on a single model evaluated only in English (see annexe 9).
As over 500 cryptographers and security researchers documented in an open letter: "Even if AI scanning were 99.5% effective at identifying abuse, it would lead to billions of wrong identifications every day", given the scale of messages sent across Europe.
They stated bluntly: "There is no known machine-learning algorithm that can identify illegal images without making large numbers of mistakes."
With billions of messages sent daily in Europe, even a 3% error rate translates into tens of millions of false flags every single day.
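The arithmetic is easy to check. Here is the back-of-the-envelope version in Python; the one-billion-messages-per-day volume is an assumed round number for illustration, not a figure from the proposal:

```python
# Base-rate arithmetic behind the cryptographers' warning.
messages_per_day = 1_000_000_000  # assumption: ~1 billion messages scanned daily

for accuracy in (0.995, 0.97):    # a 99.5%- and a 97%-accurate scanner
    error_rate = 1 - accuracy
    wrong_per_day = messages_per_day * error_rate
    print(f"{accuracy:.1%} accurate -> {wrong_per_day:,.0f} false flags/day, "
          f"{wrong_per_day * 30:,.0f}/month")

# 99.5% accurate -> 5,000,000 false flags/day, 150,000,000/month
# 97.0% accurate -> 30,000,000 false flags/day, 900,000,000/month
```

Every one of those false flags is a private photo or conversation pulled out from behind encryption and put in front of a reviewer.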
That vacation photo of your kid at the beach? It could be flagged. Your dermatology photos? Flagged. Teenagers sexting each other, or adults who simply look young in photos? Flagged.
Signal Draws a Line in the Sand
On October 3, 2025, just days before the scheduled vote, Meredith Whittaker, President of the Signal Foundation, issued a clear ultimatum.
"If we were given a choice between building a surveillance machine into Signal or leaving the market, we would leave the market."
No hedging. No compromise. Signal would rather walk away from Germany's 83.5 million people than become a government spy tool.
Whittaker's argument was surgical: you cannot build a backdoor that only lets the "good guys" in. Once you create infrastructure to scan everyone's messages, hackers and hostile governments will exploit it. Even intelligence agencies, not exactly champions of privacy, agreed this would be "catastrophic for national security."
Then came the gut punch. Speaking to Germany specifically, Whittaker wrote: "We cannot let history repeat itself, this time with bigger databases and much, much more sensitive data."
She didn't need to name the Stasi. Every German over 40 knows what mass surveillance looks like when it goes wrong.
How It All Unfolded: A Timeline
October 3, 2025: Signal Foundation issues an open letter warning Germany against supporting Chat Control, threatening to leave the EU market rather than implement surveillance.
October 7-8, 2025: Germany announces it will not support the proposal. Federal Justice Minister Stefanie Hubig declares: "Unprovoked chat monitoring must be taboo in a constitutional state."
October 8, 2025: COREPER II (Committee of Permanent Representatives) discusses the proposed regulation. Without German support, the proposal lacks the majority needed.
October 9, 2025: The German Bundestag holds a debate on Chat Control during "Aktuelle Stunde" (current affairs session), solidifying Germany's opposition.
October 14, 2025: The vote originally scheduled for adoption by EU interior ministers is quietly removed from the agenda. As ORF reported:
"The vote originally scheduled for 14 October will not take place because there is no majority for the proposal."
The Vote That Wasn't
Germany represents 19% of the EU's population, enough to anchor a "blocking minority" in the EU Council, which requires at least four member states representing more than 35% of the EU's population. Without Germany's support, the proposal was dead on arrival.
Jens Spahn, Chairman of the CDU/CSU parliamentary group in the Bundestag, clarified the stakes: "This would be like opening all letters as a precautionary measure to see if there is anything illegal in them."
The vote that was supposed to happen today never happened.
But the Fight Isn't Over
Here's the thing about these proposals: they never really die. They just get reworded and resubmitted.
According to an update from Patrick Breyer, a digital rights activist and former Member of the European Parliament, the EU Commission is likely to propose extending "Chat Control 1.0", the current regulation that permits (but doesn't require) providers to scan messages. Breyer warns: "An extension of this indiscriminate bulk scanning regime is not acceptable. Scanning under this regulation needs to be targeted and limited to suspects where requested by a judicial authority."
EU governments are expected to try again when interior ministers meet in mid-December 2025. Breyer warned: "The proponents of Chat Control will use every trick in the book and will not give up easily."
Breyer celebrated the victory but remained vigilant: "Without the tireless resistance from citizens, scientists, and organisations, EU governments would have passed a totalitarian mass surveillance law."
Expect new language, assurances about "safeguards," and maybe the same mass surveillance with a friendlier name.
What This Means for AI's Future
This isn't just a privacy story. It's a story about what happens when we deploy imperfect AI systems to monitor entire populations, and who pays the price when those systems make mistakes at scale.
And here's the darkest part: once you build this infrastructure, it's permanent.
Today, it scans for child abuse. Tomorrow? Political dissent? "Misinformation"? Religious content? The technology has no agency; it scans whatever governments, or whoever seizes control of the backdoor, instruct it to scan.
What Happens Next
Mid-December's discussions will reveal whether Germany's opposition was a principled stance or a temporary political calculation. Privacy advocates are mobilising for round two, and tech companies are watching nervously.
The question isn't whether AI can scan our messages. It obviously can.
The question is whether we want to live in a society where it does.
The Uncomfortable Truth
Nobody wants child abuse imagery spreading online. The goal is legitimate. The suffering is real.
However, the proposed solution, training imperfect AI to read everyone's private messages and make judgment calls about "permissible content," is a cure worse than the disease.
It's a security threat that creates massive new vulnerabilities while generating millions of false accusations and catching very few actual criminals.
It is a mass surveillance scheme that would watch everyone to catch dozens of criminals. That's not justice. That's paranoia with a badge.
Wafaa Albadry
Founder | AI.Human.Story
Journalist • Cyberpsychology Specialist • AI Strategist
Get involved
This newsletter aims to be created by those who see the human signal in the AI noise, and we plan to publish reader submissions.
We want to hear from you: share your opinions, writing, academic papers, reflections, collaboration ideas, or other contributions for us to publish or repost.
Disclaimer: All content reflects the views of the authors and not those of the publisher, which maintains full editorial independence and assumes no liability for author statements.