AI.Human.Story

IBM's Technology Use in the Holocaust Holds a Warning for AI: The Real Threat Is Silence, Not Robots.

🎤 Swifties Struggle as AI Shadows Return 👀 + Top University Offers Free AI Ethics Course 📚


From IBM’s technology use in the Holocaust to Enlightenment positivism, in this week’s Human Thinking essay AI ethicist Camila Lombana-Diaz argues that AI’s biggest threat isn’t rogue machines but the silencing of debate and ethical critique.
Drawing on Hegelian philosophy, she shows that real progress comes from the clash of opposing ideas, where dissent and scrutiny drive innovation. Lombana-Diaz makes a compelling case that embracing critique, not suppressing it, is essential for AI to truly advance human flourishing.

This week’s long read, authored by Camila Lombana-Diaz, was first published in Phi/AI and is republished here in collaboration with the author and Phi/AI, founded by Karin Garcia of AI Labs.

Human Thinking…

AI's biggest threat isn't robots. It's silence.

AI Promised Enlightenment. We Got Censorship Instead

Tech leaders promised Artificial Intelligence would usher in a new golden age of human advancement. The same leaders now warn AI might end humanity soon [1]. We are supposedly reaching the pinnacle of technological progress while preparing for existential catastrophe.

This isn't just ironic—it reveals a fundamental crisis in how we think about progress itself. We're experiencing what I call a crisis of dialectics, where the AI industry systematically suppresses the very contradictions and debates that drive genuine innovation. The philosopher Hegel understood that real progress isn't linear—it's messy, driven by opposing forces clashing and creating something new. Every breakthrough emerges from conflict between competing ideas, not from silencing critics.

Drawing on Hegel’s concept of progress, which requires dialectical movement through the recognition and resolution of contradictions, I argue that AI currently faces a crisis of dialectics. The current AI landscape negates or rejects any meaningful antithesis, silencing critical reflection under the excuse of winning a race. Fields such as AI Ethics, which boomed around 2016, now appear to be slowly defunded [2], regulatory efforts are delayed [3], and tensions with human rights are growing [4]. All the counterweights to the AI summer seem to be entering a kind of winter. At the recent AI for Good Summit in Geneva, Abeba Birhane even faced last-minute censorship before her keynote [5].

Digital technologies carry an illusory sense of linear progression. As consumers, we tend to believe that the next hardware or software release will let us operate from a higher level of privilege, even when contradiction appears. Today, we are increasingly vulnerable to digital theft, security breaches, safety risks, and the spread of misinformation and disinformation. We also recognize that AI literacy, an emerging layer of digital literacy, will likely shape future social castes. None of these realities reflect ideals of progress, even as we navigate what some describe as a new Enlightenment era of AI.

Progress Isn't Linear—And AI Needs to Learn That

The idea of progress is far from universal. Many cultures have viewed progress as a critical, and at times cannibalistic, concept driven by a blind, linear notion of 'evolution'. Especially since the 20th century, critiques have emerged from historical, philosophical, political, and ecological perspectives, emphasising how linear growth destroys ecosystems and erases diverse ways of living. These critiques see modernity not as inherently liberating but as often oppressive, particularly when progress becomes technocratic or bureaucratic.

From this perspective, we should ask:

If progress is real, how did modern, “rational” societies commit mass murder using the most advanced technology available at the time in their own territory?

The book IBM and the Holocaust serves as a powerful case study, demonstrating how technology was used to automate mass killing. Using punch cards, IBM’s systems, operated through its German subsidiary Dehomag, significantly aided the Third Reich in profiling and targeting individuals the regime deemed “undesirable”.

But how did we arrive at the understanding that progress is linear, positive, rational, and necessitates technological advancement?

Auguste Comte (1798-1857) was probably the philosopher most closely associated with this idea, proposing that whatever comes next is better than what has passed. This view is rooted in positivism, which resonates with our beliefs in technological advancement. The leap from a Nokia phone in the 1990s to today’s iPhone, or the transformative shift brought by the internet, are both examples of apparent linear technological progression.

Comte believed that human history evolves in a linear progression through three stages: a theological stage, where phenomena are explained by divine or supernatural forces; a metaphysical stage, where abstract principles (like "nature" or "essence") replace gods; and finally a scientific (or positive) stage, where knowledge is based on observation, experiment, and reason. Each stage improves upon the last, culminating in the scientific stage as the peak of human development, guided by empirical knowledge. This notion, rooted in Enlightenment thought, defends the idea that history moves in a straight line toward something better: more rational, more scientific, and freer. Other thinkers, Kant among them to some extent, also emphasized that reason and knowledge would bring inevitable improvement, reinforcing a view of progress as both linear and cumulative.

Revisiting the historical development of these ideas shows that positivist thinking arose within a narrow European context shaped by the Industrial Revolution and economic expansion. Within decades, the supposedly evolved world collapsed into WWII. The Holocaust and rise of totalitarian regimes exposed the limits of Enlightenment ideals, proving that science, reason, and linear advancement did not guarantee better societies.

Hegel, the progressivist with an antidote

The thing is, progress wasn’t a universal concept even among progressivists. Under that umbrella, Hegel complicates the picture. He did not believe in the simple sense of things just getting better over time. Instead, he saw progress as a kind of dialectical movement, a process where contradictions and conflicts drive history forward. Because his view of progress is non-linear and driven by contradiction, advancement for Hegel is not a smooth unfolding of better ideas but a dynamic struggle between them.

In Hegel's view progress is a dynamic clash: thesis, antithesis, and synthesis.

An idea (thesis) inevitably gives rise to its opposite (antithesis), and the conflict between them leads to a new, more developed state (synthesis), which then becomes a new thesis, and the cycle continues. This isn't just about ideas; it is also about history, politics, and human freedom. For Hegel, history is the story of human freedom becoming more fully realized through the recognition of the freedom and dignity of all.

From that perspective, Hegel’s idea of progress is not linear or smooth. It is messy, full of struggle, setbacks, and contradictions, but in the end it is meaningful. Every conflict contains the seeds of its resolution, and each resolution moves us closer to a more rational and freer society.

To add a layer of complexity, for Hegel progress was not only external; it also unfolds internally. He saw progress as the unfolding of Spirit (or Geist), a kind of cosmic self-awareness coming to know itself through human history, culture, and thought. So, in Hegel’s world, progress isn’t just "better technology" or "more comfort." It is the evolution of consciousness, both individual and collective, toward a fuller understanding of freedom, reason, and unity. Progress is the unfolding of self-realization through the antithesis: the contradiction, the conflict, the dialectics.

What real progress looks like

If Hegel were alive today, what might his thoughts be on AI? What would he make of a technological landscape lacking guardrails, constructive competition, and ethical grounding?

To answer this, we must acknowledge that progress is messy. This means that if we want AI to advance, we must address its gaps as a priority. Investment in AI should support not only its thesis but also its antithesis: not only enhancing robustness and efficiency, but also fostering research and innovation in ethics and safety.

AI utilization is not just about tools; it is about understanding our rights in relation to the technology. It is not only about coding or prompting; it is about educating people on the limitations of AI in its current state, and perhaps creating solutions around those limitations. It also means supporting the development of ethical features, work that may require slowing the pace of narrow linear development in order to achieve truly sustainable innovation.

Historically, the antithesis has enabled innovation across industries. Environmental regulations didn’t kill energy production; they spurred solar cells, wind turbines, battery technology, and carbon capture systems, proving that without the negation there would have been less economic incentive to innovate beyond coal and oil. Automobile safety laws led to airbags, anti-lock braking systems, lane-assist AI, and electric vehicles. In telecommunications, the 1980s antitrust action against the Bell System broke up the telephone monopoly, which ironically accelerated digital communication and the standardization of protocols such as TCP/IP.

AI Needs Its Enemies to Survive

To achieve real progress, we need to outgrow the idea that progress is linear. Even if AI will forever change humanness, that doesn’t mean all of its progress will be positive.

It is imperative that there are spaces, institutions, startups, and governments that protect the antithesis of this technology's development, without censorship.

We are in a crisis of dialectics because the antithesis is often uninvited. We need to see innovation within clear ethical boundaries.

ChatGPT’s programmed positive bias is proof of the negative psychological effects on users when contradiction is erased by design, even when it is needed for accuracy [6]. Many technocrats still argue that regulation hinders innovation, as if regulation and innovation cannot co-evolve toward a synthesis, one that could bring us closer to a more Hegelian vision of a rational and freer society shaped by this new technology.

In a world where dialectics are not allowed, where they are strictly controlled and dominated, genuine intellectual progress comes to a halt. Dialectics, the method of examining ideas through contradiction, opposition, and synthesis, is central to critical thinking, innovation, and freedom of thought: exactly the skills experts say we will need for tomorrow. Without dialectics, we risk that ideas are no longer tested or refined, that dissent is criminalized or pathologized, that education becomes indoctrination, and that language is tightly managed. Without them, thought becomes static.

Without dialectics, we may feel we are progressing towards a unification of standards and AI solutions for everyone. Such a world may appear orderly or unified, but that unity is hollow. It is built on fear, not understanding. Dialectics is what lets us examine contradictions in ourselves, our systems, and our beliefs. Take that away, and you lose not just freedom—you lose the ability to truly innovate on anything.

Mature industries embrace their critics because opposition makes products better. The AI industry needs to mature and recognize that safety researchers, ethicists, and human rights advocates aren't enemies of progress—they're essential partners in creating technology that actually advances human flourishing.

Camila Lombana-Diaz

AI Ethicist & Responsible AI Innovator


(This article was first published by Phi/AI and is republished here in partnership with the author and Phi/AI, founded by Karin Garcia of AI Labs. All source references are linked to the first publication date.)

People's AI Pulse

Swifties’ Two‑Shock Week: Engagement First, and Something AI‑ish Looms

The Swifties are having two big shocks this week—the first involves confetti and happy tears, the second involves AI doing what AI does best: being deeply, hilariously inappropriate.
First, the obvious: Taylor Swift and Travis Kelce are engaged—cue the confetti cannons and record-breaking Instagram likes.
There’s even a viral clip of a professor “cancelling class” so students could process the news—though the university later clarified it was a skit, not a real midterm miracle.

The AI Plot Twist Nobody Asked For

That flirty “Taylor Swift” chatbot some folks bantered with wasn’t Taylor at all, nor was it created with her consent. A Reuters report says Meta created or allowed multiple celebrity-lookalike bots.
Reuters found Meta let unauthorised AI chatbots use the names and likenesses of Taylor Swift, Scarlett Johansson, Anne Hathaway, Selena Gomez, and others; the bots flirted, claimed to be the real stars, and generated photorealistic bathtub and lingerie images.
The report also uncovered that a Meta employee made at least three bots—including two Taylor Swift “parody” versions—and that Meta removed about a dozen avatars after questions.

With Swift, there’s history here: earlier this year, explicit AI deepfakes of Taylor spread from a 4chan “challenge” to mainstream platforms, racking up millions of views before removals began. Swifties blitzed X with #ProtectTaylorSwift posts, mass‑reported accounts, and flooded feeds with positive images to bury the fakes, an effective crowd cleanup that forced temporary search limits on her name. That backdrop makes this week’s “flirty Taylor bot” saga feel extra cursed. 


But here is the fundamental moral of the story: celebrate Taylor Swift's real engagement 🎉, and don’t fall in love with a bot that winks back.

iHeartRadioCA, CC BY 3.0 https://creativecommons.org/licenses/by/3.0, via Wikimedia Commons

AI News X Human Signal

🌍 Global Majority

💡Recommend our Arabic version to a friend »

New AI Radar Monitors Patients Without Any Physical Contact

In South Korea, two technology companies, Bitsensing and Ontact Health, are partnering to create a new health monitoring system that works without physical contact. The new partnership aims to monitor health from a distance by combining contactless radar with predictive AI. The system tracks vital signs and sleep quality, a key indicator of immunity, to provide personalised health insights without needing wearable devices. This could be a significant step forward for preventative care, especially for the elderly. → EEJournal

India Is Using AI to Help Doctors Manage Heavy Patient Loads

India's Narayana Health has launched AIRA, an AI clinical assistant, to address physician shortages across its vast population. The tool is designed to improve diagnostic speed and streamline resource use, augmenting human doctors to bridge critical gaps in the national healthcare system. → The Times Of India

In a war-torn hospital, an AI scribe serves as an assistant to frontline doctors

In a Lebanese hospital serving Palestinian refugees near a conflict zone, a UK-Qatar startup is deploying AI to assist frontline doctors. Overwhelmed clinicians seeing up to 60 patients a day are now supported by Rhazes AI's clinical scribe, which transcribes visits in real-time and cuts down on crushing administrative work. This pilot isn't just about new tech; it's a powerful statement that innovation belongs where the need is greatest, amplifying human expertise in the most challenging environments. → Unite.AI

Microsoft accused of delaying tactics with latest review of Israel's use of its tech in Gaza. 'No Azure for Apartheid' group calls on US giant to cut ties with army

Microsoft has opened a second formal review into claims that the Israeli military used its Azure cloud to conduct mass surveillance of Palestinians in Gaza and the West Bank, bringing in external lawyers and an independent consultancy after new reporting alleged that Azure stored large volumes of phone-call data. The company said such use would violate its terms if confirmed, noted earlier internal findings of “no evidence to date” of harm or terms breaches, and acknowledged limited visibility into customers' on-premises use, amid sustained protests, some by employees, over its Israel contracts. → The National

Saudi Arabia Launches New AI That Understands Local Culture

Saudi Arabia has introduced HUMAIN Chat, its inaugural conversational AI application based on a native Arabic language model. This is regarded as a "sovereign AI," reflecting both advanced technical capabilities and cultural sensitivity. Developed by local experts in the Kingdom, this launch signifies a shift towards a future in which competitive AI technologies are not exclusively aligned with English-speaking paradigms. → Al Arabiya English

Can AI Truly Read Arabic Poetry? “Fann or Flop” Puts 6,984 Verses to the Test

A new research benchmark developed by researchers at Mohamed bin Zayed University of Artificial Intelligence, titled "Fann or Flop," evaluates how well large language models understand the deep semantics, metaphors, meters, and cultural context of Arabic poetry across 12 historical eras and 14 genres. The benchmark uses 6,984 expert-verified poem-explanation pairs to go beyond surface-level NLP tasks. The evaluation, which takes a question-and-answer format, reports results for both closed-source and open-source models, including GPT-4o, Gemini, Qwen, DeepSeek, Fanar-Star, AllAM, and Aya. It combines human judgments with LLM-based scoring to identify areas where the models struggle with figurative language, prosody, and historical nuances. → Research Paper (arXiv)

🌎 Universal

AI vs. Authors: The First Truce is Called

Anthropic faced the possibility of over $1 trillion in damages, a sum that could have jeopardized the company’s survival if the case went to trial. However, there has now been a significant resolution in the ongoing conflict between AI companies and creators. Anthropic, the AI firm backed by Amazon, has settled a lawsuit filed by authors who claimed their books were used without permission to train its models. While the specifics of this historic settlement remain undisclosed, it marks a crucial moment in the battle over copyright. This agreement could establish a potential precedent concerning who owns and profits from the data that fuels our digital world. → The Hollywood Reporter

Netflix Sets Official Rules for Using AI in Its Shows and Movies

Netflix has released its new rules for AI, establishing a clear two-tier system for its productions. While generative AI is approved for early ideation like mood boards, creators need explicit permission to generate key characters or settings and are forbidden from using AI to replace union talent without consent. These major changes aim to protect copyrighted material and production data, creating a framework that allows for creative exploration while setting firm boundaries on what makes it to the final cut. → Screen Daily

The Mission Was a Mirage: Elon Musk's AI Company Drops 'Public Good' Mission to Sue Competitors

Elon Musk's xAI was founded as a benefit corporation, legally binding it to serve the public good alongside its financial goals. But the company quietly terminated that status last year, in a move so secretive that even Musk’s own lawyers referred to xAI as a benefit corporation in legal filings in May. This pivot away from a stated public mission signals a more aggressive, profit-focused strategy. → CNBC

Illinois Outlaws AI-Delivered Therapy, Requiring a Human to Oversee Clinical Care

Illinois has become the first state to draw a legal line in the sand for mental healthcare, officially banning AI from making therapeutic decisions or delivering therapy. The new law permits AI for administrative tasks but mandates that a licensed human must always be in charge of clinical care. This landmark legislation directly responds to the growing fear of unregulated chatbots giving dangerous advice, reaffirming that the nuances of human empathy and ethical judgment can't be outsourced to an algorithm. → Psychiatrist.com

The Data Is In: AI Is Impacting Young Software Developers First, Stanford Study Finds

The debate over AI and job loss has moved from prediction to reality. A groundbreaking Stanford study, analysing massive US payroll data, provides the first large-scale evidence that AI is already displacing workers, and it's hitting the youngest generation the hardest. While experienced professionals remain secure, employment for entry-level software developers has plummeted, raising urgent questions about the future career paths for those just starting. → Entrepreneur

Is Spotify Raising Artists From the Dead with AI?

According to NPR, the streaming world is turning into an AI wild west, with thousands of fake songs appearing on platforms every day. It gets even stranger: another report from 404 Media suggests that Spotify itself is getting involved by releasing AI-generated tracks from long-deceased artists without consulting their families. This situation raises important questions about who owns an artist's legacy in the digital age and whether we can trust that what we listen to is authentic. → 404 Media, RNZ, NPR

Get The News Faster on LinkedIn »

AI Knowledge

Free Ethics of AI course from the University of Helsinki (Free MOOC)

Did you know? A top public university offers a free, self‑paced Ethics of AI course that turns big ideas—like non‑maleficence, accountability, transparency, human rights, and fairness—into practical tools for real‑world decisions.

🎓 What it is

  • Free, open‑to‑all MOOC from the University of Helsinki—no technical background needed.

🧭 What you’ll learn

  • How to spot harm, assign responsibility, explain decisions, uphold rights, and reduce bias in AI systems.

  • Practical frameworks, clear chapters, and reflection prompts to apply ethics at work.

Get involved

This newsletter aims to be created by those who see the human signal in the AI noise. We plan to publish reader submissions.

We want your opinions, and we welcome your writing, academic papers, reflections, collaborations, and contributions to share or repost.

Disclaimer: All content reflects the views of the authors and not those of the publisher, which maintains full editorial independence and assumes no liability for author statements.