
Human Thinking: The Algorithm's Beauty Copy Machine

Beauty is NOT in the eye of the beholder anymore

Beauty is in the eye of the beholder, or so we tell ourselves, clinging to this ancient wisdom like a life raft in a world awash with judgment. Sometimes, when I say this sentence, I mean it as resistance to a world where we sit near each other but don't honestly see each other, a way of honouring the human need to be seen, of reminding myself and others that being seen by another human holds a beauty of its own.

But what happens when the beholder is no longer human? When the eye that evaluates your worth is made of silicon and trained on someone else's idea of perfection?

I used to scroll past those beauty enhancement ads with practised indifference, the ones promising to "fix" what I didn't know was broken. The algorithm knows better, though. It floods women's feeds with these promises of transformation, understanding that we are the market most susceptible to the whisper that we are not enough. Each notification is a tiny needle, a call to change something. 

Then this week, a perspective paper from surgeons in Kuwait crossed my desk, and what they revealed stopped me cold.

The algorithm takes the magnificent chaos of human diversity, runs it through a standardisation protocol designed in Silicon Valley, and emerges with a template that looks nothing like the majority of the world's faces. That template has found its way onto surgical tables, moving from our thoughts about beauty into our flesh and blood.

Though those wounds run deep enough, this is not just about vanity or self-esteem. This is about the quiet violence of our ideas of beauty being erased in plain sight, replaced by an AI's standard recommendations to be “perfect,” and of having our faces declared inadequate by a machine that has never known the weight of history, never felt the pull of belonging, never understood that beauty is not a formula but a story that everyone tells in their own way.

This week’s edition is not just about artificial intelligence in the aesthetic industry but about what it reveals about us: how quickly we surrender our faces to the judgment of code, how readily we accept that a machine might know better than our deepest human intuition, and how easily we believe that the code holds the antidote that will make us worthy of being seen.

Wafaa Albadry

Human Thinking…

The Algorithm's Beauty Copy Machine: Your Face, Their Template

Your facial features, eyes, nose, and mouth, are reduced to data points, strings of 0s and 1s. The machine knows nothing of your individual traits, culture, or ancestry; it only processes the data it receives. Its sterile quest to reflect perceived beauty may offer you an attractive standard visage that doesn't truly belong to you. The machine doesn’t create; it merely copies, presenting an image severed from its rightful owner.
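
To make that “copy machine” concrete, here is a minimal Python sketch of the idea. Everything in it, the four landmark ratios, the single template, the distance measure, is invented for illustration; a real aesthetic AI extracts hundreds of measurements from a photograph, but the logic is the same: your face becomes a vector of numbers, and “beauty” becomes how little that vector deviates from someone else's reference.

```python
import numpy as np

# Purely illustrative: a face reduced to four landmark ratios
# (eye spacing, nose width, lip fullness, jaw angle), scaled to 0-1.
# Real systems extract hundreds of such measurements from photographs.
face = np.array([0.46, 0.38, 0.52, 0.61])

# A single "ideal" template, the kind a model learns when its training
# data over-represents one narrow, Western-centric notion of beauty.
template = np.array([0.50, 0.33, 0.55, 0.58])

# The machine knows nothing about culture or ancestry; it only measures
# how far your numbers sit from its template.
deviation = float(np.linalg.norm(face - template))

# Whatever contributes most to that distance is what the tool will later
# label as a "flaw" to be corrected.
gaps = np.abs(face - template)
worst_first = np.argsort(gaps)[::-1]

print(f"distance from template: {deviation:.3f}")
print("features flagged for 'improvement', worst first:", worst_first)
```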

When it comes to defining beauty, AI often functions like a copy machine, delivering a reflection that lacks authenticity and uniqueness. This raises a pressing question: can the human sense of beauty survive in the future?

The evidence is concerning. In 2016, an AI named Beauty.AI was tasked with judging a global beauty contest with over 6,000 participants from 100 countries. Alarmingly, 36 out of 44 winners were white, despite the diverse international group of contestants. The algorithm was trained on a narrow, Western-centric definition of beauty, and its results reflected this bias. Unfortunately, these flawed algorithms are not only present in beauty applications on platforms like Instagram but have also found their way into our personal lives, influencing decisions in surgical procedures that reshape faces in accordance with these biased standards. 

Is it possible that the very AI bias that struggled to assess beauty is now recommending surgical changes?

A perspective paper from the MENA region says yes…

A perspective paper published in Clinical, Cosmetic and Investigational Dermatology by Makhseed et al. analyses how Western-centric datasets are embedded in AI systems and examines the clinical consequences when these systems influence cosmetic surgery in the MENA region. This paper provides a comprehensive narrative synthesis connecting previous literature, case studies, and documented algorithmic failures. It demonstrates that the Beauty.AI scandal and foundational work on gender and racial bias in facial analysis by researchers such as Joy Buolamwini and Timnit Gebru are not isolated incidents but rather part of a larger, more insidious pattern.

Consider the scale: the cosmetic surgery industry is booming in the Middle East and North Africa. A report from Grand View Research on the Middle East and Africa aesthetic surgery market projects it to reach approximately $4.45 billion by 2030. The International Society of Aesthetic Plastic Surgery (ISAPS) notes that over 264,000 cosmetic operations were performed in Iran in 2023, with rhinoplasty (nose job) being the most common. 

This "copy machine" doesn’t merely reproduce images; it creates physical imitations of beauty. Flawed algorithms are reshaping cultural identities, one procedure at a time. The perspective paper highlights how AI systems introduce Western beauty standards into operating rooms through biased recommendations.

This represents one of the most insidious forms of bias—not one that outright excludes you, but one that includes you only after you have been reshaped in someone else’s image.

The authors propose a framework for creating more culturally sensitive alternatives, identifying the underlying issues and reverse-engineering the process to show how biased training data flows from a server in Silicon Valley to treatment recommendations that can alter appearance and, consequently, identity. 

It presents a technical roadmap for developing aesthetic AI systems that prioritise cultural competency from the outset, rather than treating it as an afterthought. The authors propose several solutions to promote culturally responsive AI in the MENA region. First, they recommend mandating that all datasets used for training aesthetic AI include at least 20% representation from MENA populations, alongside implementing real-time monitoring to ensure compliance. Second, they advocate for requiring cultural competency training for all healthcare professionals who utilise AI tools and establishing systems to gather patient feedback on cultural sensitivity.

Lastly, they call for regulations that necessitate cultural impact assessments for medical AI systems before their use is approved, ensuring that these technologies effectively respond to the cultural contexts in which they operate.
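
To give a rough sense of what that first recommendation might look like in practice, here is a small Python sketch of a dataset audit that enforces a 20% MENA representation floor and could be re-run whenever new training images arrive. The record format, the region labels, and the function names are assumptions made for this illustration; the authors propose the policy, not this code.

```python
from collections import Counter

# Illustrative audit against the paper's first recommendation: at least 20%
# of training samples from MENA populations, monitored continuously rather
# than checked once at release. The data schema here is an assumption.
MENA_SHARE_FLOOR = 0.20

def mena_share(records: list[dict]) -> float:
    """Fraction of training records labelled as coming from MENA populations."""
    regions = Counter(r.get("region", "unknown") for r in records)
    return regions["MENA"] / max(len(records), 1)

def audit(records: list[dict]) -> bool:
    share = mena_share(records)
    passed = share >= MENA_SHARE_FLOOR
    print(f"MENA share: {share:.1%} (floor {MENA_SHARE_FLOOR:.0%}) -> "
          f"{'PASS' if passed else 'FAIL'}")
    return passed

# In a real-time monitoring setup, audit() would run every time new images
# are added to the training pool, flagging drift below the floor.
audit([
    {"region": "MENA"}, {"region": "Europe"}, {"region": "MENA"},
    {"region": "North America"}, {"region": "East Asia"},
])
```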

A final word from here…

Next time an AI application tells you what's 'wrong' with your face, remember: it's not seeing you, it's comparing your face’s data points to someone else's template. 

Wafaa Albadry
Founder | AI.Human.Story
Journalist • Cyberpsychology Specialist • AI Strategist

Glossary: A perspective paper is a scientific article that presents a viewpoint or conceptual framework. It typically does not present new experimental data or specific, quantified policy recommendations.

Get involved

This newsletter aims to be shaped by those who see the human signal in the AI noise. We plan to publish reader submissions.

We want your opinion, and we welcome your writing, academic papers, reflections, collaborations, or contributions to share or repost.

Disclaimer: All content reflects the views of the authors and not those of the publisher, which maintains full editorial independence and assumes no liability for author statements.