AI Apps Give Blind People Their First "Mirror"—But at What Emotional Cost?
- Jan 29

Imagine spending your entire life never knowing what you look like. Then one morning, a machine tells you your jaw is "too elongated," and your skin doesn't meet beauty standards. Welcome to the revolutionary—and complicated—world of AI mirrors for blind people.
For millions of blind individuals worldwide, artificial intelligence has unlocked something previously impossible: detailed visual feedback about their own appearance. Apps powered by advanced AI now describe faces, rate attractiveness, and offer styling advice—functioning as what one user calls a "textual mirror." But as this technology transforms how blind people see themselves, psychologists warn that the emotional consequences may be profound, complex, and potentially harmful.
The story unfolding is both empowering and unsettling, raising questions about beauty standards, body image, and whether giving blind people access to society's often-toxic visual judgments is ultimately helpful or harmful.
Milagros Costabel, who has been completely blind her entire life, now begins each day with a 20-minute skincare routine followed by something that would have seemed impossible just two years ago: she takes photos of herself and shares them with AI through an app called Be My Eyes.
The app serves as her mirror, analyzing her appearance and providing detailed feedback. On one recent morning, after uploading what she thought was a flattering photo, the AI delivered a crushing verdict: "Your skin is hydrated, but it definitely doesn't look like the almost perfect example of reflective skin, with non-existent pores as if it were glass, in beauty ads."
For the first time in years, Costabel's dissatisfaction with her appearance felt "crushingly real."
This experience reflects a dramatic shift happening across the blind community worldwide. Technology that once simply read text or identified objects now offers critical judgments about bodies, faces, and beauty—opening doors to information that was previously inaccessible while simultaneously exposing blind users to the same appearance pressures that plague sighted people.
The evolution happened remarkably fast. Karthik Mahadevan, chief executive of Envision—one of the first companies using AI specifically for blind users—recalls that when his company launched in 2017, the technology could only provide "basic descriptions, just a short sentence of two or three words."
Today, the transformation is stunning. At least four specialized apps now offer blind users the ability to upload photos and receive detailed appearance analysis. These applications can rate attractiveness based on conventional beauty standards, compare users to other people, and suggest specific changes to bodies and faces.
"Some use it for obvious things, like reading letters or shopping, but we were surprised by the number of customers who use it to do their makeup or coordinate their outfits," Mahadevan explains. "Often the first question they ask is how they look."
For many blind users, this capability feels liberating. Lucy Edwards, a 30-year-old blind content creator who rose to fame teaching blind people makeup techniques, describes the experience as transformative: "All our lives, blind people have had to grapple with the idea that seeing ourselves is impossible. Suddenly we have access to all this information about ourselves, about the world—it changes our lives."
Edwards, who lost her sight at 17, says: "I haven't had an opinion about my face for 12 years. Suddenly, I'm taking a photo, and I can ask AI to give me all the details, to give me a score out of 10. Although it's not the same as seeing, it's the closest I'm going to get for now."
But psychologists specializing in body image see warning signs in this technological revolution.
Helena Lewis-Smith, an applied health psychology researcher at the University of Bristol who focuses on body image, notes a concerning pattern: "We have seen that people who seek more feedback about their bodies, in all areas, have lower body image satisfaction. AI is opening up this possibility for blind people."
The problem extends beyond simple description. AI image generators have been documented perpetuating idealized Western body standards—predominantly because of the biased data sets used to train them. When blind people ask these systems for appearance feedback, they're receiving judgments filtered through algorithms that favor thin bodies, Eurocentric features, and beauty standards that even many sighted people find oppressive.
"We know that today a young person can upload a photo to AI that they think looks great and ask it to change one small thing," Lewis-Smith explains. "The AI's processing can return a photo with a lot of changes that make the person look totally different, implying that all of this is what they should change, and therefore that the way they look now is not good enough."
For blind users, this disconnect between self-perception and algorithmic judgment can be particularly destabilizing. They lack visual reference points to contextualize or challenge AI assessments, making them potentially more vulnerable to accepting these judgments as objective truth.
Costabel's experience illustrates this vulnerability. At 3 a.m. one morning, she found herself uploading more than five photos to ChatGPT, asking questions rooted in deep insecurity: "Do you think there is a traditionally beautiful person who looks like me?" and "Do you think my face is jarring if you saw it for the first time?"
The AI's response included suggestions like making her jaw "less elongated" to conform to cultural beauty standards—a concept Costabel, as someone who has never seen faces, struggled to fully comprehend.
"Suddenly, even without much context, I was receiving messages about beauty reflected by the media and the internet," she reflects. In the past, blind people were largely shielded from such appearance pressures. Now, AI delivers them in detail-rich descriptions.
Beyond bias concerns, the technology itself remains imperfect. Joaquín Valentinuzzi, a 20-year-old blind man, discovered this when using AI to select dating app profile photos. "Sometimes it changed my hair colour or described my expressions incorrectly, telling me I had a neutral expression when I was actually smiling," he says. "This kind of thing can make you feel insecure, especially if we trust these tools as a way to gain self-knowledge."
AI hallucinations—where systems generate false information presented as fact—pose particular risks when the technology serves as someone's only window into visual reality.
Some apps, such as Aira Explorer, address this by employing trained human agents who can verify AI descriptions on request. But most rely entirely on algorithmic assessment without human oversight.
For Ghana and other African nations, this technological development highlights existing digital divides. While blind communities in Western countries gain access to sophisticated AI tools, accessibility technology often arrives later—if at all—in African markets due to infrastructure limitations, cost barriers, and lack of localized development.
Ghana's National Council on Persons with Disabilities has advocated for increased accessibility technology adoption, but implementation remains uneven. The country's blind community, estimated at hundreds of thousands, largely relies on traditional assistive methods rather than cutting-edge AI applications.
Moreover, when AI beauty standards already skew heavily toward Western ideals, African users face additional layers of exclusion. Algorithms trained predominantly on European and American data sets may provide even less culturally relevant—or more psychologically damaging—feedback to African users.
Experts emphasize the need for research and regulation as this technology evolves. Meryl Alper, a researcher on media, body image, and disability at Northeastern University, notes: "All of this is in its infancy, and there really isn't any kind of massive research on the effect of these technologies, with their biases, errors, and imperfections, on the lives of blind people."
Yet for many blind users, despite the risks, the access feels worth it. Edwards summarizes the prevailing sentiment: "Suddenly, AI can describe every photo on the internet, and it can even tell me what I looked like next to my husband on my wedding day. We're going to take it as a positive thing because even though we don't see visual beauty in the same way that sighted people do, the more robots that describe photos to us, guide us, help us with shopping, the happier we'll be."
The challenge ahead involves maximizing AI's empowering potential while minimizing harm from biased beauty standards, inaccurate information, and the psychological risks of constant appearance evaluation.
As Costabel reflects: "For better or worse, the mirror is here, and we have to learn how to live with what it shows us."
The question for developers, regulators, and society is whether we can build mirrors that reflect reality accurately, embrace diverse beauty, and empower rather than diminish those who look into them—whether with eyes or through algorithms.