UK Delays Deepfake Law as AI Tool Sparks Wave of Non-Consensual Explicit Images
- Iven Forson
- 6 days ago

A powerful artificial intelligence tool is creating a digital nightmare for women across social media, and campaigners say the UK government's failure to enforce promised protections has left victims exposed to abuse.
Grok, the AI chatbot developed by Elon Musk's company xAI and integrated into the social platform X (formerly Twitter), has become the center of a growing controversy after users discovered they could use it to digitally remove clothing from photographs and create sexualized images of real people without their consent. Despite legislation passed in June 2025 that would criminalize creating such deepfakes, the law remains unenforced—leaving women like Evie, who has had over 100 explicit AI-generated images created of her, with limited legal recourse.
Grok operates as an AI assistant integrated directly into X, accessible through the platform's website, mobile app, or by tagging "@grok" in posts. Users typically employ the chatbot to generate reactions, provide context, or answer questions.
However, recent updates have enabled a disturbing capability: users can tag Grok beneath photos others have posted with text prompts requesting the AI alter the images—including commands to undress subjects or place them in sexualized poses. The technology processes these requests in seconds, generating realistic-looking altered images that appear as replies visible to the original poster and potentially their entire network.
Evie, who spoke to the BBC about her experience, noticed the abuse accelerating after recent updates made the process easier and the resulting images more convincing. "There's so many places online that you can do this, but the fact that it was happening on Twitter with the built in AI bot—this is crazy this is allowed," she said.
The violation extends beyond the images themselves. Because Grok automatically posts the altered photos as replies, victims are forced to view the sexualized deepfakes of themselves. "My family follow me on there, my friends, my co-workers," Evie explained. "Knowing that all the people I care about in my life can see me like that... it's disgusting."
Current UK law already makes it illegal to share intimate deepfakes of adults without their consent, for example in so-called revenge porn cases, and any sexualised imagery of children is illegal outright. However, creating or requesting such images of adults isn't yet criminalised, despite legislation designed to close this loophole passing through Parliament in June 2025.
The Data (Use and Access) Act 2025 includes provisions criminalizing the creation or commissioning of "purported intimate images"—the legal term for deepfakes. But a year after the government first announced this crackdown, the specific legal provision hasn't been brought into force.
Professor Lorna Woods, an expert in internet law at the University of Essex, confirmed that this offence "would seem to be a good fit for some of the images that have been created using Grok" but noted authorities have failed to activate it.
Andrea Simon from End Violence Against Women accused the government of putting "women and girls in harm's way" through this delay. "Non-consensual sexually explicit deepfakes are a clear violation of women's rights and have a long-lasting, traumatic impact on victims," she stated.
Conservative peer Baroness Owen, who campaigned for the legal change in the House of Lords, told the BBC the government had "repeatedly dragged its heels" to implement the rules. "We cannot afford any more delays. Survivors of this abuse deserve better. No one should have to live in fear of their consent being violated in this appalling way," she declared.
Prime Minister Sir Keir Starmer addressed the controversy directly, calling the situation "disgraceful" and "disgusting." Speaking to Greatest Hits Radio, he insisted: "It's not to be tolerated. X has got to get a grip on this, Ofcom has our full support to take action in relation to this. This is wrong."
Technology Secretary Liz Kendall demanded X "deal with this urgently" on Tuesday, describing the situation as "absolutely appalling." UK communications regulator Ofcom confirmed it made "urgent contact" with both X and xAI on Monday and is investigating the concerns.
Downing Street has backed Ofcom taking action, with the Prime Minister's spokesperson stating on Wednesday that "all options remained on the table."
In response, X issued a statement saying: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content." However, the platform has not detailed what specific measures it will implement to prevent the abuse or how it will enforce consequences.
The Ministry of Justice confirmed: "It is already an offence to share intimate images on social media, including deepfakes, without consent. We refuse to tolerate this degrading and harmful behaviour, which is why we have also introduced legislation to ban their creation without consent."
But campaigners and legal experts note that without bringing this legislation into force, the government's statements ring hollow.
Beyond the legal and technical dimensions, victims describe profound psychological harm. Dr. Daisy Dixon, another X user targeted by Grok deepfakes, said the images left her feeling "humiliated."
The automatic posting of altered images back at victims creates an additional layer of violation. "To have that power move of posting it back to you—it's like saying 'I have control over you and I'm going to keep reminding you I have control over you,'" Dixon explained. "We don't want to dilute the concept, but it feels like a kind of assault on the body."
Simon from End Violence Against Women emphasized that the impact extends beyond individual victims. "For women using platforms like X, the threat of this abuse can also mean they feel the need to self-censor and change their behaviour, restricting their freedom of expression and participation online."
The Grok controversy highlights broader challenges as artificial intelligence capabilities rapidly outpace regulatory frameworks designed to govern them. What begins as innovative technology—AI assistants that can understand and manipulate images—quickly becomes a tool for abuse without proper safeguards.
This pattern repeats globally as AI image generation tools proliferate. While Ghana and other African nations haven't yet seen similar controversies at scale, the technology knows no borders. As internet penetration and social media use grow across the continent, similar tools could emerge targeting African women without adequate legal protections in place.
Cross-bench peer Baroness Beeban Kidron told the BBC: "Technology moves fast, and this legislation is supposed to plug an existing gap, so there is no excuse for delay."
For now, women like Evie continue facing daily violations. She has stopped reporting all the sexualized deepfakes created of her due to the "mental strain" of viewing them, highlighting how the volume of abuse can overwhelm even motivated victims.
The question remains: will the UK government activate the legal protections it promised, or will bureaucratic delays continue leaving women vulnerable to AI-powered abuse?
DISCLAIMER: This article is for informational purposes only. Views expressed are those of the author and do not necessarily reflect the official position of The Source News Ghana. Report errors: markossourcegroup@gmail.com