
Why Grok restrictions won't stop society's latest AI scourge

X has announced new restrictions on AI chatbot Grok

Elon Musk has been forced to impose new limits on the AI chatbot Grok, restricting its ability to generate sexualised and nude images of real people in countries where that content is illegal.

On the surface, it might seem like X is taking appropriate action and owning its mistakes in response to overwhelming global backlash.

It’s maybe even proof that regulatory and political pressure can force change in powerful tech companies.

If only it were that simple.

Since the start of the year, Grok has been under intense scrutiny after users started generating sexualised, non-consensual images of real women and children at extraordinary scale through its image editing feature on X.

Some researchers estimate that around 6,700 images were being produced every hour, openly on X and visible to millions of users.

Users could publicly tag the chatbot beneath photographs of real women and prompt it to alter their bodies in full view of any user of the platform.

These women gave no consent for this, yet fake depictions of them in what many would consider degrading or humiliating scenarios were created through Grok and published openly. Who knows how many times they’ve been shared and saved.

In doing this, Grok brought a form of technology that had largely been confined to corners of the internet or private Telegram channels directly into the mainstream.

Engagement soared on X and Grok, with staff at X saying that early January, when the controversy was at its peak, saw some of their highest-ever days of engagement on the platform.

Yet, perhaps unsurprisingly, people were outraged.

In Ireland, Taoiseach Micheál Martin said the situation was "very grave and very serious", while opposition parties called for a ban on Grok. Internationally, some countries did ban Grok. The UK regulator launched an investigation, and while the European Commission described the content created by Grok as "appalling" and "disgusting", it stopped short of doing the same.

Minister of State for Artificial Intelligence Niamh Smyth met with X on Friday and said she still had concerns, despite new restrictions.

X reiterated to the Minister that it had disabled Grok’s image-editing function on X globally, preventing the "editing of images of real people in revealing clothing such as bikinis".

The company announced this in a statement earlier this week and said the restriction applies to all users, including paid subscribers. The ability to edit images at all through the Grok account on X would be limited to paid subscribers, it added.

It also said it would "geoblock" the ability of all users to "generate images of real people in bikinis, underwear, and similar attire" in jurisdictions where such content is illegal.

This applies to Grok on X, and similar geoblocking measures for the Grok app are being implemented, the company said.

Minister O'Donovan said deepfake sexualised images of a real person "could be" a crime

But in the first hours of the new restrictions, some European users were still able to use the function in certain circumstances, and governments began scrambling to establish whether their laws actually make it illegal to generate such images.

'Generate' is the key word in that sentence.

It is clear that, under Coco’s Law, it is illegal in Ireland to share deepfake sexual images of an adult. It is absolutely illegal to share or generate any child sexual abuse material (CSAM) under the Child Trafficking and Pornography Act 1998.

On Wednesday, gardaí confirmed they are dealing with around 200 active investigations linked to child sexual abuse material generated using Grok and similar apps.

What is less clear is whether generating a deepfake sexual image of an adult is illegal.

Minister for Media Patrick O’Donovan told RTÉ on Friday that generating deepfake sexualised images of a real person "could be" a crime, but needed "further clarification" under the current laws.

Minister Smyth met with the Attorney General on Thursday to seek clarity and reassurance that Irish law is robust enough to deal with the misuse of AI tools like Grok.

Prior to that meeting, she said it was her view that existing legislation prohibits the behaviour enabled by Grok.

Other legal experts have said that although the AI-generated content itself may not break any laws, if it is used in ways that give rise to harassment or other criminal offences, then the law is being broken.

The question then will be: who will be prosecuted - the users who generate the images, or Grok's owners?

Getting around restrictions

Hours after X announced it was stopping users from editing real images into sexualised or nude images, researchers and users across Europe were reporting that such edits were still possible through other means.

This seems, in part, to be down to how and where Grok operates as a service.

People can use Grok within the X app, by tagging it in tweets and asking it to perform some function - such as editing an image - or they can download Grok as a standalone app.

As of Friday, the Grok app is the most downloaded free app on the Apple App Store in Ireland, and third on Google Play.

Users have been reporting that the 'edit image' function has still been working on the Grok standalone app, while it is restricted for users tagging Grok within X.

After her meeting with X, Ms Smyth said: "Concerns remain regarding Grok as a standalone app, and this is something Government will examine further."

Minister of State for Artificial Intelligence Niamh Smyth met with X on Friday

Crossing the Rubicon

On Reddit, some users are lamenting new restrictions with comments like "have a private digital museum of ancient artifacts from the golden era of Grok... It was a good run".

The reality is that while the restrictions - as they are - may be in place for users like that, the Rubicon has been crossed for society as a whole.

"Something is coming that is going to blow Grok Imagine out the water. It is inevitable and only a matter of time," wrote another user, and they are most likely correct.

Yes, there’s a bit more friction now for Grok users if they wish to create the type of imagery which has outraged many globally, but the technical ability to create non-consensual sexualised content and CSAM exists - Grok has simply brought it to the mainstream.

Once a system capable of producing such material exists, and millions of users know how to prompt it, it cannot simply be rolled back out of public consciousness.

The disgusting truth is this isn’t going away; it will adapt, migrate, and reappear elsewhere.

X and Grok's parent company xAI are owned by Elon Musk

What is the technology behind 'nudification' apps?

Mainstream AI image generators, including those developed by OpenAI, xAI, Meta and Google, are trained on vast volumes of images and words, and learn patterns that associate those words with images.

The systems are further trained by gradually adding visual ‘noise’ to real images until they become unrecognisable, and then learning to reverse that process step by step.

Over time, the systems become highly effective at reconstructing images based on the patterns learned. This allows for the wholesale generation of anything and everything – not least faces, bodies, clothing, lighting and texture.
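For readers curious about the mechanics, the sketch below shows that noise-and-denoise training loop in miniature, using PyTorch. It is a toy illustration of the general (DDPM-style) technique, not any real production system: the tiny network, noise schedule and random 'images' are all stand-ins.

```python
# Toy sketch of diffusion-style training: corrupt an image with noise,
# then train a network to predict that noise so the process can be
# reversed step by step. Everything here is a stand-in for illustration.
import torch
import torch.nn as nn

T = 1000                                    # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)       # noise schedule
alphas_cumprod = torch.cumprod(1 - betas, dim=0)

def add_noise(x0, t, noise):
    """Jump a clean image x0 straight to noise level t."""
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1 - a).sqrt() * noise

# Stand-in denoiser; real systems use a large U-Net or transformer.
model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 3, 3, padding=1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

x0 = torch.rand(8, 3, 32, 32)               # a batch of "training images"
t = torch.randint(0, T, (8,))               # a random timestep per image
noise = torch.randn_like(x0)

opt.zero_grad()
pred = model(add_noise(x0, t, noise))       # model tries to predict the noise
loss = nn.functional.mse_loss(pred, noise)
loss.backward()
opt.step()                                  # repeated over millions of images
```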

Grok, like other AI tools, offers that image-creation function, referred to as an ‘Imagine...’ function. You type in something you imagine (e.g. ‘a car and a clock’) and Grok generates an image of it for you.

You can do this with just words, and it will draw on the patterns learned from its training images to make something based on what it associates with those words.
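Grok's own code is not public, but open-source tools expose the same words-to-image step. As a hedged example, the snippet below uses the Hugging Face diffusers library with an illustrative model ID; the prompt mirrors the example above.

```python
# Illustrative text-to-image generation with the open-source diffusers
# library. The model ID is an example; Grok's internals are not public.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",        # example open-source model
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a car and a clock").images[0]  # words in, image out
image.save("imagined.png")
```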

Image of car uploaded to AI image generator

However, you can also upload an image into these systems and ask them to use that image as the basis for something else you imagine.

In the example below, we uploaded the real image of the car seen above and asked an AI image generation system to use it as the basis for imagining 'a car and a clock'.

What you quickly get back is based on the original image, with the system adding further details based on the words. Use a person instead of a car, and sexually descriptive phrases rather than ‘and a clock’, and you’re into the process of AI 'nudification'.

AI-generated image of a clock and car
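The same open-source library exposes that image-plus-words step as an 'image-to-image' pipeline. The sketch below is again illustrative rather than a description of Grok; the 'strength' parameter controls how far the output may drift from the uploaded photo.

```python
# Illustrative image-to-image generation: an uploaded photo anchors the
# output, and the text prompt steers what gets added or changed.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",        # example open-source model
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("car.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a car and a clock",
    image=init_image,        # the uploaded photo the output is based on
    strength=0.6,            # 0 = keep the photo, 1 = ignore it entirely
    guidance_scale=7.5,      # how strongly the prompt steers the result
).images[0]
result.save("car_and_clock.png")
```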

Grok can do the process above; so can many other tools. They typically offer it with guardrails – if you use certain terminology or phrases, the request will be rejected.

What sets Grok apart is a setting offered within its Imagine function - and it is a setting, not a bug or an issue - which it calls ‘Spicy Mode'.

Enabling Spicy Mode tells Grok to ignore the normal guardrails that other mainstream AI systems place on the content they create for users.

With the mode enabled, the phrases and terminology that would trigger a rejection in normal circumstances are allowed through, and the request is fulfilled and the image produced.

Upload an image, enable ‘Spicy Mode’, tell Grok to imagine the image but with the person in it in a bikini, or posed in a certain way, and click... there it is.
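How Grok implements this is not public, but in principle a guardrail of this kind can be as simple as a text filter sitting in front of the image model - with a mode toggle that skips it. The schematic below is an assumption-laden sketch of that general shape, not Grok's actual code; the blocked terms are placeholders.

```python
# Schematic of an application-layer guardrail with a bypass toggle.
# Purely illustrative - NOT Grok's actual implementation. The filter
# lives in the app, not in the underlying image model itself.
BLOCKED_TERMS = {"placeholder_term_a", "placeholder_term_b"}  # hypothetical

def allow_request(prompt: str, spicy_mode: bool = False) -> bool:
    """Return True if the image request should be fulfilled."""
    if spicy_mode:
        return True                      # guardrail skipped entirely
    text = prompt.lower()
    return not any(term in text for term in BLOCKED_TERMS)

# Normal mode rejects a flagged prompt; with spicy_mode=True the very
# same prompt sails through to the image generator.
print(allow_request("placeholder_term_a in a photo"))                   # False
print(allow_request("placeholder_term_a in a photo", spicy_mode=True))  # True
```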

That’s disconcerting enough, but here's the thing: the core of that technology is no longer only in the hands of heavily funded, billionaire-backed Silicon Valley companies.

Dan Purcell of Ceartas, an Irish business that provides a service to individuals seeking the detection and removal of deepfake images from the internet, says the bigger issue now is becoming the abuse of open-sourced AI training models.

These are shared models and training datasets that are inexpensive and widely accessible online. Smaller start-ups and one-man-band tech types can use them as the foundation on which to build their own chatbots and image-generation tools.

AI chatbot Grok web interface

Crucially, it is only within those chatbots and tools (referred to collectively as ‘large language models’) that the guardrails are imposed.

"What concerns me is how are they training their data within LLMs, so their large language models?" Mr Pucell said. "AI, in the instances that's been used, has no guardrails and it doesn't understand context. It's not going to know if someone's over the age of consent or under the age of consent. And then you're getting into territory where you're then creating CSAM."

When big tech companies make these products, there’s an onus and a social expectation on them to put in guardrails, even if Grok chose to ignore them. Governments have the option to fine such companies or shut them down, even if it’s not enforced.

With the apps built through open-sourced models by backroom developers, once it's out, it's out.

This is a concern echoed by the Internet Watch Foundation (IWF), a group in the UK that monitors online CSAM.

In some cases, they say, AI systems are being trained on imagery of known child abuse victims and those images are being used to generate new images of those same victims, sometimes years after the abuse.

"Their imagery is now being not just shared, but put into new models to create more sexual imagery of their abuse ...they're being re-abused with creating actions and scenes that they never took part in," Chief Technology Officer Dan Sexton told RTÉ.

"If that tool has been trained on child sexual abuse, it’s capable of child sexual abuse, it's out and it's being shared on the internet," Mr Sexton said.

"Those are the tools that are being downloaded turned into applications, turned into websites, and programmed to create this new content."

The outrage at what Grok has permitted users to do is justified. What it has done is brought a function from a dark corner of the online world into the mainstream.

The proliferation of the underpinning technology means it may have simply accelerated a process that was inevitably going to happen. Yet, that doesn’t make it right. It certainly won’t make it any easier for regulators or legislators to handle.

Those legislators and regulators now face a new set of questions.

Among them: Should the prosecution be against the user who prompts the system to create the illegal material, or the provider of the system itself?

Should the rules they put in place remain focused on prosecuting content after it has been created?

And, is there any way to control the proliferation of sexualised deepfake imagery in the era of AI?