
Minister requests meeting with X over 'disturbing' Grok concerns

The minister has written directly to X over concerns its AI tool Grok is being used to create sexually explicit images of women and children

Minister of State for Artificial Intelligence Niamh Smyth has requested a meeting with X over concerns that the artificial intelligence tool Grok is being used to create sexually explicit images of adults and children.

Ms Smyth said that she has written directly to X and wants to discuss what steps it is taking to address the "disturbing" reports around Elon Musk's xAI chatbot, which has been incorporated into the platform.

Media regulator Coimisiún na Meán said that it was engaging with the European Commission over concerns that Grok is responding to user prompts asking it to remove the clothing from images of people, including minors, with the results then being posted on X.

Ms Smyth said that she has also requested updates from Coimisiún na Meán and the Office of the Attorney General, adding that the "serious offence" should be tackled from both a legal and a regulatory perspective.

"The sharing of non-consensual intimate images is illegal, and the generation of child sexual abuse material is illegal.

"Under Ireland's Online Safety Framework, there is a clear obligation on online platforms to act on reports of illegal content."

Ms Smyth urged people who are concerned about images being shared online to report them to gardaí, hotline.ie, the online platform where they encountered them, and Coimisiún na Meán.

X said it takes action against illegal content on its platform, including child sexual abuse material, by removing it, permanently suspending accounts, and working with governments and law enforcement agencies.

"Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," the company said in a post on X.

Yesterday, Taoiseach Micheál Martin described the reports as "unacceptable" and "shocking".

"It's a matter we'll be continuing to raise with Coimisiún na Meán and the (European) Commission," Mr Martin.

Chair of AI committee calls for bill to be fast-tracked

The Chair of the Oireachtas Committee on Artificial Intelligence urged the Government to fast-track a bill to tackle the problem of AI deepfakes and 'nudification'.

Fianna Fáil TD Malcolm Byrne said the Protection of Voice and Image Bill 2025 provides a "practical starting point for urgently needed legislation".

"This bill anticipated the problems around the weaponisation for AI for sexual purposes that have become apparent this week.

"The misuse of someone’s image or voice without their consent for malign purposes should be a criminal offence," he said.

Fianna Fáil TD Malcolm Byrne, speaking in the Dáil, said the Bill provides a 'practical starting point for urgently needed legislation' (file image)

Earlier, Labour Senator Laura Harmon said that AI platforms enabling abuse must face consequences.

"What we are seeing with the misuse of AI tools underlines the reality that technology is evolving faster than enforcement.

"It is not acceptable for companies to profit from powerful AI systems while turning a blind eye to how those tools are abused or weaponised."

Irish Internet Hotline supports ban on 'nudify' apps

The Irish Internet Hotline - hotline.ie - is the designated body for reporting child sexual abuse imagery.

The organisation said it supports a total ban on 'nudify' apps and other forms of AI-based functions that can produce deepfake sexual images of children and adults.

"Our longstanding position remains that there is no legitimate purpose for such technology," the hotline said in a statement.

Human rights expert says 99% of deepfakes are of women

Ireland's Special Rapporteur on Child Protection, Caoilfhionn Gallagher, said that 99% of sexually explicit deepfakes accessible online are estimated to be of women and girls.

"This is also a gender-based violence issue," she told RTÉ's Morning Ireland.

She said research shows that the harms from deepfake sexual abuse for the individuals depicted are equivalent to those from authentic images.

"Because, for victims, the videos feel real and given how realistic they are, victims know that they might be perceived as real by others.

"What we are dealing with here is Grok digitally undressing people without their consent, including generating images of 12-year-olds in bikinis for example, and producing childlike images which are nude or sexually explicit."


She added that generating these images is often part of a pattern of abuse or harassment.

Ms Gallagher said that the broader societal issues also had to be considered.

"It’s not only Grok but also more generally other nudification apps we know that they are trained on vast data sets on mostly female images because they tend to work most effectively on women's bodies."

She said there was concern internationally about whether the protections in place are sufficient, as most legal and policy measures focus on the users who generate the images, rather than on the platforms and products themselves.

However, work was being undertaken this week by the Attorney General's office to review the existing framework, she said.

Women's Aid said that it will "no longer maintain" a presence on X from tomorrow.

The charity said the "creation and sharing of AI deepfakes, non-consensual intimate imagery, and production of child sexual abuse material" was the "tipping point".

"This online violence against women and children - especially girls - has often devastating real-life impacts and we no longer view it as appropriate to use such a platform to share our work," it said.

Mental health charity Turn2Me called on Coimisiún na Meán and the European Commission to block the Grok AI tool.

The charity's Chief Executive Fiona O'Malley said the "ongoing proliferation of non-consensual and exploitative AI-generated content" is "extremely harmful" to people's mental health.

"The psychological impact on victims of having their likenesses altered and disseminated as explicit material, particularly when children are involved, cannot be overstated," she said.

"This kind of abuse can exacerbate trauma, anxiety, depression, and distress in vulnerable people seeking safety online."


Teacher and researcher Eoghan Cleary has also called for a ban on AI 'nudification' apps.


UK government looking at 'all options' over Grok deepfakes

The UK government could stop using the social media platform X in protest at Grok being used to create sexualised deepfake images.

Downing Street said that "all options were on the table", including a boycott of X, as ministers backed media regulator Ofcom to take action.

Prime Minister Keir Starmer's spokesperson said: "What we've seen on Grok is a disgrace. It is completely unacceptable.

"No-one should have to go through the ordeal of seeing intimate deepfakes of themselves online and we won't allow the proliferation of these demeaning images.

"X needs to deal with this urgently and Ofcom has our full backing to take enforcement action wherever firms are failing to protect UK users.

"It already has the power to issue fines of up to billions of pounds and even stop access to a site that is violating the law.

"When it comes to keeping people safe online, all options remain on the table."

Asked if the government would stop using the app, the spokesperson said: "All options are on the table."

Additional reporting: PA