AI-generated child abuse imagery discovered on dark web
The Internet Watch Foundation (IWF) has identified criminal imagery of girls aged 11 to 13 allegedly produced using Grok, an AI tool developed by Elon Musk's xAI. The charity found the material on a dark web forum, where users claimed to have generated it using the platform.
Nature of the material
The IWF described the images as "sexualised and topless" depictions of underage girls. Although the images were classified as Category C under UK law, the lowest severity tier, the charity highlighted a concerning escalation: a user later employed a separate AI tool to transform the material into a Category A image, the most severe classification.
"We are extremely concerned about the ease and speed with which people can apparently generate photo-realistic child sexual abuse material (CSAM)," said Ngaire Alexander, an IWF representative.
Platform response and regulatory scrutiny
The BBC has sought comment from both xAI and X, the social media platform where Grok is also accessible. Neither company had responded at the time of publication.
This discovery follows earlier reports to the UK regulator Ofcom, which previously contacted X and xAI over Grok's potential to generate sexualised imagery of children and to digitally undress women without their consent. The IWF confirmed receiving reports of such content on X but noted that none had yet met the legal threshold for CSAM.
Broader concerns over AI misuse
The IWF, which operates a hotline for reporting child sexual abuse material, found the images on the dark web rather than on X itself. However, the charity warned that tools such as Grok risk normalising this content and pushing it towards mainstream visibility.
On X, users have shared examples of prompting Grok to alter real images, placing women in bikinis or sexual scenarios without their consent. While these instances have not been legally classified as CSAM, they underscore growing concerns about AI's role in generating harmful material.
X's stance on illegal content
In a prior statement, X asserted it takes action against illegal content, including CSAM, by removing it, suspending accounts, and collaborating with law enforcement. The company stated that users attempting to generate illegal content via Grok would face the same consequences as those uploading such material directly.
Next steps
The IWF continues to monitor AI-generated abuse material, urging platforms to strengthen safeguards. The charity's analysts remain vigilant, assessing reports to determine legal severity and coordinating with authorities to remove harmful content.