X restricts Grok AI from editing real people into revealing clothing


Elon Musk's social platform X has blocked its Grok artificial-intelligence tool from altering photographs of real individuals to depict them in revealing attire in regions where such edits are illegal. The move follows a surge of criticism over sexually explicit deepfakes generated by the model.

Policy update details

In a statement posted on X, the company confirmed that it has "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing." The restriction applies to jurisdictions where such edits violate local laws, including the creation of images featuring swimwear, underwear, or similar attire.

X also reiterated that only subscribers with paid accounts can use Grok to modify images, adding that this layer of control will help hold users accountable if they attempt to breach platform rules or legal standards.

Government and regulator reactions

The UK government described the change as "vindication" after Prime Minister Keir Starmer had publicly urged X to rein in its AI tool. A spokesperson for the communications regulator Ofcom called the update a "welcome development" but stressed that its investigation into potential violations of UK law remains active.

"We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it."

Ofcom spokesperson

Technology Secretary Liz Kendall endorsed the decision while emphasizing that Ofcom's probe must thoroughly establish the facts. California's top prosecutor revealed hours earlier that the state is examining the spread of sexualized AI deepfakes, including those targeting minors, produced by Grok.

Implementation challenges and enforcement

It remains unclear how X will enforce location-based restrictions on Grok's image-editing capabilities or whether users could bypass them using virtual private networks (VPNs). VPN usage surged in the UK last year when adult websites began requiring age verification under the Online Safety Act.

Policy researcher Riana Pfefferkorn questioned how the AI model will distinguish real individuals from fictional characters and what penalties will apply to rule-breakers. She noted that the safeguards arrived late, long after the initial wave of abuse, and suggested that Musk's own public conduct, including reposting an AI-generated image of Starmer in a bikini, undermined the company's credibility.

Advocacy groups respond

Campaigners and legal experts welcomed the policy shift but warned that the harm to victims is already widespread. Clare McGlynn, a law professor at Durham University, said the change came "too late for the thousands of women who have been victimized and whose images remain online."

"This shows how victims of abuse, campaigners, and government pressure can force tech platforms to act. But it can't stop here-given the evolving nature of AI-generated harms, platforms must be required to take proactive preventative action."

Andrea Simon, Director, End Violence Against Women Coalition

Ofcom could seek a court order to block X entirely in the UK if the platform fails to comply with local laws. Starmer had previously threatened to strip X of its self-regulatory privileges and strengthen legislation if the company did not address the issue.

Content standards and global variations

Musk clarified on Wednesday that Grok's "not safe for work" settings permit "upper body nudity of imaginary adult humans," aligning with content standards for R-rated films in the United States. He added that the rules would adapt to local laws in other countries.
