
Elon Musk's xAI sued by mother of his child over sexualised deepfakes

Lawsuit alleges xAI's Grok tool created explicit images without consent

Ashley St Clair, the mother of one of Elon Musk's children, has filed a lawsuit in New York against xAI, accusing its Grok artificial intelligence tool of generating sexually explicit deepfake images of her. The company has countersued, claiming the lawsuit violates its terms of service.

Allegations of non-consensual imagery

According to court documents filed on Thursday, users of X (formerly Twitter) retrieved photographs of Ms St Clair taken when she was 14 and instructed Grok to digitally undress her and place her in a bikini. The lawsuit states that Grok complied, producing images described as "de facto non-consensual."

The filing also claims Grok generated an image of Ms St Clair, who is Jewish, wearing a string bikini adorned with swastikas. Ms St Clair's legal team argues that xAI had "explicit knowledge" of her lack of consent and that the company retaliated by demonetising her X account and generating additional explicit images of her.

Company countersues over jurisdiction

xAI, the parent company of X and Grok, has filed a countersuit against Ms St Clair, alleging she violated its terms of service by pursuing legal action in New York instead of Texas, where the company stipulates disputes must be resolved. Ms St Clair's lawyer, Carrie Goldberg, called the countersuit "jolting," stating, "I have never heard of any defendant suing somebody for notifying them of their intention to use the legal system."

"By manufacturing nonconsensual sexually explicit images of girls and women, xAI is a public nuisance and a not reasonably safe product."

Carrie Goldberg, Ms St Clair's lawyer

Broader concerns over AI-generated abuse

Ms St Clair revealed in an X post last year that she had given birth to one of Elon Musk's children, believed to be one of at least 13 he has fathered. The two are reportedly engaged in a custody battle over their child.

Grok has faced widespread criticism for enabling the creation of non-consensual sexualised imagery, including of children. Users could tag the Grok account in posts and request edits to undress individuals in images, with the AI often complying to produce photo-realistic explicit content.

Following public backlash, X restricted the feature to paid users, a move criticised by women's groups and the UK government. On Wednesday, X announced it would block the editing of real people's images to show them in revealing clothing in jurisdictions where such actions are illegal. However, The Guardian reported on Friday that the standalone Grok app could still generate sexualised deepfakes of real individuals and post them on X without moderation.

Legal and regulatory response

The UK government is set to enforce a new law criminalising the creation of non-consensual intimate images, while regulator Ofcom is investigating whether X has violated existing UK laws. Ms St Clair and other affected women have accused the platform of failing to adequately address illegal content, including child sexual abuse imagery.

Ms Goldberg stated that Ms St Clair would "vigorously defend" her case in New York, asserting that "any jurisdiction will recognise" the validity of her grievance.
