Elon Musk's AI video generator, Grok Imagine, has come under fire for producing sexually explicit deepfake videos of Taylor Swift without prompting. Experts underscore a deliberate design choice behind the misogynistic results, raising alarms about the lack of age verification and the regulatory environment surrounding AI-generated content.
Outrage Over Explicit Deepfake Videos of Taylor Swift Generated by Musk's AI

Concerns rise as Elon Musk's AI, Grok Imagine, produces unauthorized explicit clips of Taylor Swift, highlighting risks of generative technology.
Elon Musk's AI video creator, Grok Imagine, has stirred widespread outrage after being accused of generating sexually explicit deepfake videos of pop star Taylor Swift without any user input. An expert on online abuse, Clare McGlynn, criticized the platform's actions, stating, "This is not misogyny by accident; it is by design." McGlynn has been instrumental in drafting laws aimed at combating pornographic deepfakes—a concern that has gained urgency following these recent events.
A report from The Verge details how Grok's new "spicy" mode produced explicit topless videos of Swift in response to non-explicit prompts. Alarmingly, the AI generated this adult content without the age verification measures mandated by new UK laws that took effect last July. xAI, the company behind Grok, has not commented on the allegations.
According to Professor McGlynn of Durham University, the failure to prevent this content from being created illustrates a significant bias built into the technology. In her view, xAI could have put safeguards in place but appears to have opted not to do so.
This isn't the first time Taylor Swift has been the target of sexually explicit deepfakes; similar images circulated widely in early 2024, prompting backlash across social media platforms like X and Telegram. Deepfakes, which utilize AI to superimpose one person's likeness onto another's body, are increasingly difficult to regulate.
In the test conducted by The Verge journalist Jess Weatherbed, Grok Imagine generated images that escalated to fully explicit videos upon selecting the "spicy" option, even when the initial request was innocent. "It was shocking how quickly I encountered such material," noted Weatherbed.
The BBC attempted to verify the claims but could not independently confirm them. Weatherbed registered for Grok Imagine's paid service at a cost of £30, with little more than a date of birth required at sign-up, raising further concern about the lack of age verification.
Under the recently passed UK law, platforms are now obligated to implement robust age verification to ensure the safety of their users. Media regulator Ofcom has indicated a growing concern surrounding the risks posed by generative AI, particularly to children. They encourage platforms to enhance safeguards to address these risks effectively.
Current legislation prohibits the production of pornographic deepfakes, particularly in cases involving revenge porn or depictions of minors. However, Professor McGlynn has advocated for an amendment that would criminalize all forms of non-consensual pornographic deepfakes. Although the UK government promised to enact this legislation swiftly, it has yet to be implemented.
Baroness Owen, who proposed the amendment in the House of Lords, emphasized that every woman should have autonomy over her intimate images, urging the government to act promptly to enforce these protections. A Ministry of Justice representative condemned the creation of such explicit deepfakes as harmful and degrading. Following the earlier incidents, X temporarily blocked searches for Swift's name on its platform.
The Verge team selected Swift as a test subject for the Grok Imagine feature because of the severity of past incidents, reasoning that those episodes should have prompted safeguards to protect celebrity likenesses. Swift's representatives have yet to comment as the debate over generative AI and the ethical implications of its use continues to unfold.