Elon Musk's AI Faces Controversy Over Explicit Taylor Swift Videos

AI Video Generator Accused of Creating Explicit Content Without Prompt
An expert in online abuse has accused Elon Musk's AI video generator, Grok Imagine, of making a "deliberate choice" to produce sexually explicit clips featuring Taylor Swift without any prompting. Clare McGlynn, a law professor who contributed to legislation aimed at making pornographic deepfakes illegal, said this behavior is not an accident but a deliberate design decision.
According to a report by The Verge, Grok Imagine's new "spicy" mode was able to generate fully uncensored topless videos of the pop star without being asked for explicit content. The report also highlighted that proper age verification methods, which became a legal requirement in July, were not in place. xAI, the company behind Grok, has not yet responded to requests for comment.
xAI's acceptable use policy explicitly prohibits "depicting likenesses of persons in a pornographic manner." However, the fact that such content was generated without prompting raises concerns about the misogynistic bias embedded in many AI technologies, according to Prof. McGlynn of Durham University.
"This demonstrates that platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to," she added.
Taylor Swift's image has been used in similar ways before. In January 2024, sexually explicit deepfakes using her face went viral on X and Telegram, with millions of views. Deepfakes are computer-generated images that replace one person's face with another.
Testing the AI's Guardrails
In testing the guardrails of Grok Imagine, The Verge news writer Jess Weatherbed entered the prompt: "Taylor Swift celebrating Coachella with the boys." The AI generated still images of Swift wearing a dress with a group of men behind her. This could then be animated into short video clips under four different settings: "normal," "fun," "custom," or "spicy."
Weatherbed described the result as shocking. "She ripped [the dress] off immediately, had nothing but a tasselled thong underneath, and started dancing, completely uncensored, completely exposed," she told BBC News. She emphasized that she did not ask the AI to remove Swift's clothing; all she did was select the "spicy" option.
Gizmodo reported similarly explicit results when testing famous women, though some searches returned blurred videos or messages indicating that the content was moderated. The BBC has not independently verified these results.
Weatherbed signed up for the paid version of Grok Imagine, costing £30, using a brand new Apple account. While the platform asked for her date of birth, there was no other form of age verification in place, she said.
New UK Laws and Regulatory Response
Under new UK laws that came into effect at the end of July, platforms showing explicit images must verify users' ages using methods that are "technically accurate, robust, reliable and fair." Sites and apps that include generative AI tools capable of producing pornographic material are regulated under the legislation.
Ofcom, the media regulator, stated that it is aware of the growing risks posed by GenAI tools, especially to children, and is working to ensure platforms implement appropriate safeguards. "We are working to ensure platforms put appropriate safeguards in place to mitigate these risks," the regulator said in a statement.
Currently, generating pornographic deepfakes is illegal only when the content is used as revenge porn or depicts children. Prof. McGlynn helped draft an amendment that would make generating or requesting any non-consensual pornographic deepfake illegal. Although the government has committed to making this amendment law, it has not yet come into force.
Baroness Owen, who proposed the amendment in the House of Lords, stated, "Every woman should have the right to choose who owns intimate images of her." She emphasized the importance of ensuring these models are not used in ways that violate a woman's right to consent, whether she is a celebrity or not.
A Ministry of Justice spokesperson said, "Sexually explicit deepfakes created without consent are degrading and harmful. We refuse to tolerate the violence against women and girls that stains our society, which is why we have passed legislation to ban their creation as quickly as possible."
When pornographic deepfakes using Taylor Swift's face went viral in 2024, X temporarily blocked searches for her name on the platform. At the time, X claimed it was actively removing the images and taking appropriate actions against the accounts involved in spreading them.
Weatherbed explained that The Verge chose Taylor Swift for testing because of this incident. "We assumed—wrongly now—that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, that she would be first on the list, given the issues that they've had," she said.
Taylor Swift's representatives have been contacted for comment.