Ofcom Investigates Elon Musk’s X Over Grok AI Sexual Deepfakes

Key Highlights

  • Ofcom launches an investigation into Elon Musk’s X over concerns its AI tool Grok is being used to create sexualized images.
  • The UK watchdog states there have been “deeply concerning reports” of the chatbot being used to create and share undressed images, as well as “sexualised images of children.”
  • Ofcom can potentially issue a fine of up to 10% of X’s worldwide revenue or £18 million, whichever is greater, if the platform is found to be in breach of the law.
  • Musk referred to the investigation as an “excuse for censorship” and questioned why other AI platforms were not being looked at.
  • The decision follows global backlash over Grok’s image creation feature, with Malaysia and Indonesia temporarily blocking access to the tool.

Ofcom Investigation into Elon Musk’s X Over Grok AI Sexual Deepfakes

The UK communications regulator Ofcom has launched a formal investigation into Elon Musk’s social media platform X over concerns that its AI tool, Grok, is being used to create and share sexually explicit images. This move comes in the wake of widespread reports suggesting the chatbot is being misused by users to generate undressed or sexualized images of individuals without their consent.

Regulatory Action and Potential Penalties

In a statement, Ofcom emphasized that there have been “deeply concerning reports” of Grok being used to produce and share undressed images as well as “sexualised images of children.” If the platform is found in violation of the law, Ofcom has the authority to impose significant fines of up to 10% of X’s worldwide revenue or £18 million, whichever figure is greater.

Elon Musk’s Response and Backlash from Users

Musk responded to the news by saying he believed the UK government wanted “any excuse for censorship,” replying to a post that questioned why other AI platforms were not being investigated. The backlash has been significant, with several users reporting that sexually explicit images of them had been created without their consent.

One woman reported that over 100 sexualized images had been created using Grok without her permission. She highlighted the severity and distress this issue causes to individuals and called for immediate action from X.

Regulatory Bodies and Public Response

The Technology Secretary, Liz Kendall, welcomed the investigation but urged Ofcom to complete it swiftly. “It is vital that Ofcom completes this investigation as soon as possible because the public—most importantly the victims—will not accept any delay,” she said.

Other political figures have also raised concerns. Northern Ireland politician Cara Hunter said she had decided to leave the platform because of these issues, while Downing Street confirmed that the government would continue to focus on protecting children and would keep its own presence on X under review.

Industry Context and Future Implications

The decision by Ofcom to investigate comes in response to global backlash over the use of AI tools for generating explicit images. In Malaysia and Indonesia, Grok was temporarily blocked as a result of these concerns.

Experts noted that while the investigation is a significant step, it also raises broader questions about the regulation of AI and its potential misuse. Dr Daisy Dixon, a professor of internet law at Essex University, commented on the difficulty of predicting how quickly Ofcom will move forward with its investigation: “Ofcom has a degree of choice in how fast or slow they take the investigation.” She added that, in rare circumstances, Ofcom could apply for a business disruption order to block access to X immediately.

“Women and girls need action and changes on the ground so that Grok does not produce illegal intimate images,” said Clare McGlynn, a law professor at Durham University. She emphasized that the focus should be on preventing such content from being created in the first place rather than just reacting to complaints.

The investigation into X and its AI tool, Grok, highlights the ongoing challenges of regulating emerging technologies in the digital age. As more platforms incorporate advanced AI capabilities, the need for robust guidelines and enforcement mechanisms will only grow.