Kendall urges X to curb Grok’s AI sexual deepfakes
Technology Secretary Liz Kendall has urged Elon Musk's social media platform X to address the use of its artificial intelligence chatbot Grok in the creation of non-consensual sexualised images of women and girls.
The call follows concerns that Grok can be used to generate sexualised imagery without consent. The issue adds to scrutiny of how large platforms handle harmful and illegal content linked to generative AI tools.
Alexander Brown, Head of Technology, Media and Telecoms at Simmons & Simmons, said the UK Online Safety Act 2023 covers the sharing of intimate images, including some AI-generated deepfakes.
"Under the UK Online Safety Act 2023, the sharing of intimate images (which includes AI-generated deepfakes that 'appear to show' someone in an intimate state) is a criminal offence. The OSA requires all companies to take robust action against illegal content and activity. Platforms like X are required to implement measures to reduce the risks their services are used for illegal offending. They also need to put in place systems for removing illegal content when it does appear.

"The Act sets out a list of priority offences. These reflect the most serious and prevalent illegal content and activity, against which companies must take proactive measures. The OSA designates the sharing of intimate images without consent (including AI-generated deepfakes that 'appear to show' someone in an intimate state) as a 'priority offence'. This means X must take proactive, proportionate steps to prevent such content from appearing on its platform and to swiftly remove it when detected.

"The OSA is enforced by Ofcom and companies can be fined up to £18 million or 10 percent of their qualifying worldwide revenue, whichever is greater. In serious cases, Ofcom can apply for a court order for 'business disruption measures', such as requiring payment providers or advertisers to withdraw their services from a platform, or requiring Internet Service Providers to block access to a site in the UK.

"Reports of comments by complainants suggest that X may well have a case to answer under the OSA as it has not acted to take down images reported to it by complainants. But Ofcom will also want to see that X has proactively taken steps to prevent the content appearing in the first place."
Legal duties
Brown said the law treats the sharing of intimate images without consent as a criminal offence when the content falls within the scope of the legislation. He said this includes AI-generated deepfakes that "appear to show" someone in an intimate state.
He said the Act places obligations on companies to act against illegal content and activity. Platforms must put measures in place to reduce the risk that their services are used for illegal offending, he added, and companies also need systems for removing illegal content when it appears.
Brown said the Act identifies a set of priority offences, which reflect the most serious and prevalent forms of illegal content and activity. The designation matters, he said, because it drives expectations of proactive measures from platforms.
Ofcom enforcement
Brown said Ofcom enforces the Online Safety Act. He said penalties can reach up to £18 million or 10 percent of qualifying worldwide revenue, whichever is greater.
He said Ofcom can also seek court orders in serious cases for what he described as "business disruption measures". These could require payment providers or advertisers to withdraw their services from a platform, or require internet service providers to block access to a site in the UK.
Complaints focus
Brown pointed to reports that complainants had raised concerns about how X responded after users reported images. He said the reports suggested the platform may have a case to answer under the Online Safety Act if it did not take down content that users had flagged.
He also said Ofcom will look beyond reported content and will want evidence that X took steps to prevent the material from appearing on the platform in the first place.
Kendall's intervention sits alongside a broader policy debate about how generative AI tools affect online harms. Lawmakers and regulators have focused on deepfake imagery and the speed at which it can be created and shared across large networks.
The concerns about Grok also raise questions about how platforms manage AI features embedded within consumer services. Companies have released chatbots and image generation tools at pace in recent years, and regulators have signalled that existing safety and content laws still apply when such tools are used for illegal activity.
Brown said the Online Safety Act sets expectations for proactive and swift action where priority offences apply, and he said Ofcom will examine what steps a platform took before and after harmful material appeared.