Social Media: Non-consensual Sexual Deepfakes Debate

Department: Department for Business and Trade

Wednesday 14th January 2026


Lords Chamber
Viscount Camrose (Con)

My Lords, the technological capabilities, and their misuse, that have prompted this Statement are, needless to say, deeply disturbing and demand our careful attention. The use of AI to generate non-consensual sexual imagery of women and children is both grotesque in itself and corrosive of trust in technology more broadly.

We therefore welcome the Secretary of State’s confirmation that new offences criminalising the creation or solicitation of such material will be brought into force this week. We support the enforcement of these laws. We also welcome Ofcom’s decision to open a formal investigation into the use of Grok on X under the Online Safety Act, an investigation that must proceed swiftly to protect victims and hold platforms to account.

Hard though it is to predict the misuses of emerging technologies, we must collectively find better ways to be ready for them before they strike. I fear there is a pervasive and damaging sense of regulatory, legislative and political uncertainty around AI. As long as that remains the case, we risk remaining at the mercy of events beyond our control.

From the outset of this Parliament, and indeed in opposition, the Government have pledged to legislate on AI. Reviews and policy documents, including the Clifford AI Opportunities Action Plan, promised a framework to drive adoption and regulatory clarity. However, we still have no clear timeline, nor even a clear account of the Government’s policy on AI.

It is worth noting that the legislative tools the Government are now relying on to implement their proposed new offences, such as the creation and solicitation of non-consensual intimate images, are the product of amendments introduced by this House to the Data (Use and Access) Act. Ministers have repeatedly argued both that binding AI regulation must come, and that the existing multi-regulator framework is sufficient.

Evidence to the House of Commons Science, Innovation and Technology Committee late last year confirmed that the Secretary of State would not commit to a specific AI Bill, instead speaking of considering targeted interventions rather than an overarching legislative framework. This may indeed be the right approach, but its unclear presentation and communication drive uncertainty that undermines the confidence of investors, businesses and regulators, but above all of citizens.

Progress on other AI-related policy commitments seems to have stalled too. I do not underestimate the difficulty of the problem, but work thus far on AI and copyright has been pretty disappointing. I am not seeking to go into that debate now, but only to make the point that it contributes to a widespread sense of uncertainty about tech in general and AI in particular.

Frankly, this uncertainty has been compounded by inconsistent political messaging. Over the weekend, reports emerged that the Government were considering banning X altogether before subsequently softening that position, creating wholly unnecessary confusion. At the same time, the Government have mischaracterised X’s decision to move its nudification tools behind a paywall as a means to boost profits, when the platform argues, reasonably persuasively, that this is a measure to ensure that those misusing the tools cannot do so anonymously.

Nor has there been much effective communication from the Government about their regulatory intentions for AI. This leaves the public and businesses unclear on how AI will be regulated and what standards companies are expected to meet. Political and legislative uncertainty in this case is having real consequences. It weakens our ability to deter misuse of AI technologies; it undermines public confidence; and it leaves regulators and enforcement agencies in a reactive posture rather than being empowered to act with a clear statutory direction.

We of course support efforts to criminalise harmful uses of AI. However, under the Government’s current Sentencing Bill, most individuals convicted of these new AI-related offences against women and girls will be liable for only suspended sentences, meaning that they could leave court free to continue using the technology that enabled their crime. This is concerning. It cannot be right that someone found guilty of producing non-consensual sexual imagery may walk free, unrestrained and with unimpeded access to the tools that facilitated their offending.

As I say, we support Ofcom’s work and the use of existing powers, but law without enforcement backed by a coherent, predictable regulatory regime will offer little real protection. Without proper sentencing, regulatory certainty and clear legislative direction for AI, these laws will not provide the protection that we need.

We urge the Government to publish a clear statement on their intentions on comprehensive AI regulation, perhaps building on the AI White Paper that we produced in government, to provide clarity for both tech companies and the public, and to underpin the safe adoption of AI across the economy and society. We must assume that new ways to abuse AI are being developed as we speak. Either we have principled, strategic approaches to deal with them, or we end up lurching from one crisis to the next.

Lord Clement-Jones (LD)

My Lords, we on the Liberal Democrat Benches welcome the Secretary of State’s Statement, as well as her commitment to bring the new offence of creating or requesting non-consensual intimate images into force and to make it a priority offence. However, why has it taken this specific crisis with Grok and X to spur such urgency? The Government have had the power for months to commence this offence, so why have they waited until women and children were victimised on an industrial scale?

My Commons colleagues have called for the National Crime Agency to launch an urgent criminal investigation into X for facilitating the creation and distribution of this vile and abusive deepfake imagery. The Secretary of State is right to call X’s decision to put the creation of these images behind a paywall insulting; indeed, it is the monetisation of abuse. We welcome Ofcom’s formal investigation into sexualised imagery generated by Grok and shared on X. However, will the Minister confirm that individuals creating and sharing this content will also face criminal investigation by the police? Does the Minister not find it strange that the Prime Minister needs to be reassured that X, which is used by many parliamentarians and government departments, will comply with UK law?

While we welcome the move to criminalise nudification apps in the Crime and Policing Bill, we are still waiting for the substantive AI Bill promised in the manifesto. The Grok incident proves that voluntary agreements are not enough. I had to take a slightly deep breath when I listened to what the noble Viscount, Lord Camrose, had to say. Who knew that the Conservative Party was in favour of AI regulation? Will the Government commit to a comprehensive, risk-based regulatory framework, with mandatory safety testing, for high-risk models before they are released to the public, of the kind that we have been calling for on these Benches for some time? We need risk-proportionate, mandatory standards, not voluntary commitments that can be abandoned overnight.

Will the Government mandate the adoption of hashing technology that would make the removal of non-consensual images possible, as proposed by the noble Baroness, Lady Owen of Alderley Edge, in Committee on the Crime and Policing Bill—I am pleased to see that the noble Lord, Lord Hanson, is in his place—and as advocated by StopNCII.org?

The Secretary of State mentioned her commitment to the safety of children, yet she has previously resisted our calls to raise the digital age of consent to 16, in line with European standards. If the Government truly want to stop companies profiteering from children’s attention and data, why will they not adopt this evidence-based intervention?

To be absolutely clear, the creation and distribution of non-consensual intimate images has nothing whatever to do with free speech. These are serious criminal offences. There is no free speech right to sexually abuse women and children, whether offline or online. Any attempt to frame this as an issue of freedom of expression is a cynical distortion designed to shield platforms from their legal responsibilities.

Does the Minister have full confidence that Ofcom has the resources and resolve to take on these global tech giants, especially now that it is beginning to ramp up the use of its investigation and enforcement powers? Will the Government ensure that Ofcom uses the full range of enforcement powers available to it? If X continues to refuse compliance, will Ofcom deploy the business disruption measures under Part 7, Chapter 6 of the Online Safety Act? Will it seek service restriction orders under Sections 144 and 145 to require payment service providers and advertisers to withdraw their services from the non-compliant platform? The public expect swift and decisive action, not a drawn-out investigation while the abuse continues. Ofcom must use every tool Parliament has given it.

Finally, if the Government believe that X is a platform facilitating illegal content at scale, why do they continue to prioritise it for official communications? Is it not time for the Government to lead by example and reduce their dependence on a platform that seems ideologically opposed to the values of decency and even perhaps the UK rule of law, especially now that we know that the Government have withdrawn their claim that 10.8 million families use X as their main news source?

AI technologies are developing at an exponential rate. Clarity on regulation is needed urgently by developers, adopters and, most importantly, the women and children who deserve protection. The tech sector can be a force for enormous good, but only when it operates within comprehensive, risk-proportionate regulatory frameworks that put safety first. We on these Benches will support robust action to ensure that that happens.

The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology (Baroness Lloyd of Effra) (Lab)

I thank both noble Lords for their contributions to the debate. We all agree that the circulation of these vile, non-consensual deepfakes has been shocking. Sexually manipulating images of women and children is despicable and abhorrent. The law is clear: sharing or threatening to share a deepfake intimate image without consent, including images of people in their underwear, is a criminal offence. To the noble Lord’s point, individuals who share non-consensual sexual deepfakes should expect to face the full extent of the law. In addition, under the Online Safety Act, services have duties to prevent and swiftly remove the content. If someone has had non-consensual intimate images of themselves created or shared, they should report it to the police, as these are serious criminal offences.

I turn to some of the points that have been raised so far. The Government have been very clear on their approach in terms of both the AI action plan and the legislation that we have brought forward. We have introduced a range of new AI-related measures in this Session to tackle illegal activity; we have introduced a new criminal offence to make it illegal to create or alter an AI model to create CSAM; we are banning nudification apps; and we are introducing a new legal defence to make it possible for selected experts to safely and securely test models for CSAM and non-consensual intimate images and extreme pornography vulnerabilities.

AI is a general-purpose technology with a wide range of applications, which is why we think that the vast majority of AI systems should be regulated at the point of use. In response to the AI action plan, the Government are committed to working with regulators to boost their capabilities. We will legislate where needed and where we see evidence of the gaps. Our track record so far has shown that that is what we do, but we will not speculate, as ever, on legislation ahead of future parliamentary Sessions.

I come to the question of Ofcom enforcement action. On Ofcom’s investigation process, the Secretary of State was clear that she expects an update from Ofcom on next steps as soon as possible and expects Ofcom to use the full legal powers that Parliament has given it to investigate and take the action that is needed. If companies are found to have broken the law, Parliament has given Ofcom significant enforcement measures. These include the power to issue fines of up to 10% of a company’s qualifying worldwide revenue and, in the most serious cases, Ofcom can apply for a court order to impose serious business disruption measures. These are all tools at Ofcom’s disposal as it takes forward its investigations. On the question of whether Ofcom has the resources to investigate online safety, as I think I have mentioned in the House before, Ofcom has been given additional resources year on year to undertake its duties in respect of enforcing the Online Safety Act: that is, I think, £92 million, which is an uplift on previous years.

I come to the question of the Government’s participation in news channels and on X. We will keep our participation under review. We do not believe that withdrawing would solve the problems that we have seen. People get their news from sources such as X and it is important that they hear from a Government committed to protecting women and girls. It is important that they hear what we are doing and hear when we call out vile actions such as these. We think it is extremely important to continue to take action and continue to back Ofcom in the actions that it is taking in respect of this investigation, and in fact all of its investigations under the Online Safety Act.

The noble Lord asked whether it should be mandatory for AI developers to test whether their models can produce illegal material. Enabling AI developers to test for vulnerabilities in their models is essential for improving safeguards and ensuring that they are robust and future-proofed. At present, such testing is voluntary, but we have been clear that no option is off the table when it comes to protecting UK users, and we will act where evidence suggests that further action can be effective or necessary. We are keeping many of the areas that have been raised today under review and we are seeking further evidence. We are looking at what is happening in other jurisdictions and at what is happening here and we will continue to take action.

I also reflect on the point that the noble Lord made that the issues around enforcing illegal activity have nothing to do with free speech. These are entirely separate issues, and it is incredibly important to note that this is not about restricting free speech but about upholding the law and ensuring that the standards we expect offline are upheld online. Many tech companies are acting responsibly and making strong endeavours to comply with the Online Safety Act, and we welcome their engagement on that. We need to make sure that our legislation and our enforcement are kept up to date with the great strides in technology that are happening. This means that, in some cases, we will be looking at the real-life impact and taking measures where new issues arise. That is the track record that we have shown, and that is what we will continue to do.