My Lords, we on the Liberal Democrat Benches welcome the Secretary of State’s Statement, as well as her commitment to bring the new offence of creating or requesting non-consensual intimate images into force and to make it a priority offence. However, why has it taken this specific crisis with Grok and X to spur such urgency? The Government have had the power for months to commence this offence, so why have they waited until women and children were victimised on an industrial scale?
My Commons colleagues have called for the National Crime Agency to launch an urgent criminal investigation into X for facilitating the creation and distribution of this vile and abusive deepfake imagery. The Secretary of State is right to call X’s decision to put the creation of these images behind a paywall insulting; indeed, it is the monetisation of abuse. We welcome Ofcom’s formal investigation into sexualised imagery generated by Grok and shared on X. However, will the Minister confirm that individuals creating and sharing this content will also face criminal investigation by the police? Does the Minister not find it strange that the Prime Minister needs to be reassured that X, which is used by many parliamentarians and government departments, will comply with UK law?
While we welcome the move to criminalise nudification apps in the Crime and Policing Bill, we are still waiting for the substantive AI Bill promised in the manifesto. The Grok incident proves that voluntary agreements are not enough. I had to take a slightly deep breath when I listened to what the noble Viscount, Lord Camrose, had to say. Who knew that the Conservative Party was in favour of AI regulation? Will the Government commit to a comprehensive, risk-based regulatory framework, with mandatory safety testing, for high-risk models before they are released to the public, of the kind that we have been calling for on these Benches for some time? We need risk-proportionate, mandatory standards, not voluntary commitments that can be abandoned overnight.
Will the Government mandate the adoption of hashing technology that would make the removal of non-consensual images possible, as proposed by the noble Baroness, Lady Owen of Alderley Edge, in Committee on the Crime and Policing Bill—I am pleased to see that the noble Lord, Lord Hanson, is in his place—and as advocated by StopNCII.org?
The Secretary of State mentioned her commitment to the safety of children, yet she has previously resisted our calls to raise the digital age of consent to 16, in line with European standards. If the Government truly want to stop companies profiteering from children’s attention and data, why will they not adopt this evidence-based intervention?
To be absolutely clear, the creation and distribution of non-consensual intimate images has nothing whatever to do with free speech. These are serious criminal offences. There is no free speech right to sexually abuse women and children, whether offline or online. Any attempt to frame this as an issue of freedom of expression is a cynical distortion designed to shield platforms from their legal responsibilities.
Does the Minister have full confidence that Ofcom has the resources and resolve to take on these global tech giants, especially now that it is beginning to ramp up the use of its investigation and enforcement powers? Will the Government ensure that Ofcom uses the full range of enforcement powers available to it? If X continues to refuse compliance, will Ofcom deploy the business disruption measures under Part 7, Chapter 6 of the Online Safety Act? Will it seek service restriction orders under Sections 144 and 145 to require payment service providers and advertisers to withdraw their services from the non-compliant platform? The public expect swift and decisive action, not a drawn-out investigation while the abuse continues. Ofcom must use every tool Parliament has given it.
Finally, if the Government believe that X is a platform facilitating illegal content at scale, why do they continue to prioritise it for official communications? Is it not time for the Government to lead by example and reduce their dependence on a platform that seems ideologically opposed to the values of decency and even perhaps the UK rule of law, especially now that we know that the Government have withdrawn their claim that 10.8 million families use X as their main news source?
AI technologies are developing at an exponential rate. Clarity on regulation is needed urgently by developers, adopters and, most importantly, the women and children who deserve protection. The tech sector can be a force for enormous good, but only when it operates within comprehensive, risk-proportionate regulatory frameworks that put safety first. We on these Benches will support robust action to ensure that that happens.
The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology (Baroness Lloyd of Effra) (Lab)
I thank both noble Lords for their contributions to the debate. We all agree that the circulation of these vile, non-consensual deepfakes has been shocking. Sexually manipulating images of women and children is despicable and abhorrent. The law is clear: sharing or threatening to share a deepfake intimate image without consent, including images of people in their underwear, is a criminal offence. To the noble Lord’s point, individuals who share non-consensual sexual deepfakes should expect to face the full extent of the law. In addition, under the Online Safety Act, services have duties to prevent and swiftly remove the content. If someone has had non-consensual intimate images of themselves created or shared, they should report it to the police, as these are serious criminal offences.
I turn to some of the points that have been raised so far. The Government have been very clear on their approach in terms of both the AI action plan and the legislation that we have brought forward. We have introduced a range of new AI-related measures in this Session to tackle illegal activity; we have introduced a new criminal offence to make it illegal to create or alter an AI model to create CSAM; we are banning nudification apps; and we are introducing a new legal defence so that selected experts can safely and securely test models for vulnerabilities relating to CSAM, non-consensual intimate images and extreme pornography.
AI is a general-purpose technology with a wide range of applications, which is why we think that the vast majority of AI systems should be regulated at the point of use. In response to the AI action plan, the Government are committed to working with regulators to boost their capabilities. We will legislate where needed and where we see evidence of the gaps. Our track record so far has shown that that is what we do, but we will not speculate, as ever, on legislation ahead of future parliamentary Sessions.
I come to the question of Ofcom enforcement action. On Ofcom’s investigation process, the Secretary of State was clear that she expects an update from Ofcom on next steps as soon as possible and expects Ofcom to use the full legal powers that Parliament has given it to investigate and take the action that is needed. If companies are found to have broken the law, Parliament has given Ofcom significant enforcement measures. These include the power to issue fines of up to 10% of a company’s qualifying worldwide revenue and, in the most serious cases, Ofcom can apply for a court order to impose serious business disruption measures. These are all tools at Ofcom’s disposal as it takes forward its investigations. On the question of whether Ofcom has the resources to investigate online safety, as I think I have mentioned in the House before, Ofcom has been given additional resources year on year to undertake its duties in respect of enforcing the Online Safety Act: that is, I think, £92 million, which is an uplift on previous years.
I come to the question of the Government’s participation in news channels and on X. We will keep our participation under review. We do not believe that withdrawing would solve the problems that we have seen. People get their news from sources such as X and it is important that they hear from a Government committed to protecting women and girls. It is important that they hear what we are doing and hear when we call out vile actions such as these. We think it is extremely important to continue to take action and continue to back Ofcom in the actions that it is taking in respect of this investigation, and in fact all of its investigations under the Online Safety Act.
The noble Lord asked whether it should be mandatory for AI developers to test whether their models can produce illegal material. Enabling AI developers to test for vulnerabilities in their models is essential for improving safeguards and ensuring that they are robust and future-proofed. At present, such testing is voluntary, but we have been clear that no option is off the table when it comes to protecting UK users, and we will act where evidence suggests that further action can be effective or necessary. We are keeping many of the areas that have been raised today under review and we are seeking further evidence. We are looking at what is happening in other jurisdictions and at what is happening here and we will continue to take action.
I also reflect on the point that the noble Lord made that the issues around enforcing illegal activity are nothing to do with free speech. These are entirely separate issues and it is incredibly important to note that this is not about restricting free speech, but about upholding the law and ensuring that the standards that we expect offline are upheld online. Many tech companies are acting responsibly and making strong endeavours to comply with the Online Safety Act, and we welcome their engagement on that. We need to make sure that our legislation and our enforcement are kept up to date with the great strides in technology that are happening. This means that, in some cases, we will be looking at the real-life impact and taking measures where new issues arise. That is the track record that we have shown and that is what we will continue to do.
My Lords, from any reasonable reading of the Online Safety Act, X either failed completely to carry out a risk assessment in relation to the potential of its Grok AI tool to create harmful content or, if it did so, it did it in such a totally incompetent way that it might as well not have bothered.
I think that this Government are doing exactly the right thing and that we have given Ofcom the powers. I would like to hear from Ofcom as soon as possible. “As soon as possible” would be good but, in parliamentary terms, it can be rather a stretch, so we do need a deadline for when we are going to hear from Ofcom on how quickly it is going to do this.
Nobody has referred yet to the victims of this activity. What help and support can we give to those who are being attacked in this way? What advice is being given to them and by whom, so that they can be effectively supported when these devastating images are created?
Baroness Lloyd of Effra (Lab)
My noble friend is right to raise the extremely important point about victim support and the impact that this has on people. We have seen testimonies and reports of how devastating, degrading and humiliating this experience can be. The Revenge Porn Helpline is doing fantastic work in providing specialist support and help with getting images removed from the internet, and I commend it for that activity.
On the question of the investigation process, the Secretary of State has been clear that she expects an update from Ofcom on next steps as soon as possible and that she expects Ofcom to use its full legal powers. We hope that that update will come as soon as possible.
My Lords, I declare an interest as I am receiving pro bono legal advice on NCII from Mishcon de Reya. I am delighted that the Government are finally enforcing the law that noble Lords in this House pushed so strongly for in the passage of the data Bill, criminalising the non-consensual creation and requesting of intimate images. However, I cannot help but feel frustrated that, along with survivors, I have been asking the Government to enforce this law since it achieved Royal Assent last June. I hope it is now clear to the Government that we cannot afford any similar delays. With this in mind, will the Minister commit to looking again at my amendments to the Crime and Policing Bill that just last month the Government would not accept, which would implement a 48-hour time limit for the takedown of NCII material and a hash registry? Will she also join me in thanking the survivors and campaigners, who have fought for so long for this law, for their bravery and perseverance?
Baroness Lloyd of Effra (Lab)
I thank the noble Baroness for her remarks, and for her expertise and input over the course of many years in this area. On the take-down time, we are looking at the experience in other jurisdictions, as I mentioned, as well as at the timelines implemented in this country, which is something that Ofcom will examine. We will consider both the scope and the speed of those regimes. As I think noble Lords have seen, we will look at measures and, if we believe that they are effective and speak to the harms that we are seeing, we will take action.
My Lords, I was in the other place when the Secretary of State made her Statement. I commend her for the strength of her words, but we are beyond words now. We are living in a country where any woman or child can be stripped to a bikini and turned into abuse material, as the price and entry point of being online. I do not accept the Government’s defence. There are many ways to communicate with the electorate, and to choose a company that monetises the humiliation and degradation of women and girls as part of its business proposition is to demonstrate that this is business as usual. It is not action for change.
I also disagree very strongly with the Minister: it has not been shocking. We have had the amendments that the noble Baroness referred to—most of those came from Members of this House, including the AI CSAM amendment. In the last few weeks, the Government have pushed back on the amendments to the Crime and Policing Bill and, before that, to the data Bill. We have amendments on these issues. We foresaw it and, to be honest, we foresaw it in the Online Safety Act, so even on the other side this is not a shock.
I ask the Minister now to commit to placing the violence against women and girls guidance on a statutory footing, accepting amendments on chatbots and LLM risk assessments, and making a move with Ofcom to say that companies are not required only to do a risk assessment; they must also mitigate, as a mandatory duty, the very risks that they find. We must not legitimise a platform which sows division, degrades women and sexually humiliates children.
Baroness Lloyd of Effra (Lab)
I thank the noble Baroness for her points and for the expertise that she brings to the House. I should have mentioned that I commend all those who have been speaking up from a position of experience. It is a very difficult thing to do, and it brings a unique perspective to the debate.
I spoke before about the Government withdrawing from using these platforms; we do not think that would be effective. We understand why people feel strongly about it. It is something that we keep under review.
The noble Baroness raised a number of other important issues. We are monitoring how Ofcom’s code on violence against women is being implemented. We think it is very important. I will discuss the many other areas she raised with my colleague who is taking that Bill through and, indeed, with the noble Baroness outside the House if that would be of interest.
Lord Pack (LD)
My Lords, last week the Government stated in this House that 10.8 million families use X as their main news source, which obviously would be many more people in total, but Ofcom’s data shows that only 3% cite X as their primary news source, which is under 2 million people—such a small number, in fact, that it is smaller than the number of people who believe that the moon landings were faked. Is it not time for the Government to rethink their approach to X, and, in particular, to rethink the Home Office’s published social media policy, which positively prioritises and encourages people to use X? Is it not time to start discouraging rather than continuing to encourage it?
Baroness Lloyd of Effra (Lab)
I have responded to this question before. I understand why people feel strongly about it. As I mentioned, the Government keep participation under review, but it is important that we can communicate with people wherever they get their news from. We have things to say about our violence against women and girls strategy, about what is acceptable in terms of social media, and on many other topics. It is important that we reach all people.
The US Under-Secretary of State for Public Diplomacy, Sarah B Rogers, an appointee of President Trump, said in an interview that was broadcast on GB News in the early hours of this morning that if the UK Government were to ban X, nothing was off the table, in what were clearly threatening remarks. She said that the political valence of the British Government is antagonistic to that of X. Given what we are talking about, one would really hope so. Will the Minister confirm that the British Government will act in the interests of the well-being of the British public and the country, stand up to such threats to democracy and not allow themselves to be bullied by the Trump Administration?
Baroness Lloyd of Effra (Lab)
The Government’s motivation is to take action to protect users in the United Kingdom and to support Ofcom in implementing UK law. We have been very clear that Ofcom has our full backing in enforcing compliance with the Online Safety Act, that we have given it tools that it can use and that, as the Secretary of State and others have made clear, it has our support in using those tools. I hope that clarifies our motivations in these areas.
Lord Wigley (PC)
My Lords, the Minister will have gathered that all parts of this House feel very strongly indeed on this matter. It is quite outrageous that people’s bodies should be used in that way. I pick up one point that was mentioned on the Front Bench: surely, in extremis, there should be custodial sentences available.
Baroness Lloyd of Effra (Lab)
These are serious offences and the noble Lord is right that there is consensus on this. The decisions on prosecution and sentencing are for the police and courts. They should know that, as we have said, they have our support in taking that action.
My Lords, I welcome the Government’s announcement that they are bringing legislation into force this week to tackle this issue, and I welcome the news that Ofcom has launched a formal investigation to determine whether X has complied with its duties under the Online Safety Act. Ofcom should act urgently on this. I also support the Government’s intention to act on gaps identified in our online safety legislation, such as the fact that not all chatbots are covered.
I commend my noble friend Lady Owen of Alderley Edge for her work on banning deepfakes through amendments to the Data (Use and Access) Bill, and her excellent continuing work on this issue. There is an issue about the lack of transparency in how chatbots such as Grok are trained. As I understand it, if an image or multimodal model can generate non-consensual sexual imagery or deepfake pornography, it is certain that the model was trained on large, uncurated web scrapes where such material is common. Does my noble friend the Minister agree that this gives new impetus to the Government tackling the issue of transparency in the training of AI models, which is a matter that we on the Communications and Digital Select Committee are examining in relation to the transparency needed to deal with issues of AI and copyright? This is a new and very pressing part of that issue.
Baroness Lloyd of Effra (Lab)
My noble friend raises good questions about training and testing. As she will also know, we are bringing forward measures in the Crime and Policing Bill that will allow testing in certain narrow circumstances, so that developers can make sure that the models they bring forward are not able to generate these kinds of awful images or CSAM. These are very important things and we are working very carefully with others to find the right regime for these models.
My Lords, I welcome moves by the Government on this issue. I came off X last September and there is wider debate to be had about that site. Given that we know that the use of AI tools to harm women will only accelerate—recent research has found thousands of nudification apps available—I repeat my question from earlier this week: what more will the Government do to create a robust framework so that AI will be used responsibly in the whole landscape of misogyny and abuse?
In relation to Ofcom, I heard what the Minister said about increased funding year on year, but why therefore does it seem that Ofcom does not have teeth?
Baroness Lloyd of Effra (Lab)
I thank the right reverend Prelate for her comments. In terms of Ofcom’s enforcement powers, it has imposed four financial penalties under the Online Safety Act, including one of over £1 million. From the Government’s point of view, we are clear that it should be confident that it has our backing to use the powers that Parliament gave it, and we are resourcing it with the additional funding that we have provided. We believe that that is sufficient, and we will see from its updates on its online safety activities exactly what it is doing; that is part of its accountability to Parliament and the Government.
In terms of other things that we are taking forward, noble Lords will know that we are legislating in the Crime and Policing Bill to criminalise nudification tools. That offence will target tools that are specifically designed to generate non-consensual intimate images and make it illegal for companies to supply those tools.
Baroness Shawcross-Wolfson (Con)
I was very glad to hear the Minister say that she believes in upholding offline standards online, and I hope the Government will consider the amendments from the noble Baroness, Lady Bertin, to the Crime and Policing Bill, and try to regulate online pornography as they do offline pornography. Can the Minister clarify whether the Government’s very welcome ban on nudification apps is going to apply to all apps with this capability, or only to single-purpose apps? There have been some worrying reports today that only single-purpose apps will be covered, and it is very easy to see how developers could add secondary functions to circumvent the law.
Baroness Lloyd of Effra (Lab)
The proposed offence will target tools which are specifically designed to generate non-consensual intimate images. General purpose AI tools which are not designed solely or principally to generate non-consensual images will not be included; this is for those that are designed specifically for that purpose.
My Lords, I hope the whole House welcomes the Secretary of State’s Statement. Can my noble friend say whether it is thought that the social media platform X understands the revulsion caused by its AI Grok tool? I ask this question of my noble friend because the reported comments of the founder of X certainly suggest that he does not, and some aspects of the initial reaction by the company, such as saying it is acceptable as long as it is paid for, suggest that it simply has no idea about the strength of the public reaction to this.
Baroness Lloyd of Effra (Lab)
I cannot speak for others but, from our perspective, it is clear under the Online Safety Act what illegal content is, and what the child safety duties are. Operating in the United Kingdom means abiding by those; it means doing the risk assessments, taking swift action against priority offences, and abiding by all of the regime in place here in the UK.
The Earl of Effingham (Con)
My Lords, this is all happening on social media, so does the Minister agree with the largest union representing teachers in the UK on banning social media for under-16s?
Baroness Lloyd of Effra (Lab)
There are strong views about access to social media for under-16s, and we understand that it is an area of concern for many, especially parents. We are keeping evidence on the impact of social media on children under review. While a ban is not our current policy, we are closely monitoring what is happening in Australia and looking carefully at the evidence. We have already taken some of the boldest steps to protect children with the Online Safety Act, and we are listening to views, for example from the NSPCC and others. These include concerns that setting age limits might leave people unprepared for the digital world; there is also a responsibility around media literacy and ensuring that people can operate safely and securely in this new digital world.