To ask His Majesty’s Government what assessment they have made of the Internet Watch Foundation’s Annual Data and Insights Report 2024, published on 23 April, particularly with regard to child sexual abuse material generated by artificial intelligence.
My Lords, I welcome my noble friend Lady Berger to her first Oral Question and thank her for raising such an important issue facing us today. The Internet Watch Foundation’s annual report highlights a harrowing increase in the amount of AI-generated child sexual abuse material online. The scale is shocking, with over 424,000 reports in 2024 suspected to contain child sexual abuse imagery. The Government are deeply committed to tackling this crisis through the Online Safety Act and are specifically targeting AI CSAM threats in the Crime and Policing Bill. I pay tribute to the work of the IWF, which has been vital in helping us to identify and block such content.
My Lords, I thank the Minister for her reply. As she alluded to, the Internet Watch Foundation’s report points to hundreds of thousands of reports during the 2024 period. It is a record-breaking number, driven partly by a number of new threats, including AI-generated child sexual abuse, sextortion and the malicious sharing of sexual imagery. The IWF says that under-18s are now facing a “crisis” of sexual exploitation and risk online. I heard what the Minister said and ask her what the Government intend to do now to protect children in the UK and around the world, so that when the 2025 report comes out next year we see a significant reduction in the number of these crimes.
My Lords, through the Crime and Policing Bill, the Government will introduce a new suite of measures to tackle the growing threat of AI. This includes criminalising AI models made or adapted to generate child sexual abuse imagery and extending the existing paedophile manuals offence to cover AI-generated child sexual abuse material. In addition, the Home Office will bolster the network of undercover online police officers to target online offenders and develop cutting-edge AI tools and other new capabilities to infiltrate live streams and chat rooms where children are groomed. The Home Office is also developing options at pace on potential device operating system-level safety controls to prevent online exploitation and abuse of children. It is vital, too, that we tackle the widespread sharing of self-generated indecent imagery. The report shows that 91% of the images are self-generated. These are young people who are being groomed and who often quite innocently share material, not realising the purpose for which it will be used. This is a huge and pressing issue, and my noble friend is quite right that we need to take action now to tackle this scourge.
My Lords, it is clear that, with the constant evolution of technology, we risk not being able to legislate rapidly enough to keep pace. How are the Government conducting their horizon scanning to ensure that we are always one step ahead of those who seek to abuse children in this way?
The noble Baroness is quite right that we have to keep our legislation up to date with the technology, and of course we are endeavouring to do that. I should say that UK law applies to AI-generated CSAM in the same way as to real child sexual abuse. Creating, possessing or distributing any child sexual abuse images, including those generated by AI, is illegal. Generative AI child sexual abuse imagery is priority illegal content under the Online Safety Act in the same way as real content. However, she is quite right: we have to keep abreast of the technology. We are working at pace across government to make sure that we have the capacity to do that.
My Lords, the Children’s Commissioner, Dame Rachel de Souza, and the IWF have both called for a total ban on apps which allow nudification, where photos of real people are edited by AI to make them appear naked. The commissioner has been particularly critical of the fact that such apps
“go unchecked with extreme real-world consequences”.
Will the Government act and ban these AI-enabled tools outright?
I thank the noble Lord for that question. The Government are actively looking at options to address nudification tools, and we hope to provide an update shortly. It is a matter that we take seriously. If such tools are used to create child sexual abuse material, UK law is clear that creating, possessing or distributing child sexual abuse images, including those generated using nudification tools, is already illegal, regardless of whether it depicts a real child or not.
My Lords, the Minister mentioned that a rather high percentage of the material was generated by young people themselves, without being aware of the implications. What is she doing with the Department for Education to ensure that the risks and challenges of unsafe online behaviour are highlighted to children through schools?
The noble Baroness makes a really important point about media literacy. It is an issue that my department takes very seriously and one for which Ofcom also has a statutory responsibility, but she is right that schools have an essential part to play. The curriculum review currently taking place has identified the need to build children’s resilience, to give them the tools to distinguish safe from unsafe behaviour online and to scrutinise the posts they see in a more informed way. That work is ongoing, and the Department for Education’s interim report has identified it as a priority.
My Lords, the rapidly increasing number of AI-generated images in circulation that depict child sexual abuse is deeply disturbing. The creation of such imagery uses the faces or bodies of real children, and much of it falls into category A material, depicting abuse of the most extreme kind. Will the Minister explain what the Government’s plans are to crack down on those who share information specifically on how to use text-to-image-based generative AI tools, a practice which leads to the creation of this appalling material?
My Lords, we are already taking steps to deal with this. We are committed to making sure that our laws tackle child sexual abuse material and keep pace with technological developments. In the Crime and Policing Bill, we have introduced an offence to criminalise AI models that have been optimised to create child sexual abuse material. We have also introduced an offence to criminalise those who maintain or moderate websites used to share child sexual abuse imagery, whether real or fake, as the noble Lord says. In the Data (Use and Access) Bill, we have updated the existing law that criminalises paedophile manuals to cover artificially generated CSAM. So there are a number of steps that we are already taking within the current legislative programme to deal with these offences.
My Lords, a number of concerns have been raised about Ofcom’s recently released draft illegal content codes of practice. Can my noble friend the Minister say what plans the Government have to monitor the effectiveness of those codes of practice?
It is important to recognise that the measures that Ofcom has set out in the illegal content codes of practice and, last week, in the child safety codes of practice are a landmark change to protect users online. They mark the first time that platforms in the UK are legally required to tackle illegal content and content that is harmful to children. Section 47 of the Online Safety Act requires Ofcom to keep these under review. Additionally, Section 178 requires the Secretary of State to review the effectiveness of the regime two to five years after the legislation comes into force. The report on the outcome of that review must be laid before Parliament. I stress to my noble friend that the Act is not the end of the conversation; it is the foundation. We continue to look at how we can develop the legislation and how Ofcom can strengthen the codes in its own way. We are listening and debating, and we will not hesitate to take further action if it proves to be necessary.
My Lords, as the wording of my noble friend Lady Berger’s original Question and her supplementary question rightly emphasises, the report pinpoints AI-generated child sexual abuse images as a growing area of concern. Many of them were indistinguishable from real photographs, with the IWF suggesting that their growing number risks re-victimising previous victims of sexual abuse. Over 70% of AI-generated sexual abuse images are hosted on servers in Russia, Japan, the United States and the Netherlands. What is being done to solve the jurisdictional issues that allow perpetrators and disseminators of this appalling abuse to act with impunity?
My noble friend raises a really important point, but I stress that if a service, including file-sharing and storage services, poses a material risk to users in the United Kingdom, it must abide by the Online Safety Act and the illegal content duties, no matter where it is based. Ofcom has recognised the importance of tackling this issue specifically and has identified it as an early priority for enforcement, opening a programme to assess the measures being implemented by file-sharing and file-storage services to prevent those services being used to share this material. My noble friend is right that a lot of these incidents are happening on an international basis. We are working with our colleagues internationally to make sure that we share information and determine the source of some of these materials, because sometimes we need to take action on an international basis.