Question to the Home Office:
To ask the Secretary of State for the Home Department, if she will implement safeguards to tackle crimes being reported online to open-source AI services.
The Government has already taken steps to tackle crimes linked to the misuse of artificial intelligence, including open-source models, through the illegal content duties in the Online Safety Act 2023 and criminal measures targeting the creation of sexually explicit deepfake images in the Data (Use and Access) Act 2025.
The Home Department has also tabled an amendment to the Crime and Policing Bill to introduce a statutory defence for AI testers working to ensure that AI models do not create child sexual abuse material, non-consensual intimate imagery or extreme pornography when prompted. This defence will help the AI industry to test its models robustly and to implement safeguards so that those models cannot be used to create this appalling material.
Presently, there is no national capability for reporting crime online through open-source AI models. Details of a crime submitted to an open-source AI model would not be passed to the police. Members of the public who wish to report a crime online must access their local force's website and submit the details via the online form provided there. Some local forces use AI chatbots as an initial contact channel for the public; however, should details of a crime be submitted, the user will be directed to the local force's online crime reporting page.