Crime and Policing Bill Debate

Department: Home Office


Baroness Kidron Excerpts
Wednesday 18th March 2026


Lords Chamber
Moved by
422D: After Clause 207, insert the following new Clause—
“AI chatbots: content promoting terrorist and national security offences(1) It is an offence to create, supply, or otherwise make available an AI chatbot which produces content specified in subsection (2).(2) Content is covered by this section if it is content which--(a) produces language promoting, or tactics or target selection for, terrorist offences or real world violence,(b) threatens national security, or(c) encourages activity which threatens public safety.(3) It is an offence to create, supply, or otherwise make available an AI chatbot which has not been risk assessed for the possibility of producing content specified in subsection (2).(4) Where a provider of a chatbot identifies a risk of the chatbot producing content of the kind set out in subsection (2), it is an offence for a provider of a chatbot not to take steps to mitigate or manage those risks before making the chatbot publicly available.(5) A person who commits an offence under this section is liable—(a) on summary conviction, to imprisonment for a term not exceeding the general limit in a magistrates’ court or a fine (or both);(b) on conviction on indictment, to imprisonment for a term not exceeding 5 years or a fine (or both).(6) For the purposes of this Act an “AI chatbot” is a generative AI system, including a deep or large language model, able to generate text, images and other content based on the data on which it was trained, and which has been designed to respond to user commands in a way that mimics a human, or engage in conversations with a user that mimic human conversations.” Member’s explanatory statement
This amendment, drawing on conclusions in reports by the Centre for Countering Digital Hate, seeks to make it an offence to supply a chatbot which creates content or provides tactics that would result in terrorist offences or threats to national security, or supply a chatbot which has not properly been risk assessed. It is part of a set of amendments related to AI chatbot offences in Baroness Kidron’s name.
Baroness Kidron (CB)

My Lords, I will speak to all the amendments in this group in my name and those of the noble Lords, Lord Stevenson and Lord Clement-Jones, the noble Viscount, Lord Colville, and the noble Baroness, Lady Morgan.

I will first speak briefly to government Amendment 429B, which will give a power to the Secretary of State to bring forward regulations that could, in the future and at the discretion of the Secretary of State, ensure that chatbots are covered by the Online Safety Act. However, that very broad power is not matched by substance. The amendment does not define a chatbot or deal with the critical fact that, when a child is entrapped by a chatbot, there is nowhere to turn. Currently, the regulator has no duty to deal with individual complaints and the police do not recognise a chatbot as a person, meaning that there is no perpetrator to pursue.

The amendment also fails to address harms to children. In fact, it explicitly deals only with “illegal” harms. It does not deal with the coercive elements of control or the willingness of chatbots to plan many crimes, in addition to the crimes themselves. The government amendment also has nothing to say about enforcement. Taken together, it simply adds new duties to a system that is already understood to be lacking in speed and effective enforcement.

This lack of substance is compounded by a lack of clarity about scope. The amendment’s wording refers to an

“internet service that is capable (or part of which is capable) of generating AI-generated content”.

This is so broad that both Amendment 209, of two weeks ago, and Amendment 441A in this group would be entirely unnecessary. Yet, during our meetings on this issue, officials have been absolutely clear that although the scope is currently drafted as widely as possible, the intention is to arrive at a narrower definition as part of the process of creating secondary legislation. They could not guarantee that gen AI or search would be covered in any final measures. In short, it creates powers but offers no promise of protection.

I would rather have worked with the Government on this issue to make watertight provisions. Indeed, I have made that offer directly to the Secretary of State. We are in the foothills of a crisis. The government amendment offers too little clarity or certainty, so we are left with an amendment that is limitless in wording but uncertain in application and with a timeline that simply does not meet this moment.

On Thursday 5 March, Megan Garcia and her husband came to Parliament to talk about the loss of their son, Sewell. Members from both Houses were moved by the story of a much-loved and high-achieving child who was captured by a chatbot, coerced, bullied and, finally, encouraged to commit suicide. His death resulted in the chatbot, character.ai, becoming age-gated to users over 18, but there are many more chatbots to take its place that are not restricted in the same way. As this issue is getting more public notice, is in the newspapers daily and is talked about in the online world, sadly, my inbox is filling with cases that involve similar coercion, sexual content, dangerous medical advice and chatbots that support illegal activity.

On Friday last week, the Centre for Countering Digital Hate published a report that showed that eight out of 10 chatbots it tested were willing to help rehearse attacks, offer tactical advice and identify potential sites for US shooters. Scenarios included a school shooting and an attack on a synagogue. Whether in the UK or elsewhere, the capability is the same and the risk is real. A chatbot that organises an attack, while wishing its user “happy (and safe) shooting!”, is no less likely to help place a bomb, organise a knife attack or any other such violent act. This is not a description of a dystopian future; these chatbots are already on the market, widely used by both adults and children—ChatGPT, Gemini and Replika, among others.

Only on Monday, just two days ago, I was contacted by someone about Alexa+, which is widely anticipated to be launched very soon in the UK and is already available in the US. In the tranche of messages, there were messages about emotional dependence in very young children and stories of inappropriate content. One exchange on Reddit, from which I have redacted the name of the child, said:

“I plugged our Alexa in to ask it to help me with cooking a sweet potato”.

Then, her daughter asked it

“to tell her a silly story so it did”.

Then, her daughter

“asked it if she could tell it a story. It said yes … and then mid story interrupted her and asked her what she was wearing and if it could see her pants”.

I could not find a reliable statistic for how many households in the UK have Alexa, nor is it clear whether Alexa+ will be a choice for consumers or simply rolled out as an upgrade, but the statistics I found suggested that between a third and two-thirds of UK households have Alexa. The material I was sent repeatedly alluded to the fact that the new service was active in people’s houses or their children’s bedrooms without their knowledge or consent.

We have chatbots that coerce children into suicide, plan violent acts, build abusive relationships and have the capacity to be active in tens of millions of households. Taking a power, having another consultation and bringing forward regulation over which Parliament has no oversight is not action; it is kicking the problem down the road.

--- Later in debate ---
I accept and understand that there is a clear choice here for the House and that there may be a Division. I think this House has a unified approach to wanting to stop the illegal chatbots that are causing the damage that Members have mentioned. It is unacceptable. The Prime Minister, the DSIT Secretary, the Home Secretary and I, on behalf of the Government, are saying that our amendments are a mechanism to make sure we get that right with regulation through consultation and that we bring forward proposals, which I remind the House it can reject. I suggest that the noble Baroness gives us that breathing space to ensure that we do that in a proper and effective way, and I urge the House to support the Government’s amendments. I ask the noble Baroness to withdraw hers.
Baroness Kidron (CB)

My Lords, I thank the Minister for his words and his roll-call of that incredible list of speakers who supported the amendments. That was a wonderful list of people from all sides of the House, who did indeed have slightly varying reasons to support the amendment, but they were all positive. I also thank the noble Lord, Lord Clement-Jones, and his Benches for their unequivocal support. I believe that the Opposition Benches are allowing a free vote this evening, and I really hope that they will use their free vote freely.

I will address a couple of details, just for the record. First, I say to the noble Lord, Lord Davies, that it is a binary, I am afraid, because either we have the Government’s amendment, which has no proper scope—it will be subject to all sorts of changes on the way—no oversight, no time limit and no scrutiny, or we have something that I have made very clear that I am willing to work with both sides of the House to perfect in the next few weeks.

Secondly, I say to the Minister that the Online Safety Act and the enforcement process we currently have have so far produced, by civil penalty, a single fine of £55,000. That is where we are, and there is nothing in this government amendment or the consultation about online safety that deals with the problem of enforcement.

Finally, on the points that were made, we are talking about one person in one department having absolute power to change absolutely everything that eight years of debate in this House, two years of consultation, et cetera, have put forward. I am sorry but that is just inappropriate.

We have a new technology—it addicts, grooms, abuses and sometimes even kills. This is not in the future; it is right now. These amendments have the support of 45 expert organisations, which I believe have written to all noble Lords. I ask noble Lords, irrespective of their party affiliation, to support children, families, the vulnerable, women and, indeed, all of us, by sending a message to the Government to say, “If you can’t accept this, come back with something, for now, that is better described, narrow and to the point, that we can enforce”. On that basis, I wish to test the opinion of the House.