Viscount Colville of Culross
My Lords, I will speak to Amendments 422D and 433 to 437. I fully support the noble Baroness, Lady Kidron. Her arguments have been entirely backed up by the release only today of the report entitled Invisible No More: How AI Chatbots Are Reshaping Violence Against Women and Girls by Durham University and Swansea University. The research identifies the range of design choices and failures in safety mechanisms that enable, encourage, simulate and normalise violence against women and girls. The report found that fantasies of incest and rape were normalised, and one chatbot, Chub AI, suggested violent rape and domestic abuse as categories.
I reiterate the concerns of the noble Baroness, Lady Kidron, about the long and bureaucratic path to business disruption measures, which means that harm continues to be perpetuated because our system is not agile enough to tackle these rapidly evolving issues. I wish to pay tribute to Professor Clare McGlynn KC for her work co-authoring this ground-breaking report, and I emphasise the warning she made in today’s Times newspaper. She said:
“Chatbot violence against women represents a rapidly escalating threat. Without early intervention, these harms risk becoming entrenched and scaling quickly, mirroring what happened with deepfake and nudify apps, where early warnings were largely ignored. We must not make the same mistakes again”.
Professor McGlynn and the noble Baroness, Lady Kidron, once again demonstrate their ability to warn against these emerging harms, and I sincerely hope that noble Lords will back the noble Baroness should she wish to divide the House today.
My Lords, I support Amendment 422D and the consequential Amendments 434 to 437, to which I have added my name. In Amendment 429B the Government have gone a long way towards responding to concerns over AI-generated harms, but this amendment, as the noble Baroness, Lady Kidron, has said, gives enormous powers to the Secretary of State to decide how AI-generated services are controlled in this country. The Minister knows there is concern across the House about exposing this central part of the new tech economy to what are effectively unfettered ministerial powers. Very few noble Lords want to support a skeleton amendment like this.
Government Amendment 429B gives the Secretary of State the right to amend, which is defined later as including the right to
“repeal and apply (with or without modifications)”.
This applies to all the illegal content duties in Part 3 of the Online Safety Act in relation to AI services. Parliament will not even have an option to amend regulations on this issue. Proposed new subsection (1) in this amendment seems like a big deal to me, and noble Lords should be very concerned. The intention seems to be that the basis of the existing regime in Part 3 will be used, but we do not know how the Secretary of State will decide to adapt that regime to fit the particularities of AI services that generate illegal content. As the noble Baroness, Lady Kidron, pointed out, that goes a long way beyond AI services designed to mimic humans and human conversations, which is what chatbots are. If a subsequently elected Government are in thrall to the tech companies, how might they abuse this power?
During the passage of the Online Safety Act, noble Lords spent time and energy defining both a “search service” and a “user-to-user service”, and their responsibility for both designing out and mitigating illegal harms. It seems extraordinary not to have the details of the new services on the face of the legislation. The definition of “AI” in new subsection (17) is oddly uninformative. It simply says:
“‘AI’ is short for artificial intelligence”.
I think we all know that. That does not give us much of a clue about which technology it covers. By contrast, I draw your Lordships’ attention to Article 3(1) of the EU’s Artificial Intelligence Act, which sets out a carefully thought through definition of an AI system:
“‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that … infers, from the input it receives, how to generate outputs such as predictions … or decisions that can influence physical or virtual environments”.
The unclear nature of the AI definition in the amendment is compounded by new subsection (10), which allows the definition in the provision to be changed and expanded. Once again, Parliament will not be able to amend any regulations derived from this power.
The biggest concern about the amendment is that, although it covers illegal content, it does not cover content that is harmful to children. As a result, I completely support my noble friend Lady Kidron’s Amendment 422D, and its consequential amendments, which would assuage many of my concerns about the scope and power given to Ministers at the expense of Parliament. I also urge noble Lords to vote against government Amendment 429B when it comes up later in the evening.
I also say to the Minister that regulating the wide definition of “AI” covered in Amendment 429B is important. It needs to be brought back as part of wider artificial intelligence legislation. I hope that he can reassure noble Lords that we will hear more about this in the King’s Speech.
My Lords, I support all the amendments in the name of the noble Baroness, Lady Kidron. I will speak to Amendment 433. Worryingly, children are increasingly turning to AI chatbots for all facets of their everyday lives. For many, gone are the days of independent, creative or critical thinking. While chatbots can help children to explore and better understand their world, there are far too many shocking cases of children receiving harmful information and becoming emotionally dependent on these platforms.
As it stands, AI chatbots risk becoming the latest example of an online product that has been rolled out without the right safety guardrails in place, and children are bearing the brunt. It is as if their well-being and mental health are not important. I can hear the AI developers thinking among themselves: “Who cares? It’s only children”. Well, we should care. Childline is hearing more and more from children who are being harmed on these platforms, with cases of false mental health diagnoses, information on how to restrict diets, and the formation of emotional relationships between children and chatbots. In increasingly concerning cases, children who have experienced abuse are told by chatbots that what they experienced was not abuse. These platforms cannot be allowed to give children harmful and misleading safeguarding advice that could prevent them speaking to trusted adults or organisations such as Childline.
The Government’s action to expand the scope of the Online Safety Act to cover illegal content created by chatbots is most welcome, and I thank them for it. However, they cannot stop there. The harmful content that chatbots can generate must be included too. This must cover the harmful content duties in the Online Safety Act, such as preventing the encouragement of self-harm or suicide, and all harms that are unique to AI chatbots. It means preventing chatbots from misleading or manipulating children or mimicking human relationships.
The amendment from the noble Baroness, Lady Kidron, would make it a criminal offence to develop or supply an AI chatbot that harmed children. It is as simple as that. Providers must be legally required to risk-assess their services and put effective safeguards in place. Morally, this is the right thing to do. I ask the Minister: if the Government decide that they do not wish to support this amendment, please can they set out today how they will deliver measures that comprehensively protect children from all the risks that these services pose? As I keep saying, and will say one more time, childhood lasts a lifetime. If we truly care, we need to ensure that children are protected from every single type of harm. I look forward to the Minister’s response.