AI Systems: Risks Debate

Thursday 8th January 2026


Grand Committee
Baroness Foster of Aghadrumsee (Non-affiliated - Life peer)

My Lords, I also want to congratulate the noble Lord, Lord Fairfax of Cameron, on securing this question for short debate on such a timely issue. AI is an incredible development for many reasons—R&D, innovation, economic growth, productivity, faster health diagnoses and many other areas. However, this must be balanced with a regulatory environment which allows and encourages all those positive things and provides safeguards against harms. We must be risk-aware, and I hope that the Minister will be able, in closing this debate, to set out where the Government are with their risk analysis and action plan to deal with those risks.

Two sorts of harm can occur with autonomous AI systems. The first is intentional harm, which I hope could be identified and regulated in a straightforward manner. It is the second type of harm—unintentional or reckless harm—which may be more difficult to detect and, therefore, to regulate. So-called superintelligent AI is the riskiest type of AI; as the MI5 director, Kenneth McCallum, has noted, it would be reckless to ignore it.

Serious harms from AI have already begun to materialise. Before Christmas, I asked the Education Minister in the House a question about the fact that toys with AI, such as speaking teddy bears, were unregulated and had the potential to be very dangerous indeed to very young children. If children interact with AI chatbots and toys instead of their parents, guardians and friends, that could lead to serious harms. There have been documented cases of health deterioration and tragic instances where young people have taken their lives after forming attachments to these systems.

Modern AI systems, I understand, are not built in a piece-by-piece fashion, like a machine, but grown. That means that no one, not even the initial AI developers, understands the AI they create. That is frightening indeed.

Geoffrey Hinton, the Nobel Prize-winning British scientist, has warned that humanity has never before encountered anything with intellectual or cognitive abilities superior to our own, and that we simply do not know what a world with smarter-than-human AI would look like, much less how to manage or grow it safely.

At a recent conference in Kuala Lumpur on responsible AI—where one of the hosts works for the Commonwealth Parliamentary Association—a joint declaration was issued calling for international co-operation to establish global readiness for the responsible use of AI in the common interest of humanity. The declaration urged parliaments to, among other things, set common rules and regulatory frameworks. I urge His Majesty's Government to look at that declaration. Hoping for the best, and trusting that AI companies have the best of intentions, is not an appropriate strategy. I hope that Labour will, as it said it would in its manifesto, develop a regulatory environment. I look forward to the Minister's response on that.