AI Systems: Risks Debate


Baroness Ritchie of Downpatrick (Labour - Life peer)
Thursday 8th January 2026

Grand Committee
Baroness Ritchie of Downpatrick (Lab)

My Lords, I commend the noble Lord, Lord Fairfax of Cameron, on bringing forward this important debate on the impact of artificial intelligence. I have read deeply concerning reports from AI companies. For example, the chief scientist of Anthropic, the company behind the AI Claude, who has already been referred to, told the Guardian that if his company and others succeed in making AIs able to improve themselves without human assistance, it could be the moment at which humans lose control.

Undoubtedly, as other noble Lords right across the Committee have noted, AI is important and provides real benefits, particularly in the medical field. But there is a need for proper regulation and accountability mechanisms, and we need to see the legislative framework. In that regard, can my noble friend the Minister provide us with an update from the Government's perspective on the regulatory environment and on those accountability mechanisms? The noble Baroness, Lady Kidron, and others across the Room today have referred to the need for that legislative framework.

I hope that my noble friend the Minister can provide us with some detail, because there are warning signs. Even the AI companies themselves appear to agree on the scale of the risk. For example, the CEOs of OpenAI, Anthropic and Google DeepMind signed a statement, to which others have referred today, about the extinction risk posed by AI. This is sobering and raises the question of what these companies are doing to address these risks. My understanding, thanks to helpful briefings by ControlAI policy advisers, is that no technical solution is in sight, so perhaps my noble friend the Minister can provide some detail from the governmental perspective on that matter.

I realise that small steps are being taken here, but nothing that amounts to a full guarantee that superintelligence, should it be developed, will stay under control. OpenAI seems to agree, stating:

“Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work”.

I look forward to the answers from my noble friend the Minister.