Superintelligent AI

Baroness Foster of Aghadrumsee Excerpts
Thursday 29th January 2026


Lords Chamber
Baroness Foster of Aghadrumsee (Non-Afl)

My Lords, I too want to thank and congratulate the noble Lord, Lord Hunt, on securing this question for short debate on such a pressing issue, and I want to thank ControlAI for its support. It is particularly encouraging that this issue has come to the Floor today, because it is the second debate on this matter to take place in your Lordships’ House within a month. I think that says a lot about the growing concern around this issue.

Serious harms from advanced AI systems have already begun to materialise. I read recently that Anthropic’s AI model was used to conduct a Chinese state-sponsored cyberattack, with 80% to 90% of tasks conducted autonomously by the AI system. As risks from advanced AI do not respect boundaries, this is a global challenge that requires co-ordinated solutions at international level. I am concerned that we are not doing enough to be risk aware, and that the Government are adopting a “wait and see” approach rather than leading on international arrangements. I hope the Minister will be able to set out a plan for Governments internationally to deal with the risks of superintelligence: that is, systems that would be capable of outsmarting experts, compromising our national security and upending international stability even more than it has been upended already.

I was heartened to note the Kuala Lumpur declaration on responsible AI, which called for international co-operation and a common regulatory framework. That came about through the Commonwealth Parliamentary Association. Sometimes we forget that the Commonwealth is a global organisation that can help to start these conversations; given our role within it, that would be a good place for us to begin.

I think that global momentum is already here. Recently, 800 prominent figures and more than 100,000 members of civil society came together to sign a statement calling for a prohibition on superintelligence until there is scientific and public consensus. I hear what noble Lords have said today about the difficulties around that, but even the CEOs of leading AI companies have an appetite for this. The CEO of Google DeepMind, based here in the UK, said last week at Davos that he would support a halt in AI development if every other country and company agreed to do so.

Geoffrey Hinton said last week on “Newsnight” that there was a need for international regulation to stop AI being abused. He, like the noble Lord, Lord Hunt, pointed to the Geneva convention on the use of chemical weapons as a template for international action. Despite the fact that we are living through difficult geopolitical times, it is important that that does not stop us from starting the process of looking at these issues.

The UK can lead diplomatically in recognising a moratorium, with verifiable commitments from all major AI-developing nations. We have the convening power through the AI Safety Summit legacy, which was kicked off at Bletchley Park, and we have championed the world’s first network of AI security institutes. We cannot afford to be caught scrambling with retroactive fixes after disaster strikes. We have seen that pattern before, most prominently now with social media, where we waited until the damage had already occurred. The UK can lead on establishing international agreements for safety, or we can wait.

I urge the Minister to formally acknowledge extinction risk from superintelligent AI as a national security priority and to lead on international efforts to prohibit superintelligence development.