Superintelligent AI Debate
Lord Hunt of Kings Heath (Labour - Life peer)
Lords Chamber

To ask His Majesty’s Government what plans they have to bring forward proposals for an international moratorium on the development of superintelligent AI.
My Lords, I am delighted that so many noble Lords have decided to take part in this debate. I record my thanks to ControlAI for the support it is giving me.
Only two days ago, my noble friend the Minister’s department announced an initiative to bring UK AI experts into Whitehall to help improve everyday public services. Backed by a $1 million investment from Meta, a new cohort of AI fellows will spend the next year developing open-source tools that tackle some of the biggest challenges facing public services. I congratulate the Government on this.
I stress, particularly to my noble friend, that I am no Luddite when it comes to AI. It can bring unprecedented progress, boost our economy and improve public services. We are number three in the global rankings for investment in AI. I understand why the Government do not want to seem to be overregulating this sector when it is so important that we develop innovation and investment in the UK, but we cannot ignore the huge risks that superintelligent AI—or ASI, as I will call it—may bring. I am using this debate to urge the Government to consider building safeguards into ASI development to ensure that it proceeds only in a safe and controllable manner, and to seek international agreement on it.
No one should be in any doubt about the risks. I was struck by the call this week from the Anthropic chief, Dario Amodei, one of the most powerful entrepreneurs in the AI industry globally. He warned about the need for humanity to wake up to the dangers, saying:
“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it”.
He outlined the risks that could arise with the advent of what he calls “powerful AI”: systems that would be
“much more capable than any Nobel Prize winner, statesman, or technologist”.
Among the risks, he pointed out, is the potential for individuals to develop biological weapons capable of killing millions or, in the worst case, even destroying all life on earth.
Dario Amodei is not alone. I refer noble Lords to the report of our own UK AI Security Institute in December last year. It said that
“AI systems also have the potential to pose novel risks that emerge from models themselves behaving in unintended or unforeseen ways. In a worst-case scenario, this unintended behaviour could lead to catastrophic, irreversible loss of control over advanced AI systems”.
Clearly, it is in the military and defence domains that a particular concern arises, where the development of potent autonomous weapons could significantly increase the destructive power of warfare.
One would have hoped that AI companies would proceed with a certain degree of caution—but far from it. Caution has been thrown to the wind. They have made racing to develop superintelligent AI their explicit goal, with each company feeling compelled to move faster precisely because their competitors are doing the same. So I call on the Government to think through the need not just for a moratorium on development but for some international agreement. These are not exactly fertile times to propose international agreements, but the fact is that countries are still agreeing treaties and the case is so strong that we must start discussing this with our partners.
Look at defence as one issue: clearly, there is a strong motivation for the major military powers to use AI to gain decisive military advantage. But, as far as I can understand, there are huge risks for countries in doing so. They could lose control of their critical infrastructure. There is a real risk of losing control of military systems in which AI technology is increasingly embedded. No nation—not even President Trump’s US, China or the UK—has an interest in that outcome. We cannot abdicate our responsibility to seek some kind of international agreement.
I would say to noble Lords who are sceptical about the chances of doing this that international agreements have been reached in equally turbulent times or worse. In the 1980s, when the Cold War threatened nuclear annihilation, nations agreed a landmark nuclear de-escalation treaty; in the 1990s, the Chemical Weapons Convention was drafted and entered into force. Those agreements have been ratified by over 98% of the world’s nations. Of course, they are not perfect, but they have surely been a force for good and have demonstrably made the world safer.
We are uniquely placed to give a lead in some of the international discussions. At Oral Questions on Monday, the noble Baroness, Lady Harding, made a very important point. She pointed to the Warnock committee’s work on in vitro fertilisation, which helped set a global standard for that practice long before the scientific developments made it possible; that is exactly where we are with superintelligent AI. She said that one of the most fascinating things about that committee was that Baroness Warnock proposed the 14-day rule for experimentation on human embryos at a time when embryos could be kept alive for only two days. She thought through the moral question before, not after, the technology was available. As the noble Baroness commented, Warnock also settled societal concerns within a framework that became a national competitive advantage in human embryology and fertilisation research and care. I suggest that exactly the same advantage could come to the UK if it were prepared to take a lead.
Across the world, a coalition is emerging of AI experts, key leaders in the AI industry itself, organisations such as ControlAI, and citizens who believe we need to work very hard on this. Just last week at the World Economic Forum in Davos, Demis Hassabis, CEO of UK-based Google DeepMind, said he would advocate for a pause in AI development if other companies and countries followed suit. We should take him at his word. Momentum is building, and I very much urge the Government to take a lead. I beg to move.