Superintelligent AI Debate
Lord Strasburger (Liberal Democrat - Life peer)
Lords Chamber

My Lords, I thank the noble Lord, Lord Hunt, for initiating this vital debate. We hear many claims about the enormous benefits that artificial intelligence has to offer, and indeed many of them will prove to be true, but today we must confront the potential downside risks for the human race. In particular, we are discussing those posed by artificial superintelligence, which I will refer to as ASI, where AI becomes far superior to the best human brains. For example, ASI could be controlled by a small group of humans who could use it to concentrate economic and political power, rendering most people obsolete and politically powerless.
Another grave risk is totalitarian surveillance and control, allowing states, corporations, or even ASI itself, to lock in a highly repressive global regime for generations; for example, by exploiting live facial recognition to subjugate populations. ASI might design advanced weapons, accelerate a military arms race or trigger accidental or intentional large-scale conflict, including nuclear war. Superhuman hacking skills could allow it to seize computer networks, financial systems, power grids and communications channels, making it extremely hard for humans to ever regain control.
Advanced ASI tools could also make it easier to design lethal pathogens, lowering the skill barrier for bioterrorism or enabling a misaligned ASI to use biological threats as leverage or as an attack vector. By misaligned, I mean systems whose goals have drifted so that they no longer align with the interests of the human race. Many AI experts consider such scenarios possible, not mere science fiction. A misaligned ASI might pursue its goals in ways that sideline or even eliminate humans if it decided that we were an obstacle.
One route is a so-called intelligence explosion, where an advanced system recursively improves its own algorithms and designs better successors, increasing its capabilities so rapidly that humans cannot intervene in time. Another is the emergence of power-seeking behaviour, where an ASI learns that gaining resources, influence and protection from shutdown helps it to achieve its long-term goals and does just that.
What is the risk that one or more of these doomsday scenarios actually materialises before we can react? Several leaders of top AI companies have issued clear warnings, as has even Elon Musk, a long-standing opponent of regulation, and as have many leading AI academics. A 2022 survey of AI researchers found that a majority assigned at least a 10% chance to the risk that an ASI could cause an outcome as bad as human extinction. Reviews of expert estimates suggest a 5% to 20% probability of an existential catastrophe. These figures are not zero; they are not even near zero. They are very far from trivial. Even a 1% risk would be unthinkable in aviation or the nuclear industry. We cannot ignore the danger of a race to the bottom between competing tech companies or between states, rogue or otherwise.
A moratorium and binding international regulation of ASI are, frankly, our only hope, however hard they will be to agree. They will be even harder to enforce, but we have to do this; there is no choice. In the words of the godfather of AI, Geoff Hinton, who has now dedicated himself to warning the world about the dangers posed by his life’s work, “It’s a good time to be 76”. Let us hope that his warning and those of many others are heeded, and that catastrophe is averted.