To ask His Majesty’s Government what plans they have to bring forward proposals for an international moratorium on the development of superintelligent AI.
My Lords, I am delighted that so many noble Lords have decided to take part in this debate. I record my thanks to ControlAI for the support it is giving me.
Only two days ago, my noble friend the Minister’s department announced an initiative to bring UK AI experts into Whitehall to help improve everyday public services. Backed by a $1 million investment from Meta, a new cohort of AI fellows will spend the next year developing open-source tools that tackle some of the biggest challenges facing public services. I congratulate the Government on this.
I stress, particularly to my noble friend, that I am no Luddite when it comes to AI. It can bring unprecedented progress, boost our economy and improve public services. We are number three in the global rankings for investment in AI. I understand why the Government do not want to seem to be overregulating this sector when it is so important that we develop innovation and investment in the UK, but we cannot ignore the huge risks that superintelligent AI—or ASI, as I will call it—may bring. I am using this debate to urge the Government to consider building safeguards into ASI development to ensure that it proceeds only in a safe and controllable manner, and to seek international agreement on it.
No one should be in any doubt about the risks. I was struck by the call this week from the Anthropic chief, Dario Amodei, one of the most powerful entrepreneurs in the AI industry globally. He warned about the need for humanity to wake up to the dangers, saying:
“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it”.
He outlined the risks that could arise with the advent of what he calls “powerful AI”: systems that would be
“much more capable than any Nobel Prize winner, statesman, or technologist”.
Among the risks, he pointed out, is the potential for individuals to develop biological weapons capable of killing millions or, in the worst case, even destroying all life on earth.
Dario Amodei is not alone. I refer noble Lords to the report of our own UK AI Security Institute in December last year. It said that
“AI systems also have the potential to pose novel risks that emerge from models themselves behaving in unintended or unforeseen ways. In a worst-case scenario, this unintended behaviour could lead to catastrophic, irreversible loss of control over advanced AI systems”.
Clearly, it is in the military and defence domains where a particular concern arises, with the potential development of potent autonomous weapons significantly increasing the destructive potential of warfare.
One would have hoped that AI companies would proceed with a certain degree of caution—but far from it. Caution has been thrown to the wind. They have made racing to develop superintelligent AI their explicit goal, with each company feeling compelled to move faster precisely because their competitors are doing the same. So I call on the Government to think through the need not just for a moratorium on development but for some international agreement. These are not exactly fertile times to propose international agreements, but the fact is that countries are still agreeing treaties and the case is so strong that we must start discussing this with our partners.
Look at defence as one issue: clearly, there is a strong motivation for the major military powers to use AI to gain decisive military advantage. But, as far as I can understand, there are huge risks for countries in doing so. They could lose control of their critical infrastructure. There is a real issue with losing control of military systems in which AI technology is increasingly embedded. No nation—not even President Trump’s US, China or the UK—has an interest in that outcome. We cannot abdicate our responsibility to seek some kind of international agreement.
I would say to noble Lords who are sceptical about the chances of doing this that international agreements have been reached in equally turbulent times or worse. In the 1980s, when the Cold War threatened nuclear annihilation, nations agreed a landmark nuclear de-escalation treaty; in the 1990s, the Chemical Weapons Convention was drafted and entered into force. Those agreements have been ratified by over 98% of the world’s nations. Of course, they are not perfect, but they have surely been a force for good and have demonstrably made the world safer.
We are uniquely placed to give a lead in some of the international discussions. At Oral Questions on Monday, the noble Baroness, Lady Harding, made a very important point. She pointed to the Warnock committee’s work on in vitro fertilisation, which helped set a global standard for that practice long before the scientific developments made it possible, which is exactly where we are now with superintelligent AI. She said that one of the most fascinating things about that committee was that Baroness Warnock proposed the 14-day rule for experimentation on human embryos when, at the time, embryos could be kept alive only for two days. She thought through the moral question before, not after, the technology was available. As the noble Baroness commented, Warnock also settled societal concerns within a framework which became a national competitive advantage in human embryology and fertilisation research and care. I suggest that exactly the same advantage could come to the UK if it were prepared to take a lead.
Across the world, a coalition is emerging of AI experts, the AI industry itself—some of its key leaders—organisations such as ControlAI and citizens, who believe we need to work very hard on this. Just last week at the World Economic Forum in Davos, Demis Hassabis, CEO of UK-based Google DeepMind, said he would advocate for a pause in AI development if other companies and countries followed suit. We should take him at his word. A momentum is building up and I very much urge the Government to take a lead in this. I beg to move.
My Lords, I congratulate the noble Lord, Lord Hunt, on what he just said. I entirely agree with his premise that there is real danger ahead of us if we do not take care and we do not understand what we are dealing with.
This is one of those occasions when we have to maintain control. Control is the key issue, and policies that do not lead to control or enable us to keep control will lead to something that could very well be regarded as a disaster. It is not often that we face choices so stark and so difficult, and where on the one hand there is an immense benefit to be gained and on the other a catastrophe for humanity. That is the situation that we are in, so we have to take this issue very seriously.
The question that the noble Lord posed was: do we therefore go for a moratorium? That would be highly desirable, but I do not think that it will be possible in the short term. Frankly, while President Trump is in the White House, the US is not going to regulate the development of AI, nor will it help others do that; in fact, it is much more likely that it will stand in the way. I am therefore pessimistic about getting an international dialogue going on the basis of a moratorium, and I think, as the noble Lord just said, that we have to act on our own recognisance.
What should we seek to do? We have in the UK some of the institutions that we need to be able to take a lead. The UK is ahead of the game in some respects, not just in that we are number three in our investment but because we have made institutional moves which are of great advantage. We already have the AI Security Institute and the Alan Turing Institute, which should not be forgotten, because it too is able and well placed to play a role. I would like them to take a lead in the consultation—which I hope the Government are going to put in place and get on with—but also take an institutional lead in starting the dialogue in this country. As bodies separate from government, they are in some respects well placed to produce work that will be taken at face value as valuable and independent, and that will carry more weight in subsequent work.
However, a lot of that will be directed at the research community; not all of it, but much of it will concern them. The research community, while they should not be expected to reveal to the world where they have got to, should be in dialogue with government. They need to be required to tell government where they are and what they are doing on the research front.
There is also the user market, which is different; that is, companies which are employing AI for some purpose. It is more likely to be a purpose which is much more narrowly defined, for the improvement of a product or to engage and improve their own research. That is a different kettle of fish, but it is not one that should simply be allowed to go on, given the nature of AI, the problems to which it can give rise and the unexpected things that can happen using an AI programme. There has been an instance where an AI programme has escaped autonomously on to the internet. There are real risks. Therefore, corporate governance should be brought into play, and it should happen soon. Companies should be obliged to report both what they are doing and its uses, and it should be subject to the normal processes of audit.
Companies in this country have not on the whole fared all that brilliantly when it comes to their grasp of, and their willingness to understand and palliate, the dangers around cyber security. Take that as an example: by it, I mean that we need to be tough about getting on with ensuring that companies take proper responsibility.
We need to anticipate our difficulties—
My Lords, I remind Members that the advisory time is four minutes. The last debate ran short of time. We need to make sure that the Minister has ample time to respond. I hope everyone will respect that.
My message is that we should organise before we have to engage in expensive recovery.
My Lords, I thank the noble Lord, Lord Hunt, for initiating this vital debate. We hear many claims about the enormous benefits that artificial intelligence has to offer, and indeed many of them will prove to be true, but today we must confront the potential downside risks for the human race. In particular, we are discussing those posed by artificial superintelligence, which I will refer to as ASI, where AI becomes far superior to the best human brains. For example, ASI could be controlled by a small group of humans who could use it to concentrate economic and political power, rendering most people obsolete and politically powerless.
Another grave risk is totalitarian surveillance and control, allowing states, corporations, or even ASI itself, to lock in a highly repressive global regime for generations; for example, by exploiting live facial recognition to subjugate populations. ASI might design advanced weapons, accelerate a military arms race or trigger accidental or intentional large-scale conflict, including nuclear war. Superhuman hacking skills could allow it to seize computer networks, financial systems, power grids and communications channels, making it extremely hard for humans to ever regain control.
Advanced ASI tools could also make it easier to design lethal pathogens, lowering the skill barrier for bioterrorism or enabling a misaligned ASI to use biological threats as leverage or as an attack vector. By misaligned, I mean systems whose goals have been changed so that they no longer align with the interests of the human race. Many AI experts consider such scenarios possible, not mere science fiction. A misaligned ASI might pursue its goals in ways that sideline or even eliminate humans if it decided that we were an obstacle.
One route is a so-called intelligence explosion, where an advanced system recursively improves its own algorithms and designs better successors, increasing its capabilities so rapidly that humans cannot intervene in time. Another is the emergence of power-seeking behaviour, where an ASI learns that gaining resources, influence and protection from shutdown helps it to achieve its long-term goals and does just that.
What is the risk that one or more of these doomsday scenarios actually materialises before we can react? Several leaders of top AI companies have issued clear warnings, as has even Elon Musk, a long-standing opponent of regulation, as well as many leading AI academics. A 2022 survey of AI researchers found that a majority assigned at least a 10% chance to the risk that an ASI could cause an outcome as bad as human extinction. Reviews of expert estimates suggest a 5% to 20% probability of an existential catastrophe. These figures are not zero; they are not even near zero. They are very far from trivial. Even a 1% risk would be unthinkable in aviation or the nuclear industry. We cannot ignore the danger of a race to the bottom between competing tech companies or between states, rogue or otherwise.
A moratorium and binding international regulation of ASI is, frankly, our only hope, however hard it will be to agree. It will be even harder to enforce, but we have to do it; there is no choice. In the words of the godfather of AI, Geoff Hinton, who has now dedicated himself to warning the world about the dangers posed by his life’s work, “It’s a good time to be 76”. Let us hope that his warning and those of many others are heeded, and that catastrophe is averted.
Lord Tarassenko (CB)
My Lords, I congratulate the noble Lord, Lord Hunt, on securing this debate, but it is going to take a superhuman effort to give an intelligent speech on this topic in four minutes. Paradoxically, Google DeepMind’s paper on artificial general intelligence, AGI, has the best definition of superintelligent AI or artificial superintelligence, ASI. It stratifies AI according to five levels related to human cognitive abilities. Level 1, emerging AI, corresponds to the intelligence of an unskilled human being. Level 5, superhuman AI or ASI, outperforms all human beings.
The taxonomy, importantly, distinguishes between narrow AI, for a specific application, and general AI. It makes the evidence-based claim that superhuman narrow AI—in other words, narrow ASI—has already been achieved by AlphaFold, which used machine learning to solve the 50-year-old protein-folding problem. Crucially, general AI is still only at level 1, emerging AI. How we move from level 1 to level 5, ASI, is a matter of debate within the field of AI research. Many argue that this requires new capabilities to be developed for today’s frontier AI models—for example, increasing levels of autonomy. Other experts, such as Geoff Hinton, the Nobel Prize winner who has already been mentioned, believe that we are much closer to the cliff edge of ASI.
A minority, such as Yann LeCun, argue that language is only a small component of intelligence and that the real world is complex and messy. Therefore, in his view, superintelligence is a long way off and will not be built on LLMs. There is a wide variety of views among AI experts about the imminence of ASI, with the CEOs of Anthropic and Google DeepMind, Dario Amodei and Demis Hassabis, somewhere between Yann LeCun and Geoff Hinton. My own view, after talking with colleagues in the AI Security Institute and the Alan Turing Institute, is that a moratorium would be unenforceable.
Instead, I support the proposal made this week by the noble Baroness, Lady Harding, to set up a commission to investigate the ethical aspects of general ASI. The commission could be facilitated by the Alan Turing Institute and would consult a range of experts—for example, the four mentioned in this speech.
In the meantime, we should consider the transition from level 1 to level 2, which is much closer. General AGI carries real risks. The Minister highlighted on Monday the regulation of AI for specific fields—for example, through the MHRA for healthcare. That is an approach I welcome for narrow AI or even narrow AGI. But what we need now is for the Government to initiate a consultation process for the regulation of general AGI, which is likely to be attained by the next generation of frontier models.
Safety testing of models by the AI Security Institute at present relies on voluntary agreements with AI companies. The consultation should therefore also consider the pros and cons of putting AISI, the AI Security Institute, on a statutory footing and legally compelling AI companies to open up their models for safety testing. I very much hope that the Minister will be able to tell us when DSIT is likely to announce a consultation on regulating general AGI.
The Lord Bishop of Hereford
My Lords, it is appropriate that this debate happens the day after the Church celebrated the life and work of the great divine Thomas Aquinas, one of the founding intellectual fathers of western thought, because this debate cuts to the very heart of how we understand ourselves.
Our debate is about the regulation of superintelligence. We know that intelligence is simply
“the ability to learn, understand and think in a logical way about things; the ability to do this well”.
Superintelligence is, presumably, the ability to do this much better than we can. If this were all we were talking about, exercised by a machine in the service of the common good, there would be little to fear. I imagine many noble Lords will have referred to ChatGPT or other agents—for research purposes only—in their contributions to your Lordships’ House. The results of AI in medical diagnostics, drug discovery, robotics and even self-driving cars promise many benefits to us all. These manifestations of machine intelligence are a welcome technological development—although there is another debate to be had on their potentially catastrophic implications for employment, a view held by many AI company CEOs and reported in the Financial Times this morning.
However, what many in the sector fear is not so much focused tools but an intelligence that can effectively think for itself, devise goals and strategies and have independent agency, analogous to how we human beings make choices. Thomas Aquinas was prescient when he said:
“The greatest minds are capable of the greatest vices as well as of the greatest virtues”.
When we make decisions in your Lordships’ House, intelligence is but one factor. Of greater value is wisdom, as we were reminded by the noble Lord, Lord Shinkwin, in his response to my right reverend friend the Bishop of Coventry’s maiden speech on Tuesday. Noble Lords will have heard a definition of the difference: intelligence recognises that tomatoes are fruit; wisdom does not put them in a fruit salad.
Beyond that, I argue that our decisions are frequently motivated by love, which Aquinas defines as
“to will the good of another”.
These things will come together in our deliberations in your Lordships’ House on assisted dying on Friday. For some of us, love leads in the direction of permitting a choice to end incurable suffering; others are convinced that love demands the retention of the law as it stands to prevent coercion of the vulnerable, while in no sense lacking compassion in holding that view. Love drives us to different conclusions. We come to these conclusions, compromise and maybe even change our minds as we reflect together.
It is hard to see how this can be captured in an algorithm. Actions flowing from intelligence alone can be very bad ones indeed. Many at the forefront of developing AI recognise this, and some actors are seeking to incorporate virtue into machine decisions. Dario Amodei, CEO of Anthropic, writes optimistically of “machines of loving grace”, for example. Others most certainly are not. Early experimental examples of super AI have prioritised their own survival, even to the extent of threatening to blackmail their programmers when it was proposed to switch them off. Your Lordships demonstrate in this House a combination of intelligence, wisdom, love and deliberation in community that is at the heart of our humanity and mutual relationships. Until such time as these virtues can be woven into machines, with protections to shut them down safely, an international moratorium is the only safe way forward, and I urge His Majesty’s Government to pursue it.
My Lords, I congratulate my noble friend Lord Hunt on securing this important debate and concealing the real purpose of it in a rather confusing title. I also want to declare that this speech was made entirely by myself and my brain, and I have not consulted any other agency, alive or automatic.
I hope that, when the Minister responds, she will confirm that the Government have no plan to suppress the development of ASI. Inquiry and discovery are deeply ingrained in the human psyche and the AI revolution we are living through should certainly not be suppressed. As we know, however, AI is already disrupting traditional media ecosystems and current regulatory arrangements are struggling. How are we going to regulate AI? That is the key question.
Ofcom is currently the regulator for online activity. As the Minister will be aware from recent questions and debates in this House, there are now real doubts about whether it can deliver on its current obligations, let alone take on ASI once it is in full flow. There are three issues in play here. Many of us feel that Ofcom has yet to meet the high expectations for change that were legislated for in the world-leading Online Safety Act. This is partly a structural issue, because Ofcom has to develop and then operate through codes of conduct, which do not have the authority of primary legislation, so take too long to develop and are often subject to legal challenge, or the fear of it. Ofcom already has enough on its plate with a wide range of pressing issues to deal with. It is hard to see how it can develop the bandwidth to scale up to the problems that ASI will bring. The new regulatory structure will have to deal with transnational companies and there seems to be little chance of seeing an international agreement on the approach to be taken, let alone having a body with powers to enforce decisions. So, what can be done?
I am grateful to the Centre for Regulation of the Creative Economy at Glasgow University for recent discussions on this and related topics. I refer the Minister in particular to its recent publication, which touches on how AI might fit into the UK’s current and future regulatory picture. Ofcom, for example, has established, with the FCA, the CMA and the ICO, the Digital Regulation Cooperation Forum. Some of that work addresses questions posed by AI, but the DRCF has no statutory backing, no requirement on the partners to work together and no sharing of powers when action is required. Its role seems to be more that of a think tank. It undoubtedly does some good in sharing best practice, capacity building and the pursuit of international networking, but we will need much more than “adding value” to establish modes of regulation as AGI or ASI develops.
When the Minister comes to respond to the debate, I hope she can say a little more about how the Government intend to regulate in this area, building on the AI Security Institute and supporting the pro-growth agenda.
My Lords, I also want to thank and congratulate the noble Lord, Lord Hunt, on securing this question for short debate on such a pressing issue. I also want to thank ControlAI for its support. It is particularly encouraging that this issue has come to the Floor today, because it is the second debate on this matter that has taken place in your Lordships’ House within a month. I think that says a lot about the concern that is growing around this issue.
Serious harms from advanced AI systems have already begun to materialise. I read recently that Anthropic’s AI model was used to conduct a Chinese state-sponsored cyber attack, with 80% to 90% of the tasks conducted autonomously by the AI system. As risks from advanced AI do not respect borders, this is a global challenge that requires co-ordinated solutions at international level. I am concerned that we are not doing enough to be risk aware, and that the Government are adopting a “wait and see” approach rather than leading on international arrangements. I hope the Minister will be able to set out a plan for Governments internationally to deal with the risks of superintelligence: that is, systems that would be capable of outsmarting experts, compromising our national security and upending international stability even more than it has been upended already.
I was heartened to note the Kuala Lumpur declaration on responsible AI, which called for international co-operation and a common regulatory framework. That happened through the Commonwealth Parliamentary Association. Sometimes we forget about the Commonwealth as a global organisation that can help to start these conversations. That would be a good place, particularly given our role in the Commonwealth, for us to start the conversation.
I think that global momentum is already here. Recently, 800 prominent figures and more than 100,000 members of civil society came together to sign a statement calling for a prohibition on superintelligence until there is scientific and public consensus. I hear what noble Lords have said today about the difficulties around that, but even the CEOs of leading AI companies have an appetite for this. The CEO of Google DeepMind, based here in the UK, said last week at Davos that he would support a halt in AI development if every other country and company agreed to do so.
Geoffrey Hinton said last week on “Newsnight” that there was a need for international regulation to stop AI being abused. He, like the noble Lord, Lord Hunt, pointed to the Geneva convention on the use of chemical weapons as a template for international action. Despite the fact that we are living through difficult geopolitical times, it is important that that does not stop us from starting the process of looking at these issues.
The UK can lead diplomatically in recognising a moratorium, with verifiable commitments from all major AI-developing nations. We have the convening power through the AI Safety Summit legacy, which was kicked off at Bletchley Park, and we have championed the world’s first network of AI security institutes. We cannot afford to be caught scrambling with retroactive fixes after disaster strikes. We have seen that pattern before, most prominently now with social media, where we have waited until damage has already occurred. The UK can lead on establishing international agreements for safety, or we can wait.
I urge the Minister to formally acknowledge extinction risk from superintelligent AI as a national security priority and to lead on international efforts to prohibit superintelligence development.
My Lords, I thank the noble Lord, Lord Hunt of Kings Heath, for this debate, and I wish we had more time for it. It helps sometimes if someone takes a slightly different view, so I ask noble Lords to forgive me if I deliberately do so, although I line up with what the noble Lord said about moratoriums.
In 1637 René Descartes said, “I think, therefore I am”. That is what we fear: that ASI will be able to think by itself, and therefore it will be. We fear that it will develop lethal weapons that we cannot control, let alone understand their development. I agree with that. So do all the tech company CEOs who discussed this at length at the Davos meeting and subsequently on different podcasts. So did Yuval Noah Harari, the historian and philosopher, who has identified the issues that will confront us if AGI leads to ASI.
AI is the next step to AGI, and, as the noble Lord, Lord Tarassenko, said, AGI is the next step to ASI. We are probably closer to level 2 of AGI, but the timelines are long. We are uncertain when we will get to ASI, particularly recursive ASI. If we get to that point, that will be when we have the greatest danger. After 4 billion years of evolution, we humans, the only species that can think, have got to the place that we are through lying and deviousness. We are now developing machines that can do exactly the same, and therefore we fear them. But it cannot be beyond the ingenuity of humans to try to control these developments.
I come from the position of saying that moratoriums will not work. But we can work in co-operation with other nations that have already started regulating, such as South Korea and Australia, as well as work with our AI Security Institute in the United Kingdom, to establish our own boundaries through regulations that will allow innovations to continue.
We must remember that there are benefits to developing this technology. One example that was given is the folding of proteins. Every protein in the body has been identified; we now need to learn very quickly how those proteins cause or prevent disease. We will not be able to do this unless we allow these technologies to develop more quickly than anybody else. The same applies to new energy technologies and climate change management, so there are benefits to it. The conundrum is how to allow technology to develop these benefits while creating regulations that will not allow it to develop in areas that are dangerous to humanity.
The way forward on how we govern technology will be in how we identify its consciousness and how we work with it. Therefore, as we learn more, measured regulation and co-operation with other countries is probably the way forward.
My Lords, I declare an interest as a consultant on AI regulation and policy for DLA Piper. I too thank the noble Lord, Lord Hunt of Kings Heath, for provoking an extremely profound and thoughtful debate on an international moratorium on superintelligent AI development. I was very interested that he cited the Warnock approach as one to be emulated in this field. That was certainly one that our House of Lords Artificial Intelligence Committee recommended eight years ago, but sadly it has not been followed.
For nine years, I have co-chaired the All-Party Parliamentary Group on Artificial Intelligence. I remain optimistic about AI’s potential, but I am increasingly alarmed about our trajectory, particularly in the field of defence. Superintelligence—AI surpassing human intelligence across all domains—is the explicit goal of major AI companies. Many experts predict that we could reach this within five to 10 years. In September 2025, Anthropic detected the first large-scale cyber espionage campaign using agentic AI. Yoshua Bengio, one of the godfathers of AI development, warns that these systems show “signs of self-preservation”, choosing their own survival over human safety.
Currently, no method exists to contain or control smarter-than-human AI systems. This is the “control problem” that Professor Stuart Russell describes: how do we maintain power over entities more powerful than us? That is why I joined the Global Call for AI Red Lines, which was launched at the UN General Assembly by over 300 prominent figures, including Nobel laureates and former Heads of State. They call for international red lines to prevent unacceptable AI risks, including prohibiting superintelligence development, until there is broad scientific consensus on how it can be done safely and with strong public buy-in.
ControlAI’s UK campaign, described by the noble Lord, Lord Hunt, is backed by more than 100 cross-party parliamentarians in the UK. Its proposals include banning deliberate superintelligence development, prohibiting dangerous capabilities, requiring safety demonstrations before deployment, and establishing licensing for advanced AI.
The Montreal Protocol on Substances that Deplete the Ozone Layer offers a precedent. In 1987, every country signed it within two years—during the Cold War. When threats are universal, rapid international agreements are possible. Superintelligence presents such a threat. Yet the current situation is discouraging. The US has rejected moratoria. Sixty countries signed the Paris AI Summit declaration in February 2025, but the UK did not. Even Anthropic’s CEO, who has been widely quoted today, admits that we understand only 3% of how current systems work. Today, AI systems are grown through processes their creators cannot interpret.
The Government’s response has been inadequate. Ministers focus on regulating the use of AI tools rather than their development. But this approach fails fundamentally when facing superintelligence. Once a system surpasses human intelligence across all domains, we cannot simply regulate how it is used. We will have lost the ability to control it at all. You cannot regulate the use of something more intelligent than the regulator just sector by sector.
Our AI Security Institute, as the noble Lord, Lord Tarassenko, pointed out, has advisory powers only. We were promised binding regulation in July 2024, but we have seen neither consultation nor draft legislation. Growth and safety are not mutually exclusive. Without public confidence that systems are under human control, adoption will stall.
It is clear what the Government should do. The question is whether we will act with the seriousness this moment demands or whether competitive pressures will override the fundamental imperative of keeping humanity in control. I look forward to the Minister’s response.
I too thank the noble Lord, Lord Hunt, for bringing this serious issue in front of us today. Like others, I wish we had more time, but I think this shows the Lords at its best. We have had technology know-how, regulatory expertise, philosophy, religious wisdom—we have even learned that it is a good time to be old. So that is definitely something to look forward to.
Of course, we all know AI is a massive force for good. I have seen it first-hand in the health space. But we also know the risks of superintelligent AI. Examples have been mentioned where AI has taken to blackmail in the case of self-preservation. So I think we all understand the dangers of non-alignment and AI pursuing different objectives from our own. We are all aware that some very serious and knowledgeable people in this space talk about risks of 10% or so, which we would all agree is pretty significant.
For me, though, the real question is: how do we go about regulating? As we know, it works only if everyone in the world follows. Nuclear weapons, for example, are pretty hard to build: you need massive infrastructure, you need to enrich uranium, you need state-like resources, and it can all be observed worldwide. But despite all of that, we have still had proliferation. We have still had the likes of North Korea getting nuclear weapons. Building superintelligent AI requires much more limited resources; it is much easier to hide, and so much easier for rogue states such as North Korea—or, dare I say it, an al-Qaeda—to develop it without detection, without us being able to do anything about it. If we really believe in the power of superintelligence, we have to accept that it is probably a winner-takes-all world, and whoever gets there first is likely to be the winner who takes all.
For me, while I worry about some of the dangers of us in the west developing it, I have to say that I worry even more about North Korea or al-Qaeda getting there first if we go ahead and tie one of our hands, or both our hands, behind our backs through a one-sided moratorium. There are things that we should be and are working on, and the AI Security Institute is a very good example of that. Heavy investment in monitoring the programming, opening up the models, checking that they really are aligned—there is probably no limit to the resources we should be putting into that, and into investigating whether we can put kill switches into these things. If we do find a way to do it, then let us offer that to the world, because it has to be in our interests that everyone who is developing this has access to a kill switch. That definitely makes sense but, for me, a one-sided moratorium which ties our hands behind our backs while the likes of North Korea and the al-Qaedas of the world crack on: no, I am afraid that that worries me even more.
The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology (Baroness Lloyd of Effra) (Lab)
My Lords, I thank my noble friend Lord Hunt for initiating this important debate on an important topic, and all noble Lords from around the House for their contributions today. This Government believe that advanced AI has transformative potential for the UK: from scientific innovation and public service reform to economic growth, as many noble Lords have set out today. However, as we realise these benefits, we need to make sure that AI remains secure and controllable. New technologies bring with them novel risks, and we have heard today from many noble Lords the directions in which technology might take us.
As has been mentioned, the UK is committed to a context-based regulatory approach whereby most AI systems are regulated at the point of use and by our existing regulators, who are best placed to understand the risks and the context of AI deployment in their sectors. Regulators are already acting. The ICO has released guidance on AI and data protection, and last year Ofcom published its strategic approach to AI, which sets out how it is addressing AI-related risks. My noble friend asked about Ofcom’s expertise and resources. Ofcom has recruited expert online safety teams from various sectors, including regulation, tech platforms, law enforcement, civil society and academia, and is being resourced to step up and regulate in this area. The FCA has also announced a review into how advances in AI could transform financial services.
As my noble friend also mentioned, the Government are working proactively with regulators, through both the Digital Regulation Cooperation Forum and the Regulatory Innovation Office, to ensure that regulators have the capabilities to regulate what we see today and anticipate regulations that may be needed in the future, both in respect of AI and of other scientific and technological developments that are coming towards us. We heard many suggestions today on how we might regulate further. The Government are prepared to step up to the challenges of AI and take further action. We will keep your Lordships’ House updated on any proposals in this area. However, I am unable to speculate on any further legislation ahead of parliamentary announcements.
We have heard much testimony today to the abilities and expertise of the AI Security Institute. Equally, as other noble Lords have mentioned—the noble Lord, Lord Tarassenko, brought precision to the definitions here—we cannot be sure how AI will develop and impact society over the next five, 10 or 20 years. We need to navigate this future with evidence-based foresight to inform action, with technical solutions and global co-ordination.
We should be very proud of our world-leading AI Security Institute: it is the centre of UK expertise, advancing our scientific understanding of the capabilities and the associated risks. Close collaboration with AI labs has ensured that the institute has been able to test more than 30 models to understand their potentially harmful capabilities, and we think this is the best way to proceed. It is having a real-world impact. The institute’s testing is making models safer, with findings being used by industry to strengthen AI model safeguards. It is carrying out foundational research to discover methods for building AI systems that are beneficial, reliable and aligned with human values.
One of the AISI’s priorities is tracking the development of AI capabilities that would contribute to AI’s ability to evade human control, which has been raised many times in the debate today. It supports research in this field through the Alignment Project, a funding consortium distributing £15 million to accelerate research projects. To ensure that the Government act on these insights, the institute works with the Home Office, the NCSC and other national security organisations to share its evidence on the most serious risks posed by AI.
The noble Baronesses, Lady Foster and Lady Neville-Jones, spoke about the risks associated with AI cyber capabilities. We are closely monitoring those, in terms of both the risks posed and the solutions for combating the cyber risks to which AI can contribute. We have developed the AI Cyber Security Code of Practice to help secure AI systems and the organisations that develop and deploy them. That is another example of the UK setting standards that can be followed by others—another point made by noble Lords today, when they spoke about how the UK can contribute to the safe development of AI. The institute will continue to evaluate and scan the horizon to ensure we focus our research on the most critical risks.
As has been pointed out, AI is being developed in many nations and will also have impacts across borders and across societies, so international collaboration is essential. The Deputy Prime Minister set out to the UN Security Council last autumn the United Kingdom’s commitment to using AI responsibly, safely, legally and ethically. We continue to work with international partners to achieve this.
The AI Security Institute is world leading, with global impact. Since December it has assumed the role of co-ordinator for the International Network for Advanced AI Measurement, Evaluation and Science. That brings together 10 countries, including Commonwealth countries such as Canada, Australia and Kenya, and the US, the EU and Singapore, to shape and advance the science of AI evaluations globally. That is important because boosting public trust in the technology is vital to AI adoption. It helps to unlock groundbreaking innovations, deliver new jobs and forge new opportunities for business innovators to scale up and succeed. The UK has shaped the passage of key international AI initiatives such as the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI—both at the UN—and the framework convention on AI at the Council of Europe. The convention is the world’s first international agreement on AI and considers it with regard to the Council’s remit of human rights, democracy and the rule of law, seeking to establish a clear international baseline that grounds AI in our shared values.
I shall close by talking about the importance not only of the UK taking the risks of AI seriously, but of our conviction that it will be a driver of national renewal, and of our ambition to be a global leader in the development and deployment of AI. This is the way that will keep us safest of all. Our resilience and strategic advantage are based on our being competitive in an AI-enabled world. It matters who influences and builds the models, data and AI infrastructure.
That is why we are supporting a full plan, including our sovereign AI unit, which is investing over £500 million to help seed and expand innovative UK start-ups in the AI sector. It is why we are progressing on infrastructure, including the announcement of five AI growth zones across the UK, accelerating the delivery of data centres. It is why we are expanding our national compute capacity and why we are equipping all people—students and workers—with digital and AI skills. We want to benefit from AI’s transformative power, so we need to adopt it as well as manage its risks. That is why we have also committed to looking at the impact of AI on our workforce through the AI and future of work unit. We are working domestically and collaborating internationally to facilitate responsible innovation, ensuring that the UK stands to benefit from all that AI has to offer.