Superintelligent AI

Lord Hunt of Kings Heath Excerpts
Thursday 29th January 2026

Lords Chamber
Asked by
Lord Hunt of Kings Heath

To ask His Majesty’s Government what plans they have to bring forward proposals for an international moratorium on the development of superintelligent AI.

Lord Hunt of Kings Heath (Lab)

My Lords, I am delighted that so many noble Lords have decided to take part in this debate. I record my thanks to ControlAI for the support it is giving me.

Only two days ago, my noble friend the Minister’s department announced an initiative to bring UK AI experts into Whitehall to help improve everyday public services. Backed by a $1 million investment from Meta, a new cohort of AI fellows will spend the next year developing open-source tools that tackle some of the biggest challenges facing public services. I congratulate the Government on this.

I stress, particularly to my noble friend, that I am no Luddite when it comes to AI. It can bring unprecedented progress, boost our economy and improve public services. We are number three in the global rankings for investment in AI. I understand why the Government do not want to seem to be overregulating this sector when it is so important that we develop innovation and investment in the UK, but we cannot ignore the huge risks that superintelligent AI—or ASI, as I will call it—may bring. I am using this debate to urge the Government to consider building safeguards into ASI development to ensure that it proceeds only in a safe and controllable manner, and to seek international agreement on it.

No one should be in any doubt about the risks. I was struck by the call this week from the Anthropic chief, Dario Amodei, one of the most powerful entrepreneurs in the AI industry globally. He warned about the need for humanity to wake up to the dangers, saying:

“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it”.

He outlined the risks that could arise with the advent of what he calls “powerful AI”: systems that would be

“much more capable than any Nobel Prize winner, statesman, or technologist”.

Among the risks, he pointed out, is the potential for individuals to develop biological weapons capable of killing millions or, in the worst case, even destroying all life on earth.

Dario Amodei is not alone. I refer noble Lords to the report of our own UK AI Security Institute in December last year. It said that

“AI systems also have the potential to pose novel risks that emerge from models themselves behaving in unintended or unforeseen ways. In a worst-case scenario, this unintended behaviour could lead to catastrophic, irreversible loss of control over advanced AI systems”.

Clearly, it is in the military and defence domains where a particular concern arises, with the potential development of potent autonomous weapons significantly increasing the destructive potential of warfare.

One would have hoped that AI companies would proceed with a certain degree of caution—but far from it. Caution has been thrown to the wind. They have made racing to develop superintelligent AI their explicit goal, with each company feeling compelled to move faster precisely because their competitors are doing the same. So I call on the Government to think through the need not just for a moratorium on development but for some international agreement. These are not exactly fertile times to propose international agreements, but the fact is that countries are still agreeing treaties and the case is so strong that we must start discussing this with our partners.

Look at defence as one issue: clearly, there is a major motivation for the major military powers to use AI to gain decisive military advantage. But, as far as I can understand, there are huge risks for countries in doing so. They could lose control of their critical infrastructure. There is a real issue with losing control of military systems where AI technology is increasingly embedded. No nation—not even President Trump’s US, China or the UK—has an interest in that outcome. We cannot abdicate our responsibility to seek some kind of international agreement.

I would say to noble Lords who are sceptical about the chances of doing this that international agreements have been reached in equally turbulent times or worse. In the 1980s, when the Cold War threatened nuclear annihilation, nations agreed to a landmark nuclear de-escalation treaty, and in the 1990s, the Chemical Weapons Convention was drafted and entered into force, and those agreements have been ratified by over 98% of the world’s nations. Of course, they are not perfect, but they have surely been a force for good and have demonstrably made the world safer.

We are uniquely placed to give a lead in some of the international discussions. At Oral Questions on Monday, the noble Baroness, Lady Harding, made a very important point. She pointed to the Warnock committee’s work on in vitro fertilisation, which helped set a global standard for that practice long before scientific developments made it possible; that is exactly where we are now with superintelligent AI. She said that one of the most fascinating things about that committee was that Baroness Warnock proposed the 14-day rule for experimentation on human embryos at a time when embryos could be kept alive for only two days. She thought through the moral question before, not after, the technology was available. As the noble Baroness commented, Warnock also settled societal concerns within a framework that became a national competitive advantage in human embryology and fertilisation research and care. I suggest that exactly the same advantage could come to the UK if it were prepared to take a lead.

Across the world, a coalition is emerging of AI experts, the AI industry itself—some of its key leaders—organisations such as ControlAI and citizens, who believe we need to work very hard on this. Just last week at the World Economic Forum in Davos, Demis Hassabis, CEO of UK-based Google DeepMind, said he would advocate for a pause in AI development if other companies and countries followed suit. We should take him at his word. A momentum is building up and I very much urge the Government to take a lead in this. I beg to move.

Superintelligent AI

Lord Hunt of Kings Heath Excerpts
Monday 26th January 2026

Lords Chamber
Asked by
Lord Hunt of Kings Heath

To ask His Majesty’s Government what plans they have to regulate the development of superintelligent AI.

The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology (Baroness Lloyd of Effra) (Lab)

AI superintelligence is the subject of ongoing debate as to both its definition and whether it is achievable. Advanced, transformative AI presents significant opportunities, such as improvements in healthcare and climate action, as well as risks. As frontier AI evolves, the AI Security Institute helps the Government assess and identify potential emerging risks, which would include pathways towards any kind of superintelligence. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. Regulation of AI by existing expert regulators will be informed by the AISI’s findings.

Lord Hunt of Kings Heath (Lab)

My Lords, I am grateful to my noble friend for that considered Answer. Clearly, AI has great potential; the UK is third in the global league of AI investment. I understand the Government’s response, which is essentially a nuanced approach to encourage both proper regulation and investment.

However, superintelligent AI undoubtedly does present risks. The Minister will know that the director-general of MI5 has warned of the

“potential future risks from non-human, autonomous AI systems which may evade human oversight and control”.

Meanwhile, the UK’s AI Security Institute has warned:

“In a worst-case scenario, this … could lead to catastrophic, irreversible loss of control over advanced AI systems”.

The problem is that the companies developing superintelligence do not know what the outcome will be, and there are currently no barriers to that development. I urge the Government to take this really seriously and to start talking to other countries about putting some safety controls in place.

Baroness Lloyd of Effra (Lab)

My noble friend is right to mention the research of the AI Security Institute, whose advice the Government listen to and take very seriously. AI is a general-purpose technology with a wide range of applications, which is why the UK believes that the vast majority of AI should be regulated at the point of use. My noble friend is also right that collaboration with other countries is critical; the UK’s approach is to engage with many other countries and, through the AI Security Institute, with developers, so that it has good insight into what is happening in development today.

Technology Adoption Review

Lord Hunt of Kings Heath Excerpts
Monday 15th December 2025

Lords Chamber
Baroness Lloyd of Effra (Lab)

The noble Lord is right that attracting high-calibre talent to this country is incredibly important. We have a number of ongoing initiatives to do that, including the Global Talent Taskforce, as well as through academia, as my noble friend the Minister with responsibility for science and technology mentioned. The digital skills jobs plan will also set out how we can support that aim and get the balance right between growing homegrown talent and attracting the talent we need from abroad, so that we have the best chance of growing our science base and the spin-outs.

Lord Hunt of Kings Heath (Lab)

My Lords, does my noble friend agree that AI literacy should be extended to the police force and the judiciary? In very recent cases, it is clear that AI provided incorrect quotes in compiling reports and writing judgments; and in the case of the West Midlands Police, a non-existent football match was cited as a reason why Maccabi fans should not be allowed into Birmingham. Do we not have to do a lot more to teach people how to use AI properly?

Baroness Lloyd of Effra (Lab)

My noble friend is absolutely right that AI has huge potential, but getting its adoption right, and building the critical skills to use it, whether in the public or private sector, is an integral part of ensuring that it drives productivity and delivers all that is expected of it.

Employee Car Ownership Schemes

Lord Hunt of Kings Heath Excerpts
Monday 8th December 2025

Lords Chamber
Lord Stockwood (Lab)

I thank my predecessor, who did an excellent job in the Office for Investment. He will understand that we are looking at many different projects that enhance the investment attractiveness of the UK and at our commitment to our climate goals, in which the gigafactories are large and proportionate players.

Lord Hunt of Kings Heath (Lab)

My Lords, on the issue of climate goals, will the Minister remind the noble Lord, Lord Forsyth, that the CBI report in February 2025 showed that, since 2023, the net-zero economy had grown by 10.1%, which compares favourably with the general level of growth? Should we not celebrate the net-zero economy and the potential it brings to this country?

Lord Stockwood (Lab)

I thank my noble friend for the reminder. I agree that the net-zero transition makes the most attractive and best use of our capabilities in the UK. I am happy to support his comment.

Data Adequacy Status: EU Data Protection Standards

Lord Hunt of Kings Heath Excerpts
Thursday 4th December 2025

Lords Chamber
Baroness Lloyd of Effra (Lab)

To take that question in two parts, we are confident about the EU’s scrutiny of our legislation. The Commission has started its review and published the report that I mentioned in July. The European Data Protection Board published a non-legally binding opinion on its draft decision on 20 October. We are confident that a member state vote will take place ahead of the 27 December deadline. The EU’s proposals to change its data protection framework have only recently been published. We will have a look at the details of those changes as and when they become clear and are confirmed.

Lord Hunt of Kings Heath (Lab)

My Lords, related to data security is superintelligent AI. Many recent reports have suggested that this is a huge threat to our global security. Are we discussing this with the EU and other international partners to try to mitigate some of the potential damage that could be caused by it?

Baroness Lloyd of Effra (Lab)

We continue to look at all potential AI threats and are immensely assisted in this by the work of the AI Security Institute, which has deepened our understanding of critical security threats posed by all sorts of frontier AI and the type that the noble Lord mentioned. We continue to talk about this to international partners.