AI Systems: Risks Debate


Lord Leong (Labour - Life peer)
Thursday 8th January 2026


Grand Committee
Lord in Waiting/Government Whip (Lord Leong) (Lab)

My Lords, I acknowledge the interest of the noble Lord, Lord Fairfax, in this area of artificial intelligence and congratulate him on securing this important, wide-ranging and timely debate. I thank all noble Lords who have contributed to it. I will use the time available to set out the Government’s position and respond to noble Lords’ questions. If I am unable to answer all of them because of time constraints, I will go through Hansard, write to noble Lords and place a copy in the Library.

Recent remarks by the director-general of MI5 highlight that advanced AI is now more than a technical matter: it has become relevant to national security, the economy and public safety. The warning was not about panic or science-fiction scenarios; it was about responsibility. As AI systems grow more capable and more autonomous, we must ensure that they operate at a scale and speed that remain within human control.

The future prosperity of this country will be shaped by science, technology and AI. The noble Lord, Lord Goldsmith, is absolutely right that we have to look at the advantages that AI brings to society and to us all. That is why we have announced a new package of reforms and investment to use AI as a driver of national renewal. But we must be, and are, clear-eyed about this. As the noble Baroness, Lady Browning, mentioned, we cannot unlock the opportunities unless AI is safe for the public and businesses, and unless the UK retains real agency over how the technology is developed and deployed.

That is exactly the approach that this Government are taking. We are acting decisively to make sure that advanced AI remains safe and controllable. I give credit to the previous Government for establishing the world-leading AI Security Institute to deepen our scientific understanding of the risks posed by frontier AI systems. We are already taking action on emerging risks, including those linked to AI chatbots. The institute works closely with AI labs to improve the safety of their systems, and has now tested more than 30 frontier models.

That work is not academic. Findings from those tests are being used to strengthen real-world safeguards, including protections against cyber risks. Through close collaboration with industry, the national security community and our international partners, the Government have built a much deeper and more practical understanding of AI risks. We are also backing the institute’s alignment project, which will distribute up to £15 million to fund research to ensure that advanced AI systems remain controllable and reliable and follow human instructions, even as they become more powerful.

Several noble Lords mentioned the potential of artificial general intelligence and artificial superintelligence, as well as the risks that they could pose. There is considerable debate around the timelines for achieving both, and some experts believe that AGI could be reached by 2030. We cannot be sure how AI will develop and impact society over the next five years, perhaps even sooner, or over the next 10 or 20. Navigating this future will require evidence-based foresight to inform action, technical solutions and global co-ordination. Without a shared scientific understanding of these systems, we risk underreacting to threats or overcorrecting against innovation.

Through close collaboration with companies, the national security community and our international partners, the Government have deepened their understanding of such risks, and AI model security has improved as a result. The Government will continue to take a long-term, science-led approach to understand and prepare for risks emerging from AI. This includes preparing for the possibility of rapid AI progress, which could have transformative impacts on society and national security.

We are not complacent. Just this month, the Technology Secretary confirmed in Parliament that the Government will look at what more can be done to manage the risks posed by AI chatbots. She also urged Ofcom to use its existing powers to ensure that any chatbots in scope of the Online Safety Act are safe for children. Some noble Lords may be aware that Ofcom already has the power to impose on companies sanctions of up to 10% of their revenue or £18 million, whichever is greater.

Several noble Lords have touched on regulation, and I will just cover it now. We are clear that AI is a general-purpose technology, with uses across every sector of the economy. That is why we believe most AI should be regulated at the point of use. Existing frameworks already matter. Data protection and equality law protect people’s rights and prevent discrimination when AI systems are used to make decisions about jobs, credit or access to services.

We also know that regulators need to be equipped for what is coming. That is why we are working with them to strengthen their capabilities and ensure they are ready to deal with the challenges that AI presents.

Security does not stop at our borders. The UK is leading internationally and driving collaboration with allies to raise standards, share scientific insight and shape responsible global norms for frontier AI. We lead discussions on AI at the G7, the OECD and the United Nations, and we are strengthening bilateral partnerships, including our ongoing collaboration with India as we prepare for the AI Impact Summit in Delhi next month. I hope this provides assurance to the noble Viscount, Lord Camrose. The AI Security Institute will continue to play a central role globally, including leading the International Network for Advanced AI Measurement, Evaluation and Science, helping to set best practice for model testing and safety worldwide.

In an AI-enabled world, it matters who owns the infrastructure, builds the models and controls the data. That increasingly shapes our lives. That is why we have established a Sovereign AI Unit, backed by around £500 million, to support UK start-ups across the AI ecosystem and ensure that Britain has a stake at every layer of the AI stack.

Several noble Lords asked about our dependence on US companies. Our sovereignty goals should indeed be resilience and strategic advantage, not total self-reliance. We have to face the fact that US companies are the main providers of today’s frontier model capabilities. Our approach is to ensure that the UK can use the best models in the world while protecting UK interests. To achieve this, we have established strategic partnerships with leading frontier model developers, such as the memoranda of understanding with Anthropic, OpenAI and Cohere, to ensure resilient access and influence the development of such capabilities.

We are also investing in advanced AI compute so that researchers can work on national priorities. We are creating AI growth zones across the country to unlock gigawatts of capacity by 2030. Through our advanced market commitment, we are helping promising UK firms to scale, win global business and deploy British chips alongside established providers.

We are pursuing AI safety with such determination because it is what unlocks opportunity. The UK should not be an AI taker. Businesses and consumers need confidence that AI systems are safe and reliable and do what they are supposed to do. That is why our road map to trusted third-party AI assurance is so important. Trust is what turns safety into growth. It is what allows innovation to flourish.

In January we published the AI Opportunities Action Plan, setting out how we will seize the benefits of this technology for the country. We will train 7.5 million workers in essential AI skills by 2030, equip 1 million students with AI and digital skills, and support talented undergraduates and postgraduates through scholarships at leading STEM universities. I hope this will be welcomed by the noble Lord, Lord Clement-Jones.

Lord Clement-Jones (LD)

My Lords, at that point, I will just interrupt the Minister before the witching hour. The Minister has reiterated the approach to focus governance on the user—that is, the sectoral approach—but is that not giving a free pass to general AI developers?

Lord Leong (Lab)

My Lords, I will quickly respond. We have to be very careful about what level we regulate at. AI is multifaceted: the stack has different layers, including the infrastructure, the data and the models. At which level are we going to regulate? We are trying to work with the community to find out what is best before we settle on a regulatory solution. AI is currently regulated at the level of its different uses.

Let me continue. We are appointing AI champions to work with industry and government to drive adoption in high-growth sectors. Our new AI for Science Strategy will help accelerate breakthroughs that matter to people’s lives.

In summary, safe and controllable AI does not hinder progress; rather, it underpins it. It is integral to our goal of leveraging AI’s transformative power and securing the UK’s role as an AI innovator, not merely a user. Safety is not about stopping progress. It is about maintaining human control over advances. The true danger is not overthinking this now; it is failing to think enough.

Baroness Kidron (CB)

I thank the Minister for such a generous and wide-ranging response. He said that we are going to regulate at the point of use, yet in this House we saw such a battle about the misuse of UK creative data that is protected by UK law. The UK Government wanted to give it away rather than protect UK law, so that is one example. Equally, the Minister mentioned the sovereign AI fund, but I hear again and again from UK AI companies that they are dominated by US companies in those discussions and that the UK companies are not really getting an advantage. I would like to hear the Minister’s response, given that we have a little time.

Lord Leong (Lab)

I thank the noble Baroness for that; I respect her interest and work in this area. It would take me at least 20 minutes to cover most of what was asked. There were points about regulation at different levels, about AI and copyright and about the Sovereign AI Unit’s funding of £500 million. We need to work at each different level and, as I said, regulation is vital. Personally, I think it will be very difficult for us to have one AI regulation Bill to cover everything, because we may miss something. We need evidence from speaking with academia, the players and so on to help us shape what regulation is required. I want to give the noble Baroness a comprehensive answer and I cannot do that here, so I will write to her accordingly.