AI Systems: Risks

Lord Clement-Jones Excerpts
Thursday 8th January 2026

Grand Committee
Lord Clement-Jones (LD)

My Lords, I too thank the noble Lord, Lord Fairfax of Cameron, for initiating this important and timely debate. As a signatory to the AI Red Lines initiative, I agree very much with his reasons for bringing this debate to the Committee. Sadly, with apologies, there is too little time to properly wind up and acknowledge other contributions in the debate today.

In September 2025, Anthropic detected the first documented large-scale cyber espionage campaign using agentic AI. AI is no longer merely generating content; it is autonomously executing code to breach the security of organisations and nations. Leading AI researchers have already warned us of fundamental control challenges. Yoshua Bengio, AI pioneer and Turing Award winner, reports that, in experiments, AI systems have chosen their own preservation over human safety. Professor Stuart Russell describes the control problem: how to maintain power over entities more powerful than us. Mustafa Suleyman, a co-founder of DeepMind, has articulated the containment problem: that AI’s inherent tendency towards autonomy makes traditional containment methods insufficient. The Institution of Engineering and Technology reports that six in 10 engineering employers already use AI, yet 46% of senior managers do not understand it and 50% of the employers that expect AI to be important lack the necessary skills. If senior leaders cannot grasp AI fundamentals, they cannot govern it effectively.

The Government’s response appears fragmented, not only as regards sovereign AI development. We inexplicably renamed the AI Safety Institute as the AI Security Institute, muddling two distinct concepts. We lag behind the EU AI Act, South Korea’s AI Basic Act, Singapore’s governance framework and even China’s regulation of public-facing generative AI. Meanwhile, voluntary approaches are failing.

Let me press the Government on three crucial issues. First, ahead of AGI or superintelligence, as I frequently argue, we need binding legislation with clear mandates, not advisory powers. ISO standards embodying key principles provide a good basis for risk management, ethical design, testing, training, monitoring and transparency, and should be applied where appropriate. We need a broader definition of safety that encompasses financial, societal, reputational and mental health risks, not just physical harm. What is the Government’s plan in this respect? Secondly, we must address the skills crisis. People need confidence in using AI and more information on how it works. We need more agile training programmes beyond current initiatives, greater AI literacy and safeguards against deskilling. Thirdly, we must consider sustainability. AI consumes enormous amounts of energy, yet it could help mitigate 5% to 10% of global greenhouse gas emissions by 2030. What is the Government’s approach on this?

As Stuart Russell has noted, when Alan Turing warned in 1951 that machines would take control, our response resembled telling an alien civilisation that we were out of the office. The question is whether we will act with the seriousness that this moment demands or allow big tech, largely US-owned, to override in its own interests the fundamental imperative of maintaining human control over these increasingly powerful systems.

--- Later in debate ---
Lord in Waiting/Government Whip (Lord Leong) (Lab)

My Lords, I acknowledge the interest of the noble Lord, Lord Fairfax, in this area of artificial intelligence and congratulate him on securing this important, wide-ranging and timely debate. I thank all noble Lords who have contributed to it. I will use the time available to set out the Government’s position and respond to noble Lords’ questions. If I am unable to answer all of them because of time constraints, I will go through Hansard, write to noble Lords and place a copy in the Library.

Recent remarks by the director-general of MI5 highlight that advanced AI is now more than a technical matter: it is relevant to national security, the economy and public safety. The warning was not about panic or science-fiction scenarios; it was about responsibility. As AI systems grow more capable and more autonomous, we must ensure that they operate at a scale and speed that remain within human control.

The future prosperity of this country will be shaped by science, technology and AI. The noble Lord, Lord Goldsmith, is absolutely right that we have to look at the advantages that AI brings to society and to us all. That is why we have announced a new package of reforms and investment to use AI as a driver of national renewal. But we must be, and are, clear-eyed about this. As the noble Baroness, Lady Browning, mentioned, we cannot unlock the opportunities unless AI is safe for the public and businesses, and unless the UK retains real agency over how the technology is developed and deployed.

That is exactly the approach that this Government are taking. We are acting decisively to make sure that advanced AI remains safe and controllable. I give credit to the previous Government for establishing the world-leading AI Security Institute to deepen our scientific understanding of the risks posed by frontier AI systems. We are already taking action on emerging risks, including those linked to AI chatbots. The institute works closely with AI labs to improve the safety of their systems, and has now tested more than 30 frontier models.

That work is not academic. Findings from those tests are being used to strengthen real-world safeguards, including protections against cyber risks. Through close collaboration with industry, the national security community and our international partners, the Government have built a much deeper and more practical understanding of AI risks. We are also backing the institute’s alignment project, which will distribute up to £15 million to fund research to ensure that advanced AI systems remain controllable and reliable and follow human instructions, even as they become more powerful.

Several noble Lords mentioned the potential of artificial general intelligence and artificial superintelligence, as well as the risks that they could pose. There is considerable debate around the timelines for achieving both; some experts believe that AGI could be reached by 2030. We cannot be sure how AI will develop and affect society over the next five, 10 or 20 years, or perhaps even sooner. Navigating this future will require evidence-based foresight to inform action, technical solutions and global co-ordination. Without a shared scientific understanding of these systems, we risk underreacting to threats or overcorrecting against innovation.

Through close collaboration with companies, the national security community and our international partners, the Government have deepened their understanding of such risks, and AI model security has improved as a result. The Government will continue to take a long-term, science-led approach to understand and prepare for risks emerging from AI. This includes preparing for the possibility of rapid AI progress, which could have transformative impacts on society and national security.

We are not complacent. Just this month, the Technology Secretary confirmed in Parliament that the Government will look at what more can be done to manage the risks posed by AI chatbots. She also urged Ofcom to use its existing powers to ensure that any chatbots in scope of the Online Safety Act are safe for children. Some noble Lords may be aware that Ofcom already has the power to fine companies up to 10% of their revenue or £18 million, whichever is greater.

Several noble Lords have touched on regulation, and I will just cover it now. We are clear that AI is a general-purpose technology, with uses across every sector of the economy. That is why we believe most AI should be regulated at the point of use. Existing frameworks already matter. Data protection and equality law protect people’s rights and prevent discrimination when AI systems are used to make decisions about jobs, credit or access to services.

We also know that regulators need to be equipped for what is coming. That is why we are working with them to strengthen their capabilities and ensure they are ready to deal with the challenges that AI presents.

Security does not stop at our borders. The UK is leading internationally and driving collaboration with allies to raise standards, share scientific insight and shape responsible global norms for frontier AI. We lead discussions on AI at the G7, the OECD and the United Nations, and we are strengthening bilateral partnerships, including our ongoing collaboration with India as we prepare for the AI Impact Summit in Delhi next month. I hope this provides assurance to the noble Viscount, Lord Camrose. The AI Security Institute will continue to play a central role globally, including leading the International Network for Advanced AI Measurement, Evaluation and Science, helping to set best practice for model testing and safety worldwide.

In an AI-enabled world, it matters who owns the infrastructure, builds the models and controls the data, because these increasingly shape our lives. That is why we have established a Sovereign AI Unit, backed by around £500 million, to support UK start-ups across the AI ecosystem and ensure that Britain has a stake at every layer of the AI stack.

Several noble Lords asked about our dependence on US companies. Our sovereignty goals should indeed be resilience and strategic advantage, not total self-reliance. We have to face the fact that US companies are the main providers of today’s frontier model capabilities. Our approach is to ensure that the UK can use the best models in the world while protecting UK interests. To achieve this, we have established strategic partnerships with leading frontier model developers, such as the memoranda of understanding with Anthropic, OpenAI and Cohere, to ensure resilient access and influence the development of such capabilities.

We are also investing in advanced AI compute so that researchers can work on national priorities. We are creating AI growth zones across the country to unlock gigawatts of capacity by 2030. Through our advanced market commitment, we are helping promising UK firms to scale, win global business and deploy British chips alongside established providers.

We are pursuing AI safety with such determination because it is what unlocks opportunity. The UK should not be an AI taker. Businesses and consumers need confidence that AI systems are safe and reliable and do what they are supposed to do. That is why our road map to trusted third-party AI assurance is so important. Trust is what turns safety into growth. It is what allows innovation to flourish.

In January we published the AI Opportunities Action Plan, setting out how we will seize the benefits of this technology for the country. We will train 7.5 million workers in essential AI skills by 2030, equip 1 million students with AI and digital skills, and support talented undergraduates and postgraduates through scholarships at leading STEM universities. I hope this will be welcomed by the noble Lord, Lord Clement-Jones.

Lord Clement-Jones (LD)

My Lords, at that point I will just interrupt the Minister before the witching hour. The Minister has reiterated the approach of focusing governance on the user—that is, the sectoral approach—but is that not giving a free pass to general-purpose AI developers?

Lord Leong (Lab)

My Lords, I will respond quickly. We have to be very careful about the level at which we regulate. AI is multifaceted: the stack has different layers, such as the infrastructure, the data and the models. At which level are we going to regulate? We are working with the community to find out what is best before we come up with a regulatory solution. AI is currently regulated at the level of use.

Let me continue. We are appointing AI champions to work with industry and government to drive adoption in high-growth sectors. Our new AI for Science Strategy will help accelerate breakthroughs that matter to people’s lives.

In summary, safe and controllable AI does not hinder progress; rather, it underpins it. It is integral to our goal of leveraging AI’s transformative power and securing the UK’s role as an AI innovator, not merely a user. Safety is not about stopping progress. It is about maintaining human control over advances. The true danger is not overthinking this now; it is failing to think enough.