AI Superintelligence Debate

Lord Tarassenko (Crossbench - Life peer)
Tuesday 3rd February 2026

Lords Chamber
Lord Leong (Lab)

My Lords, AI risks do not respect national borders and require sustained international leadership. I am pleased that the United Kingdom remains at the forefront, convening global partners to build shared understanding of frontier risks and mitigation. We work with the G7, the G20, the OECD, the United Nations and the Council of Europe. Through multilateral forums and bilateral partnerships, we are championing international safety standards and promoting transparency. Our approach ensures that global governance is rooted in democratic values and human rights, fostering a secure, responsible environment in which innovation can flourish safely across all territories.

Lord Tarassenko (CB)

My Lords, the second International AI Safety Report is being published today. It is a scientific assessment, guided by 100 experts from 30 different countries and chaired by Yoshua Bengio, one of the three so-called godfathers of AI. A key finding is that general-purpose AI capabilities are improving more quickly than anticipated. Does the Minister agree that it is now time for DSIT to set up a commission or working group of experts, convened jointly by the AI Security Institute and the Alan Turing Institute, to investigate the potential impact of this increasing rate of progress towards general-purpose superintelligent AI?

Lord Leong (Lab)

My Lords, the Government are taking a proactive, evidence-led approach to the potential emergence of advanced AI. We have empowered the AI Security Institute, the world’s first state-backed body of its kind, to carry out rigorous testing of frontier models against clear red lines, including autonomous self-replication and deception. In the last couple of months, the AI Security Institute has conducted more than 30 such tests, and it will be working with partners to ensure that AI is safe for the general public.