Superintelligent AI


Thursday 29th January 2026


Lords Chamber
The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology (Baroness Lloyd of Effra) (Lab)

My Lords, I thank my noble friend Lord Hunt for initiating this debate on such an important topic, and all noble Lords from around the House for their contributions today. This Government believe that advanced AI has transformative potential for the UK: from scientific innovation and public service reform to economic growth, as many noble Lords have set out today. However, as we realise these benefits, we need to make sure that AI remains secure and controllable. New technologies bring with them novel risks, and we have heard today from many noble Lords about the directions in which this technology might take us.

As has been mentioned, the UK is committed to a context-based regulatory approach whereby most AI systems are regulated at the point of use and by our existing regulators, who are best placed to understand the risks and the context of AI deployment in their sectors. Regulators are already acting. The ICO has released guidance on AI and data protection, and last year Ofcom published its strategic approach to AI, which sets out how it is addressing AI-related risks. My noble friend asked about Ofcom’s expertise and resources. Ofcom has recruited expert online safety teams from various sectors, including regulation, tech platforms, law enforcement, civil society and academia, and is being resourced to step up and regulate in this area. The FCA has also announced a review into how advances in AI could transform financial services.

As my noble friend also mentioned, the Government are working proactively with regulators, through both the Digital Regulation Cooperation Forum and the Regulatory Innovation Office, to ensure that regulators have the capabilities to regulate what we see today and to anticipate the regulation that may be needed in the future, both in respect of AI and of the other scientific and technological developments coming towards us. We heard many suggestions today on how we might regulate further. The Government are prepared to step up to the challenges of AI and take further action. We will keep your Lordships' House updated on any proposals in this area. However, I am unable to speculate on any further legislation ahead of parliamentary announcements.

We have heard much testimony to the abilities and expertise of the AI Security Institute. Equally, as other noble Lords have mentioned, and as the noble Lord, Lord Tarassenko, made clear in bringing precision to the definitions here, we cannot be sure how AI will develop and impact society over the next five, 10 or 20 years. We need to navigate this future with evidence-based foresight, informing action through technical solutions and global co-ordination.

We should be very proud of our world-leading AI Security Institute: it is the centre of UK expertise, advancing our scientific understanding of AI capabilities and their associated risks. Close collaboration with AI labs has enabled the institute to test more than 30 models to understand their potentially harmful capabilities, and we think this is the best way to proceed. It is having a real-world impact. The institute's testing is making models safer, with findings being used by industry to strengthen AI model safeguards. It is carrying out foundational research to discover methods for building AI systems that are beneficial, reliable and aligned with human values.

One of the AISI's priorities is tracking the development of AI capabilities that would contribute to AI's ability to evade human control, a concern raised many times in the debate today. It supports research in this field through the Alignment Project, a funding consortium distributing £15 million to accelerate research projects. To ensure that the Government act on these insights, the institute works with the Home Office, the NCSC and other national security organisations to share its evidence on the most serious risks posed by AI.

The noble Baronesses, Lady Foster and Lady Neville-Jones, spoke about the risks associated with AI cyber capabilities. We are monitoring those closely, in terms of both the risks posed and the solutions for combating the cyber risks to which AI can contribute. We have developed the AI Cyber Security Code of Practice to help secure AI systems and the organisations that develop and deploy them. That is another example of the UK setting standards that others can follow, a point also made by noble Lords today when they spoke about how the UK can contribute to the safe development of AI. The institute will continue to evaluate and scan the horizon to ensure that we focus our research on the most critical risks.

As has been pointed out, AI is being developed in many nations and will also have impacts across borders and across societies, so international collaboration is essential. The Deputy Prime Minister set out to the UN Security Council last autumn the United Kingdom’s commitment to using AI responsibly, safely, legally and ethically. We continue to work with international partners to achieve this.

The AI Security Institute is world leading, with global impact. Since December it has assumed the role of co-ordinator for the International Network for Advanced AI Measurement, Evaluation and Science. That network brings together 10 countries, including Commonwealth countries such as Canada, Australia and Kenya, as well as the US, the EU and Singapore, to shape and advance the science of AI evaluations globally. That is important because boosting public trust in the technology is vital to AI adoption. It helps to unlock groundbreaking innovations, deliver new jobs and forge new opportunities for business innovators to scale up and succeed. The UK has shaped the passage of key international AI initiatives such as the Global Dialogue on AI Governance and the Independent International Scientific Panel on AI, both at the UN, and the Framework Convention on AI at the Council of Europe. The convention is the world's first international agreement on AI and considers it within the Council's remit of human rights, democracy and the rule of law, seeking to establish a clear international baseline that grounds AI in our shared values.

I shall close by speaking not only about the importance of the UK taking the risks of AI seriously, but about our conviction that AI will be a driver of national renewal and our ambition to be a global leader in its development and deployment. That approach is what will keep us safest of all. Our resilience and strategic advantage rest on our being competitive in an AI-enabled world. It matters who influences and builds the models, the data and the AI infrastructure.

That is why we are supporting a full plan, including our Sovereign AI Unit, which is investing over £500 million to help innovative UK start-ups seed and expand in the AI sector. It is why we are progressing on infrastructure, including the announcement of five AI growth zones across the UK to accelerate the delivery of data centres. It is why we are expanding national compute capacity and equipping everyone, students and workers alike, with digital and AI skills. We want to benefit from AI's transformative power, so we need to adopt it as well as manage its risks. That is why we have also committed to examining the impact of AI on our workforce through the AI and future of work unit. We are working domestically and collaborating internationally to facilitate responsible innovation, ensuring that the UK stands to benefit from all that AI has to offer.