UK Artificial Intelligence Policy Debate
Michelle Donelan (Conservative - Chippenham), Department for Science, Innovation & Technology
Written Statements

I am pleased to provide the House with an update on developments in the UK Government’s artificial intelligence policy in recent months.
AI promises to revolutionise our economy, society and everyday lives, bringing with it enormous opportunities but also significant new risks. Led by the Department for Science, Innovation and Technology, the UK has established itself as a world leader in driving responsible, safe AI innovation and has committed to host the first major international summit of its kind on the safe use of AI, to be held at Bletchley Park on 1 and 2 November 2023.
AI Safety Summit
The AI safety summit will bring together key countries, as well as leading technology organisations, academia and civil society to inform rapid national and international action at the frontier of AI development. The summit will focus on risks created or significantly exacerbated by the most powerful AI systems—for example, the proliferation of access to information that could undermine biosecurity. In turn, the summit will also consider how safe frontier AI can be used for public good and to improve people’s lives—from lifesaving medical technology to safer transport. It will build on important initiatives already being taken forward in other international fora, including at the UN, OECD, G7 and G20, by agreeing practical next steps to address risks from frontier AI.
On 4 September, the Government launched the start of formal pre-summit engagement with countries and a number of frontier AI organisations. As part of an iterative and consultative process, the Government published the five objectives that will be progressed. These build upon initial stakeholder consultation and evidence gathering, and will frame the discussion up to and at the summit:
a shared understanding of the risks posed by frontier AI and the need for action;
a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks;
appropriate measures that individual organisations should take to increase frontier AI safety;
areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance; and
a showcase of how ensuring the safe development of AI will enable AI to be used for good globally.
I look forward to keeping Parliament updated as plans for the summit progress.
Frontier AI Taskforce
Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly. Earlier this year, the Government announced £100 million to set up an expert taskforce to help the UK adopt the next generation of safe AI—the first of its kind.
On 7 September, we renamed the taskforce—formerly the Foundation Model Taskforce—the Frontier AI Taskforce, explicitly acknowledging its role in evaluating risk at the frontier of AI, and systems which could pose significant risks to public safety and global security.
Since the taskforce’s chair, Ian Hogarth, was appointed 12 weeks ago, the taskforce has made rapid progress, recruiting its external advisory board and research teams and developing partnerships with leading frontier AI organisations, to help develop innovative approaches to addressing the risks of AI and harnessing its benefits. I am pleased to be welcoming seven leading advisers to guide and shape the taskforce’s work through its external advisory board. These include: the Turing Award laureate Yoshua Bengio; the GCHQ Director, Anne Keast-Butler; the Deputy National Security Adviser, Matt Collins; the Chief Scientific Adviser for National Security, Alex Van Someren; the former Chair of the Academy of Medical Royal Colleges, Dame Helen Stokes-Lampard; the Alignment Research Centre researcher Paul Christiano; and the Prime Minister’s representative for the AI safety summit, Matt Clifford, who will join as vice-chair to unite the taskforce’s work with preparations for the summit—all of whom will turbocharge the taskforce’s work by offering expert insight.
We are also drawing on experts to build a world-leading research team. Oxford researcher Yarin Gal has been confirmed as the first taskforce research director. Cambridge researcher David Krueger will also be working with the taskforce as it scopes its research programme in the run-up to the summit. The research team will sit alongside a dedicated team of civil servants—overseen by a senior responsible officer in my Department, reporting to the DSIT permanent secretary as accounting officer. Together, these teams will work to develop sophisticated safety research capabilities for the UK, strengthen UK AI capability and deliver public sector use cases in frontier AI models.
Industry collaboration, including internationally, forms the backbone of the UK’s approach to shared AI safety, and the work of the taskforce will be no different. The taskforce is harnessing established industry expertise through partnerships with leading AI companies and non-profits, a number of which were outlined in our recent announcement. These partnerships will unlock advice on the national security implications of frontier AI, as well as broader support in assessing the major societal risks posed by AI systems.
AI Regulation
We are moving quickly to establish the right guardrails for AI to drive responsible, safe innovation. In March, we published the AI regulation White Paper, which set out our first steps towards establishing a regulatory framework for AI. We proposed five principles to govern AI, and committed to establishing mechanisms to monitor AI risk, and co-ordinate, evaluate and adapt the regulatory framework as this technology evolves. We received responses from over 400 individuals and organisations across regulators, industry, academia and civil society. We will be publishing our response to the consultation later this year, to ensure we can take into account the outcomes of the AI safety summit in November.
Since publishing the White Paper, we have taken rapid steps to implement our regulatory approach. I am pleased to confirm that my Department has now established a central AI risk function, which will identify, measure and monitor existing and emerging AI risks using expertise from across Government, industry and academia, including the taskforce. It will allow us to monitor risks holistically as well as to identify any potential gaps in our approach.
We committed to an iterative approach that will evolve as new risks or regulatory gaps emerge. We note the growing concern around the risks to safety posed by our increasing use of AI, particularly the advanced capabilities of frontier AI and foundation models. Our work through the taskforce offers vital insights into the issue and we will be convening nations to examine these particular risks at the international level. We will be providing a wider update on our regulatory approach through our response to the AI regulation White Paper later this year.
Alongside this, we are working closely with regulators. Many have started to proactively and independently take action in line with our proposed AI framework, including the Competition and Markets Authority, which yesterday published a report on its initial review of AI foundation models; the Medicines and Healthcare products Regulatory Agency, which has published a road map for software and AI as a medical device; and the Office for Nuclear Regulation, which is piloting an independent sandbox for the use of AI in the nuclear sector, with support from the regulators’ pioneer fund. This demonstrates how our expert UK regulators are taking innovative, world-leading approaches to ensuring AI safety and effectiveness.
We are also examining ways to improve co-ordination and clarity across the regulatory landscape. This includes our work with the Digital Regulation Cooperation Forum (DRCF) to pilot a multi-regulator advisory service for AI and digital innovators, which will be known as the DRCF AI and digital hub. This will provide tailored support to innovators to navigate the AI and wider digital regulatory landscape and capture important insights to support the design and delivery of our AI regulatory framework.
[HCWS1054]