Thursday 9th November 2023

Written Statements
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Paul Scully)

I am providing the House with an update following the UK’s AI safety summit on 1 and 2 November 2023 at Bletchley Park—the birthplace of computer science.



As the Prime Minister set out in his speech on 26 October, the world stands at the crossroads of a technological revolution, and if we are to seize the benefits of AI, we must tackle the risks. AI is developing at an unprecedented speed, driven partly by greater computing power and innovations in model design. The capabilities of powerful AI systems will only increase, with profound economic and societal consequences, bringing unprecedented opportunities and risks. I am proud to share what the UK has achieved at this defining moment.



The summit was a first-of-its-kind event that has firmly established the UK as a global leader on AI safety. We brought together the world’s leading powers and major AI industry, academia and civil society organisations to build a shared understanding of the risks and opportunities of frontier AI; to acknowledge the need for action; and to agree to work together to address these in the interest of all humanity. These common goals and shared principles were encapsulated in the Bletchley declaration, signed by 28 Governments representing not only the current world leaders in AI development, but also a majority of the world’s population and economy.



Over the two days, an unprecedented range of attendees agreed a raft of measures to support those objectives. These included:



A joint agreement for frontier AI models to be tested for safety both before and after they are rolled out;

A shared ambition to invest in public sector AI capability, to ensure that Governments can both steward industry effectively and directly scrutinise it from a technical standpoint;

A “State of the Science” report, led by one of the “Godfathers of AI”, Yoshua Bengio, which will collate and distil the latest insights from across the global community to help build a shared understanding of the capabilities and risks posed by frontier AI.

The UK also used the lead-up to the summit to gather feedback and insights from hundreds of UK stakeholders, and to work with and encourage leading frontier AI organisations to publish their safety policies. We also launched the world’s first AI safety institute, which will build public sector capability to conduct safety testing and research into AI safety, in partnership with countries around the world.



I am pleased to share more details of the Bletchley declaration and our work against each of our summit objectives.



The Bletchley declaration by countries attending the AI safety summit



The landmark Bletchley declaration agreed an initial, mutual understanding of frontier AI, and the risks associated with it, and set out that countries will work in an inclusive manner to ensure the deployment of responsible, trustworthy, and human-centric AI that is safe. It committed countries to further collaborate on establishing a shared scientific and evidence-based understanding of the relevant risks.



Objective 1—a shared understanding of the risks posed by frontier AI and the need for action; and



Objective 2—a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks.



Informed by demonstrations from the UK’s frontier AI taskforce and a discussion paper on the capabilities and risks of frontier AI, summit attendees learned of the impact and urgency of the key risks from frontier AI. They recognised that potential harms from misuse, loss of control, and the potential for leaps in capability were particularly pressing.



Attendees engaged in substantive discussions on the impact of AI on wider societal issues, including potential harms caused by disinformation and the amplification of existing inequalities. Participants expressed a range of views on which risks should be prioritised, noting that addressing risks at the frontier of AI is not mutually exclusive from addressing existing AI risks and harms.



With the speed of technological change, participants affirmed the importance of continued collaboration and agreed on the urgency of establishing a shared international consensus on the capabilities and risks of frontier AI. In order to maintain public trust, attendees agreed that future decisions on AI safety must be underpinned by appropriate evidence, and recognised the necessity of fast, flexible and collaborative action by all actors, in particular Governments and frontier AI developers, to further understand those risks and ensure effective oversight.



All countries in attendance welcomed the UK’s initiative to deliver a first-of-its-kind state of the science report on frontier AI. Building on the commitment to scientific and evidence-based collaboration set out in the Bletchley declaration, the report will facilitate a shared, science-based understanding of the risks and capabilities associated with frontier AI. The UK has commissioned Yoshua Bengio, a pioneer and Turing award-winning AI academic, to chair the delivery of the report. He will be supported by a group of leading AI academics and advised by an inclusive, international expert advisory panel made up of representatives from participating countries.



Objective 3—appropriate measures that individual organisations should take to increase frontier AI safety; and



Objective 4—areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance.



As set out in the Bletchley declaration, no single part of society can address the impacts of frontier AI alone; delivering on the potential of AI requires sustained attention of Governments, businesses, academia, and civil society, with a particularly strong responsibility for actors developing frontier AI capabilities.



Ahead of the summit, the UK published an overview of emerging frontier AI safety processes and associated practices to share best practice on how such AI systems can be developed in a safe manner. The UK was, therefore, pleased to see leading frontier AI organisations (Amazon, Anthropic, Google DeepMind, Inflection, Meta, Microsoft, OpenAI) respond by publishing their own AI safety policies. Publishing these policies formalises each organisation’s position on AI safety, enables public scrutiny of their outputs, and encourages all frontier AI developers to consider how they can build trust through the further development and publication of such policies.



Building on this, countries and leading AI companies agreed on the importance of bringing together the responsibilities of Governments and AI developers and agreed to a plan for safety testing at the frontier. Participating countries committed, depending on their circumstances, to the development of appropriate state-led evaluation and safety research, while participating companies agreed that they would support the next iteration of their models to undergo appropriate independent evaluation and testing.



As an initial contribution to this new collaboration, the UK detailed its launch of the world’s first AI safety institute, which will build public sector capability to conduct safety testing and research into AI safety. In exploring all the risks, from social harms including bias and misinformation, through to the most extreme risks of all, including the potential for loss of control, the UK will seek to make the work of the safety institute widely available. The UK welcomed commitments from companies in attendance to work with the institute to allow for pre-deployment testing of their frontier AI models and commitments to work in partnership with other countries’ institutes, including the US.



Objective 5—showcase how ensuring the safe development of AI will enable AI to be used for good globally.



Attendees together recognised a shared ambition to unlock the significant potential of frontier AI, which has the ability to transform economies and societies for the better.



Participants welcomed the exchange of ideas and evidence on current and upcoming initiatives, including individual countries’ efforts to utilise AI in public service delivery and elsewhere to improve human wellbeing.



The UK will continue to be a leader on AI for good, and I welcome the announcements of our new £100 million fund for an AI life sciences accelerator mission to bring cutting-edge AI to bear on some of the most pressing health challenges facing society, and a £32 million philanthropic partnership to shape the future of our best-in-class UK Biobank. AI also has extraordinary potential to support teachers and students, which is why the Government are investing £118 million into the AI skills base, including postgraduate research centres, a new visa scheme, and postgraduate AI scholarships.



Many participants at the summit set out that for AI to be inclusive, it must also be accessible. I was pleased that the UK, with Canada, the United States of America, the Bill and Melinda Gates Foundation, and other partners, announced £80 million for a new AI for development collaboration, working with innovators and institutions across Africa to support the development of responsible AI.



Keeping up momentum



With the frontier of AI constantly moving, the ambitions of the Bletchley declaration and the summit discussions cannot be rooted in a single moment.



I am pleased that the Republic of Korea has agreed to co-host a mini virtual summit on AI in the next six months, with France to host the next in-person summit a year from now.



The summit would not have had this level of success without the engagements we had in the weeks leading up to it. Ministers and senior officials led an inclusive public dialogue on international safety. These substantive, practical discussions not only informed and enriched our conclusions and conversations at the summit, but also allowed a range of voices to be heard in shaping the key policy decisions.



We will continue these conversations with key stakeholders, to build on the conclusions reached at the summit and to ensure that international agreements are underpinned by a robust domestic regulatory framework for AI.



We will be publishing the AI regulation White Paper response by the end of this year to further set out our next steps on our approach to this fast-paced and transformative technology.

[HCWS21]