Written Statements

Today, the Government are publishing our response to the consultation on the Artificial Intelligence (AI) Regulation White Paper: “A pro-innovation approach to AI regulation”.
The world is on the cusp of an extraordinary new era driven by advances in AI, which presents a once-in-a-generation opportunity for the British people to revolutionise our economy, transform public services for the better, and deliver real, tangible, long-term results for our country. The UK AI market is predicted to grow to over $1 trillion (USD) by 2035, unlocking everything from new skills and jobs to once unimaginable lifesaving treatments for cruel diseases like cancer and dementia. That is why I have made it my ambition for the UK to become the international standard-bearer for the safe development and deployment of AI.
We have been working hard to make that ambition a reality, and our plan is working. Last year, we hosted the world’s first AI safety summit, bringing industry, academia and civil society together with 28 leading AI nations and the EU to agree the Bletchley declaration, thereby establishing a shared understanding of the opportunities and risks posed by frontier AI.
We were also the first Government in the world to formally publish our assessment of the capabilities and risks presented by advanced AI; and to bring together a powerful consortium of experts into our AI Safety Institute, committed to advancing AI safety in the public interest.
With the publication of our AI Regulation White Paper in March, we set out our initial steps to develop a pro-innovation AI regulatory framework. Instead of designing a complex new regulatory system from scratch, the White Paper proposed five key principles for existing UK regulators to follow and a central function to ensure the regime is coherent and streamlined and to identify regulatory gaps or confusing overlaps. Our approach must be agile so it can respond to the unprecedented speed of development, while also remaining robust enough in each sector to address the key concerns around potential societal harms, misuse risks and autonomy risks.
This common-sense, pragmatic approach has been welcomed and endorsed both by the companies at the frontier of AI development and by leading AI safety experts. Google DeepMind, Microsoft, OpenAI and Anthropic all supported the UK’s approach, as did Britain’s budding AI start-up scene and many leading voices in academia and civil society, such as the Centre for Long-Term Resilience and the Centre for the Governance of AI.
Next steps on establishing the rules for governing AI
Since we published the White Paper, we have moved quickly to implement the regulatory framework. We are pleased that a number of regulators have already taken steps in line with our framework, such as the Information Commissioner’s Office, the Office for Nuclear Regulation and the Competition and Markets Authority.
We have taken steps to establish the central function to drive coherence in our regulatory approach across Government, starting by recruiting a new multidisciplinary team to conduct cross-sector assessment and monitoring to guard against existing and emerging risks in AI.
Further to this, we are strengthening the team working on AI within the Department for Science, Innovation and Technology across the newly established AI policy directorate and the AI Safety Institute. In recognition of the fact that AI has become central to the wider work of DSIT and Government, we will no longer maintain the branding of a separate “Office for AI”. Similarly, the Centre for Data Ethics and Innovation (CDEI) is changing its name to the Responsible Technology Adoption Unit to more accurately reflect its mission. The name highlights the directorate’s role in developing tools and techniques that enable responsible adoption of AI in the private and public sectors, in support of DSIT’s central mission.
In September we also announced the AI and digital hub—a pilot scheme for a brand-new advisory service run by expert regulators in the Digital Regulation Co-operation Forum. It will be laser-focused on helping companies get to grips with AI regulations so they can spend less time form-filling and more time getting their cutting-edge products from the lab on to the market and into British people’s lives.
Building on the feedback from the consultation, we are now focused on ensuring that regulators are prepared to face the new challenges and opportunities that AI can bring to their domains. This consultation response presents a plan to do just that. It sets out how we are building the right institutions and expertise to ensure that our regulation of AI keeps pace with the most pressing risks and can unlock the transformative benefits these technologies can offer.
To drive forward our plans to make Britain the safest and most innovative place to develop and deploy AI in the world, the consultation response announces over £100 million to support AI innovation and regulation. This includes a £10 million package to boost regulators’ AI capabilities, helping them develop practical tools to build the foundations of their AI expertise and ability to address risks in their domain.
We are also announcing a new commitment by UK Research and Innovation that future investments in AI research will be leveraged to support regulator skills and expertise. Further to this, we are announcing a boost of nearly £90 million for AI research, including £80 million through the launch of nine new research hubs across the UK and a £9 million partnership with the US on responsible AI as part of our international science partnership fund. These hubs, based in locations across the country, will enable AI to evolve and tackle complex problems across applications, from healthcare treatments to power-efficient electronics.
In addition, we are announcing £2 million of Arts and Humanities Research Council (AHRC) funding to support research that will help to define responsible AI across sectors such as education, policing and creative industries.
In the coming months, we will formalise our regulator co-ordination activities by establishing a steering committee with Government representatives and key regulators. We will also be conducting targeted consultations on our cross-sectoral risk register and monitoring and evaluation framework from spring to make sure our approach is evidence-based and effective.
We are also taking steps to improve the transparency of this work, which is key to building public trust. To this end, we are calling on regulators to publicly set out their approaches to AI in their domains by April 2024, to increase industry confidence and ensure the UK public can see how we are addressing the potential risks and benefits of AI across the economy.
Adapting to the challenges posed by highly capable general-purpose AI systems
The challenges posed by AI technologies will ultimately require legislative action across jurisdictions, once understanding of risk has matured. However, legislating too soon could stifle innovation, place undue burdens on businesses, and prevent us from fully realising the enormous benefits AI technologies can bring. Furthermore, our principles-based approach has the benefit of being agile and adaptable, allowing us to keep pace with this fast-moving technology.
That is why we established the AI Safety Institute (AISI) to conduct safety evaluations on advanced AI systems, drive foundational safety research, and lead a global coalition of AI safety initiatives. These insights will ensure the UK responds effectively and proportionately to potential frontier risks.
Beyond this, the AISI has built a partnership network of over 20 leading organisations, allowing it to act as a hub that galvanises safety work in companies and academia; Professor Yoshua Bengio, as chair, is leading the UK’s international scientific report on advanced AI safety, which brings together 30 countries as well as the EU and the UN; and the AISI is continuing its regular engagement with the leading AI companies that signed up to the Bletchley declaration.
In the consultation response, we build on our pro-innovation framework and pro-safety actions by setting out our early thinking on future targeted, binding requirements on the developers of highly capable general-purpose AI systems. The consultation response also sets out the key questions and considerations we will be exploring with experts and international partners as we continue to develop our approach to the regulation of the most advanced AI systems.
Driving the global conversation on AI governance
Building on the historic agreements reached at the AI safety summit, today we also set out our broader plans for how the UK will continue to drive the global debate on the governance of AI.
Beyond our work through the AI Safety Institute, this includes taking a leading role in multilateral AI initiatives such as the G7, OECD and the UN, and deepening bilateral relationships, building on the success of agreements with the US, Japan, the Republic of Korea and Singapore.
This response paper is another step forward for the UK’s ambitions to lead in the safe development and deployment of AI. The full text of the White Paper consultation response can be found on gov.uk.
[HCWS247]