Artificial Intelligence: Research

(asked on 22nd June 2023)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, whether she has made an assessment of the implications for her policies of the open letter asking industry leaders to pause artificial intelligence research, published by the Future of Life Institute on 22 March 2023.

Answered by
Paul Scully
This question was answered on 30th June 2023

It is important that industry voices are actively engaged in the discourse around responsible AI. British-based companies, such as DeepMind, are at the forefront of responsible innovation. However, it should be noted that questions have been raised regarding the veracity of some of the signatures on the open letter on Artificial Intelligence published by the Future of Life Institute (FLI). Some of the researchers whose work was cited in the letter have also reportedly raised concerns. It is also important to note that the letter is not expressly targeted towards the UK or any other government.

Government recognises the need to act to adapt the way in which we regulate AI as systems become more powerful and are put to different uses. As Sir Patrick Vallance highlighted in his regulatory review, there is a small window of opportunity to get this right and build a regulatory regime that enables innovation while addressing the risks. Government agrees that a collaborative approach is fundamental to addressing AI risk and supporting responsible AI development and use for the benefit of society. The AI regulation white paper we published on 29 March identifies “trustworthy”, “proportionate” and “collaborative” as key characteristics of the proposed AI regulation framework.

The AI regulation white paper sets out principles for the responsible development of AI in the UK. These principles, such as safety, fairness and accountability, are at the very heart of our approach to ensuring the responsible development and use of AI. We will also establish a central risk function to bring together cutting-edge knowledge from industry, regulators, academia and civil society – including skilled computer scientists with a deep technical understanding of AI – to monitor future risks and adapt our approach if necessary. This is aligned with the calls to action in FLI’s letter.

In addition, our Foundation Model Taskforce has been established to strengthen UK capability – in a way that is aligned with the UK’s values – as this potentially transformative technology develops.

The approach to AI regulation outlined in the AI regulation white paper is also complemented by parallel work on AI standards, supported by the AI Standards Hub launched in October 2022, and by the Centre for Data Ethics and Innovation’s AI Assurance Roadmap, published in December 2021. Taken together, our holistic approach to AI governance, combining regulation with standards development and AI assurance, is in line with efforts to develop shared safety protocols, and will at the same time allow the UK to benefit from AI technologies while protecting people and our fundamental values.
