Artificial Intelligence

(asked on 16th June 2023)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what recent assessment she has made of the potential risks of artificial intelligence large language models.


Answered by
Paul Scully
This question was answered on 26th June 2023

The rapid acceleration of AI foundation models presents enormous opportunities for productivity and public good, with an estimated $7 trillion in global growth over the next 10 years. However, this technology could also pose significant national security and safety risks. It is important to ensure the right guardrails are in place, as doing so will let us realise the full benefits of this technology.

The Government has published a White Paper setting out its proposed approach to AI regulation, which is context-based, proportionate and adaptable, drawing on the expertise of regulators and encouraging them to consider AI in their own sectors. A central risk function will undertake horizon scanning to identify new and emerging AI risks.

The Government has also committed an initial £100 million to set up the Foundation Model Taskforce, which will build UK capabilities in foundation models, leverage our existing strengths, and act as a global standard bearer for AI safety.

The UK is well positioned to lead the world in AI safety. We have announced plans to host a global AI safety summit later this year, convening leading nations, industry and academia to drive targeted, rapid international action to guarantee safety and security at the frontier of this technology.