Artificial Intelligence: Safety

(asked on 21st April 2026)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential merits of requiring independent safety assessments before AI systems with dangerous offensive capabilities are developed.


Answered by
Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
This question was answered on 28th April 2026

AI is a general-purpose technology with a wide range of applications, which is why the UK believes that the vast majority of AI systems should be regulated at the point of use. In response to the AI Action Plan, the government committed to work with regulators to boost their capabilities.

The role of the AI Security Institute (AISI) is to build an evidence base of AI-related risks, to inform government decision-making and help make AI more secure and reliable.

AISI works in close collaboration with AI companies to assess model safeguards and suggest mitigations. To date, AISI has tested over 30 models from leading AI companies, including OpenAI, Google DeepMind and Anthropic. AISI’s findings lead to tangible changes to AI models before deployment, reducing the risk from day one.
