
Written Question
Artificial Intelligence: National Security
Tuesday 25th February 2025

Asked by: Lord Patten (Conservative - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of the risk that low-cost, open source AI models could be used to launch malicious attacks on UK security.

Answered by Lord Vallance of Balham - Minister of State (Department for Science, Innovation and Technology)

The Government has established a Central AI Risk Function (CAIRF), which brings together policymakers and AI experts with a mission to continuously identify, assess and prepare for AI-associated risks.

CAIRF develops and maintains the UK Government's AI Risk Register, which identifies individual risks associated with AI that could impact the UK, spanning national security, the economy and society.

In addition, the AI Security Institute's (AISI) work is part of this Government's efforts to tackle security threats from AI. AISI evaluates both closed and open source AI models to assess the risks AI poses to security and public safety.

We are also mindful that open source can boost transparency and support AI safety research. The UK Government will carefully balance these important benefits alongside risks as it develops its regulatory approach.