Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the risk that low-cost, open source AI models could be used to launch malicious attacks on UK security.
The Government has established a Central AI Risk Function (CAIRF), which brings together policymakers and AI experts with a mission to continuously identify, assess and prepare for AI-associated risks.
CAIRF develops and maintains the UK Government's AI Risk Register, which identifies individual risks associated with AI that could impact the UK, spanning national security, the economy and society.
In addition, the AI Security Institute's (AISI) work is part of this Government's efforts to tackle security threats from AI. AISI evaluates both closed and open source AI models to assess the risks AI poses to security and public safety.
We are also mindful that open source can boost transparency and support AI safety research. The UK Government will carefully balance these important benefits against the risks as it develops its regulatory approach.