Asked by: Lord Patten (Conservative - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the threats to national security presented by the cutting and other interception of subsea communication cables.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government recognises the increasing threat to the homeland from state actors and that critical national infrastructure, including subsea cables, will continue to be a target.
As the threat landscape evolves, it is essential to ensure that our risk assessments remain robust and fit for purpose. All risks in the National Risk Register, including the risk related to subsea cables, are kept under review to ensure that they are the most appropriate scenarios to inform emergency preparedness and resilience activity. We are currently reviewing and updating our assessments of risks to the UK’s subsea telecommunications cables.
While individual cables are vulnerable to damage, the UK’s international connectivity is resilient, supported by 45 international cables as well as high‑capacity fibre links running through the Channel Tunnel.
DSIT continues to work closely with the Cabinet Office, the Ministry of Defence and other government departments to ensure the security and resilience of the UK’s subsea telecommunications infrastructure.
Asked by: Lord Patten (Conservative - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the risk that low-cost, open source AI models could be used to launch malicious attacks on UK security.
Answered by Lord Vallance of Balham - Minister of State (Department for Energy Security and Net Zero)
The Government has established a Central AI Risk Function (CAIRF), which brings together policymakers and AI experts with a mission to continuously identify, assess and prepare for AI-associated risks.
CAIRF develops and actively maintains the UK Government's AI Risk Register, which identifies individual risks associated with AI that could impact the UK, spanning national security, the economy and society.
In addition, the work of the AI Security Institute (AISI) forms part of this Government's efforts to tackle security threats from AI. AISI evaluates both closed and open-source AI models to assess the risks AI poses to security and public safety.
We are also mindful that open source can boost transparency and support AI safety research. The UK Government will carefully balance these important benefits alongside risks as it develops its regulatory approach.