Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps the Government is able to take to delay or prohibit the public release of a frontier AI model in instances when the UK AI Security Institute assesses that model as posing a serious risk of assisting users in developing chemical, biological, radiological, or nuclear weapons.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
We are optimistic about how AI will transform the lives of British people for the better, but advanced AI could also lead to serious security risks.
The Government believes that AI should be regulated at the point of use, and takes a context-based approach. Sectoral laws give powers to take steps where there are serious risks - for example the Procurement Act 2023 can prevent risky suppliers (including those of AI) from being used in public sector contexts, whilst a range of legislation offers protections against high-risk chemical and biological incidents.
This approach is complemented by the work of the AI Security Institute, which works in partnership with AI labs to understand the capabilities and impacts of advanced AI, and develop and test risk mitigations.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether the Government has established a defined threshold of dangerous capability in frontier AI models, including capabilities relating to chemical, biological, radiological, or nuclear weapons, which would trigger Government action.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The Department for Science, Innovation and Technology (DSIT) has policy responsibility for promoting responsible AI innovation and uptake. Risks related to chemical, biological, radiological, and nuclear weapons (and other dangerous weapons), including defining thresholds for harm in these domains, are managed by a combination of the Home Office, Foreign, Commonwealth and Development Office, Cabinet Office, and the Ministry of Defence. DSIT does not set thresholds for dangerous capabilities in risk domains owned by other departments.
The AI Security Institute (AISI), as part of DSIT, focuses on researching emerging AI risks with serious security implications, such as the potential for AI to help users develop chemical and biological weapons. AISI works with a broad range of experts and leading AI companies to understand the capabilities of advanced AI and advise on technical mitigations. AISI’s research supports other government departments in taking evidence-based action to mitigate risks whilst ensuring AI delivers on its potential for growth. AISI’s Frontier AI Trends Report, published in December 2025, outlines how frontier AI risks are expected to develop in the future.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether the Government has established thresholds for dangerous weapons-related capabilities in frontier AI models.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The Department for Science, Innovation and Technology (DSIT) has policy responsibility for promoting responsible AI innovation and uptake. Risks related to chemical, biological, radiological, and nuclear weapons (and other dangerous weapons), including defining thresholds for harm in these domains, are managed by a combination of the Home Office, Foreign, Commonwealth and Development Office, Cabinet Office, and the Ministry of Defence. DSIT does not set thresholds for dangerous capabilities in risk domains owned by other departments.
The AI Security Institute (AISI), as part of DSIT, focuses on researching emerging AI risks with serious security implications, such as the potential for AI to help users develop chemical and biological weapons. AISI works with a broad range of experts and leading AI companies to understand the capabilities of advanced AI and advise on technical mitigations. AISI’s research supports other government departments in taking evidence-based action to mitigate risks whilst ensuring AI delivers on its potential for growth. AISI’s Frontier AI Trends Report, published in December 2025, outlines how frontier AI risks are expected to develop in the future.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, following Google DeepMind's provision of pre-deployment access to the UK AI Security Institute for safety testing of Gemini 3, whether the Institute received equivalent pre-deployment access to the most recent frontier AI models developed by (a) OpenAI, (b) Anthropic, (c) xAI, and (d) Meta prior to their public release.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The Government does not give a running commentary on models being tested or which models we have been granted access to due to commercial and security sensitivities. Where possible, given these sensitivities, the AI Security Institute aims to publish results.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether the Secretary of State or Ministers in the Department have received representations from AI companies regarding the content or timing of the proposed AI Bill.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The Government engages with a wide range of stakeholders on its approach to regulating Artificial Intelligence, including AI companies, academics, and civil society groups.
Details of Ministerial meetings with external organisations are published in the quarterly transparency returns.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether his Department provides guidance to businesses on the potential impact of AI systems on employment.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
We want to ensure that people have access to good, meaningful work. AI is already transforming workplaces, demanding new skills, and augmenting existing ones. Government is working to harness its benefits to boost growth, productivity, living standards, and worker wellbeing, while mitigating the risks.
The Department for Education published an analysis in 2023 outlining The impact of AI on UK jobs and training. We are currently considering our approach to updating this analysis.
Further to this, the Get Britain Working White Paper outlines how government will address labour market challenges and spread the opportunity and economic prosperity that AI presents to the British public. This includes launching Skills England to create a shared national plan to boost the nation’s skills, creating more good jobs through our modern Industrial Strategy, and strengthening employment rights through DBT’s Plan to Make Work Pay.
DSIT has also published guidance for businesses adopting AI, focusing on good practice AI assurance when procuring and deploying AI systems. AI assurance can help manage risks and build trust, supporting businesses to assess and mitigate the potential impacts of AI adoption.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department is taking to introduce skills retraining and workforce support measures, in the context of the deployment of AI technologies in workplaces.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
We want to ensure that people have access to good, meaningful work. AI will impact the labour market and Government is working to harness its benefits in terms of boosting growth, productivity, living standards, and worker wellbeing, while mitigating the risks. We are planning for varied outcomes and monitoring data to track and prepare for these. The Get Britain Working White Paper sets out how we will address key challenges, including giving people the skills to get those jobs, spreading opportunity, and fixing the foundations of our economy to seize AI’s potential.
The Government is supporting workforce readiness for AI through a range of initiatives. The new AI Skills Hub, developed by Innovate UK and PwC, provides streamlined access to digital training. This will support government priorities through tackling critical skills gaps and improving workforce readiness. We are also partnering with 11 major companies to train 7.5 million UK workers in essential AI skills by 2030.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether the AI Security Institute will be given statutory powers to (a) carry out audits, (b) approve the training of powerful AI models and (c) shut down unsafe systems.
Answered by Feryal Clark
Artificial intelligence is the defining opportunity of our generation, and the Government is taking action to harness its economic benefits for UK citizens. As set out in the AI Opportunities Action Plan, we believe most AI systems should be regulated at the point of use, with our expert regulators best placed to do so. Departments are working proactively with regulators to provide clear strategic direction and support them on their AI capability needs. Through well-designed and implemented regulation, we can fuel fast, wide and safe development and adoption of AI.
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether he plans to give statutory powers to the AI Security Institute.
Answered by Feryal Clark
Artificial intelligence is the defining opportunity of our generation, and the Government is taking action to harness its economic benefits for UK citizens. As set out in the AI Opportunities Action Plan, we believe most AI systems should be regulated at the point of use, with our expert regulators best placed to do so. Departments are working proactively with regulators to provide clear strategic direction and support them on their AI capability needs. Through well-designed and implemented regulation, we can fuel fast, wide and safe development and adoption of AI.