
Written Question
Palestine Action
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Home Office:

To ask the Secretary of State for the Home Department, with reference to her comments to the BBC on 11 August 2025, whether her Department presented information to the courts, during legal proceedings relating to the proscription of Palestine Action, on people who object to that proscription because they do not know the full nature of the organisation as a result of court restrictions on reporting while serious prosecutions are under way; and if she will publish this information.

Answered by Dan Jarvis - Minister of State (Cabinet Office)

The material relied upon by the Court in its decision-making is referenced throughout the judgment, which is publicly available here: R (Ammori) v SSHD OPEN Judgment (final)

The open material referred to during the proceedings can be requested from the court in accordance with the Civil Procedure Rules on court documents (see PART 5 – COURT DOCUMENTS – Civil Procedure Rules – Justice UK). Any material submitted in closed proceedings is protected by the Justice and Security Act 2013 and will not be disclosed, for reasons of national security. It would not be appropriate to comment further during ongoing legal proceedings.

The Independent Reviewer of Terrorism Legislation has access to secret and sensitive national security information in order to carry out his role. He routinely publishes his findings in reports that are available on his website: https://terrorismlegislationreviewer.independent.gov.uk/


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, how her Department defines AI loss of control; and whether that definition is shared across Departments.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed, but we will do so on the basis of evidence, where serious gaps exist.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, whether his Department has been designated as the lead department for AI loss-of-control risks.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed, but we will do so on the basis of evidence, where serious gaps exist.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what emergency powers the Government holds to direct private AI developers during a national security incident involving advanced AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what protocols are in place to help ensure rapid information-sharing with AI companies during a national AI emergency.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what role the AI Safety Institute plays in national security preparedness for advanced AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what role the AI Security Institute plays in national security preparedness for advanced AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what assessment his Department has made of the adequacy of current risk modelling for frontier AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The AI Security Institute was established to deepen our understanding of frontier AI risks.

The Institute works with the national security community and government experts to ensure AI technology delivers on its potential for UK growth, while working with companies to assess and manage the potential risks the technology poses.

The Institute’s role is also to ensure AI risk evaluation and understanding is more scientifically rigorous and reliable.

Advancing the scientific field of AI safety will help the UK ensure it has the best evidence available to navigate the uncertain trajectories that advanced AI could take.


Written Question
Grok
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what information her Department holds on the Artificial Intelligence Security Institute assessment of xAI's Grok.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The AI Security Institute collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of possible AI impacts.

The Government does not give a running commentary on models being tested, or on which models we have been granted access to, due to commercial and security sensitivities.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential risks associated with advanced AI systems across government.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.