
Written Question
Pornography: Regulation
Thursday 5th March 2026

Asked by: Rebecca Paul (Conservative - Reigate)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, if she will make it her policy that types of pornographic content that it would be illegal to distribute offline, such as scenes depicting incest and scenes of simulated child abuse, are subject to equivalent controls online.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The Online Safety Act (OSA) already places robust duties on online platforms to tackle illegal and harmful pornographic content. Platforms are required to prevent users from encountering such content, and services that host or allow access to pornography must have effective measures, such as age verification, to protect children. In 2025, the government announced that strangulation will be made a priority offence under the OSA, requiring platforms to take swift action against this content.


Following the Independent Pornography Review, a cross-government joint team has been established to inform the government’s approach to pornography policy.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, how her Department defines AI loss of control; and whether that definition is shared across Departments.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that it will legislate on AI where needed, but will do so on the basis of evidence where serious gaps exist.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, whether her Department has been designated as the lead department for AI loss-of-control risks.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that it will legislate on AI where needed, but will do so on the basis of evidence where serious gaps exist.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what emergency powers the Government holds to direct private AI developers during a national security incident involving advanced AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what protocols are in place to help ensure rapid information-sharing with AI companies during a national AI emergency.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what role the AI Safety Institute plays in national security preparedness for advanced AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what role the AI Security Institute plays in national security preparedness for advanced AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.


Written Question
Social Media: Safety
Thursday 5th March 2026

Asked by: Roz Savage (Liberal Democrat - South Cotswolds)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what measures are in place to ensure compliance by social media platforms with safety duties under the Online Safety Act 2023, particularly in relation to the protection of younger users.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The Online Safety Act (the Act) requires services, including social media, to protect children from illegal, harmful and age-inappropriate content.

Both the Act’s illegal duties and child safety duties are now in force, with Ofcom having substantial enforcement powers including the ability to issue fines of up to £18 million or 10% of platforms’ qualifying worldwide revenue. Since the duties came into force, Ofcom has opened several enforcement investigations against platforms suspected of failing to meet their obligations. Recent actions include investigations into major pornography providers, file-sharing services for measures to prevent the sharing of child sexual abuse material, and online forums linked to harassment and suicide promotion.


Written Question
Artificial Intelligence: National Security
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the adequacy of current risk modelling for frontier AI systems.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The AI Security Institute was established to deepen our understanding of frontier AI risks.

The Institute works with the national security community and government experts to ensure AI technology delivers on its potential for UK growth, while working with companies to assess and manage the potential risks this technology poses.

The Institute’s role is also to ensure AI risk evaluation and understanding is more scientifically rigorous and reliable.

Advancing the scientific field of AI safety will help the UK ensure it has the best evidence available to navigate the uncertain trajectories that advanced AI could take.


Written Question
Grok
Thursday 5th March 2026

Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley)

Question to the Department for Science, Innovation & Technology:

To ask the Secretary of State for Science, Innovation and Technology, what information her Department holds on the AI Security Institute's assessment of xAI's Grok.

Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

The AI Security Institute collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of possible AI impacts.

The Government does not give a running commentary on models being tested or which models we have been granted access to due to commercial and security sensitivities.