
Iqbal Mohamed

Independent - Dewsbury and Batley

6,934 (18.2%) majority - 2024 General Election

First elected: 4th July 2024



Division Voting information

During the current Parliament, Iqbal Mohamed has voted in 298 divisions, and never against the majority of their Party.

Debates during the 2024 Parliament

Speeches made during Parliamentary debates are recorded in Hansard. For ease of browsing we have grouped debates into individual, departmental and legislative categories.

Sparring Partners
Hamish Falconer (Labour)
Parliamentary Under-Secretary (Foreign, Commonwealth and Development Office)
(24 debate interactions)
David Lammy (Labour)
Deputy Prime Minister
(17 debate interactions)
Nusrat Ghani (Conservative)
(13 debate interactions)
Department Debates
Home Office
(30 debate contributions)
Department of Health and Social Care
(29 debate contributions)
Legislation Debates
Universal Credit Act 2025
(2,561 words contributed)
Football Governance Act 2025
(1,545 words contributed)
Mental Health Act 2025
(1,458 words contributed)

Dewsbury and Batley Petitions

e-Petitions are administered by Parliament and allow members of the public to express support for a particular issue.

If an e-petition reaches 10,000 signatures the Government will issue a written response.

If an e-petition reaches 100,000 signatures the petition becomes eligible for a Parliamentary debate (usually Monday 4.30pm in Westminster Hall).

Petition Debates Contributed

Fund mandatory offer of testing for Type 1 Diabetes in babies, toddlers, and young children as a routine part of medical assessments at the point of care.

We urge the UK Government to scrap plans to extend ILR from 5 to 10 years. We feel that legal migrants, especially care workers, followed the rules and built lives here under the 5-year promise. We think they support vital services and deserve fairness, not shifting rules.

The Government should keep the current 5-year route to Indefinite Leave to Remain (ILR) and restrict access to government benefits for new ILR holders.

Ban the sale of fireworks to the general public to minimise the harm caused to vulnerable people and animals. Defenceless animals can die from the distress caused by fireworks.

I believe that permitting unregulated use of fireworks is an act of wide-scale cruelty to animals.

We think each year, individuals suffer because of loud fireworks. We believe horses, dogs, cats, livestock and wildlife can be terrified by noisy fireworks and many people find them intolerable.

We call on the Government to extend free bus travel to all people over 60 years old in England outside London. We believe the current situation is unjust and we want equality for everyone over 60.

We want the Government to repeal the Online Safety Act.

Act to ensure delivery of fuel, food, aid, life-saving services etc. We think this shouldn't be dependent on, or conditional on, Israeli facilitation, as the Knesset voted against UNRWA access to Gaza. We think that if military delivery of aid, airdrops, peacekeepers etc. are needed, then all should be considered.

Support in education is a vital legal right of children with special educational needs and disabilities (SEND). We ask the government to commit to maintaining the existing law, so that vulnerable children with SEND can access education and achieve their potential.

We think the UK Government must ban all cages for laying hens as soon as possible.

We think it should also ban the use of all cages and crates for all farmed animals, including:
• farrowing crates for sows
• individual calf pens
• cages for other birds, including partridges, pheasants and quail

In modern society, we believe more consideration needs to be given to animal welfare and how livestock is treated and culled.

We believe non-stun slaughter is barbaric and doesn't fit in with our culture and modern-day values and should be banned, as some EU nations have done.


Latest EDMs signed by Iqbal Mohamed

23rd March 2026
Iqbal Mohamed signed this EDM on Thursday 26th March 2026

Support for the ceramics industry

Tabled by: Linsey Farnsworth (Labour - Amber Valley)
That this House recognises the role the UK ceramics industry plays in producing essential materials such as bricks and glass; celebrates the industry’s vital contribution to the UK’s defence and housebuilding capabilities and the enduring cultural significance and heritage of the UK’s table and giftware sectors; acknowledges the significant challenges …
11 signatures
(Most recent: 26 Mar 2026)
Signatures by party:
Labour: 9
Democratic Unionist Party: 1
Independent: 1
18th March 2026
Iqbal Mohamed signed this EDM on Thursday 26th March 2026

Mandatory human rights and environmental due diligence law

Tabled by: Martin Rhodes (Labour - Glasgow North)
That this House notes the immediate need for Mandatory Human Rights and Environmental Due Diligence and forced labour bans legislation to support human rights, consumers, businesses, and the environment; further notes that the voluntary framework introduced in the Modern Slavery Act 2015 is now outdated and eclipsed by international standards; …
10 signatures
(Most recent: 26 Mar 2026)
Signatures by party:
Green Party: 5
Labour: 3
Liberal Democrat: 1
Independent: 1

Commons initiatives

These initiatives were driven by Iqbal Mohamed, and are more likely to reflect personal policy preferences.

MPs who act as Ministers or Shadow Ministers are generally restricted from undertaking Commons initiatives other than Urgent Questions.


Iqbal Mohamed has not been granted any Urgent Questions

Iqbal Mohamed has not been granted any Adjournment Debates

Iqbal Mohamed has not introduced any legislation before Parliament

1 Bill co-sponsored by Iqbal Mohamed

Glaucoma Care (England) Bill 2024-26
Sponsor - Shockat Adam (Ind)


Latest 50 Written Questions

Written Questions can be tabled by MPs and Lords to request specific information on the work, policy and activities of a Government Department.
4th Mar 2026
To ask the Minister for the Cabinet Office, what the annual cost is of Government contracts for the licensing of (a) Zoom, (b) Microsoft, (c) Amazon and (d) Google computing platforms for the civil service.

Information on the annual cost of Government contracts for licensing across the Civil Service is not held centrally.

Chris Ward
Parliamentary Secretary (Cabinet Office)
4th Mar 2026
To ask the Minister for the Cabinet Office, what information his Department holds on whether Government Departments have sought advice from Labour Together on policy development.

This information is not held centrally.

Chris Ward
Parliamentary Secretary (Cabinet Office)
4th Mar 2026
To ask the Minister for the Cabinet Office, what information his Department held on Labour Together's report into journalists prior to the hon. Member for Makerfield’s appointment as Parliamentary Secretary.

There is an established process in place for the appointment of Ministers.

Advice, which may or may not have been provided to the Prime Minister as part of this process, is treated in confidence.

Chris Ward
Parliamentary Secretary (Cabinet Office)
27th Feb 2026
To ask the Minister for the Cabinet Office, what was the purpose of the Prime Minister's visit to Palantir head offices in Washington DC in February 2025.

I refer the Hon Member to my answer of 10th March 2026, Official Report, PQ 112839.

Nick Thomas-Symonds
Paymaster General and Minister for the Cabinet Office
27th Feb 2026
To ask the Minister for the Cabinet Office, what discussions the Prime Minister had with the then-UK Ambassador to the US on visiting Palantir head offices in Washington DC in February 2025.

I refer the Hon Member to my answer of 10th March 2026, Official Report, PQ 112839.

Nick Thomas-Symonds
Paymaster General and Minister for the Cabinet Office
20th Feb 2026
To ask the Minister for the Cabinet Office, if he will include AI loss-of-control scenarios in the next edition of the National Risk Register.

The UK is facing an ever-changing and growing set of risks. All risks in the National Risk Register are kept under review to ensure that they are the most appropriate scenarios to inform emergency preparedness and resilience activity.

The challenges posed by artificial intelligence are referenced in the 2025 National Risk Register as a chronic risk, and incorporated in the Chronic Risks Analysis, the UK's first bespoke assessment for medium to long-term challenges facing the nation.

The Department for Science, Innovation and Technology (DSIT)’s AI risk register covers the full spectrum of AI risks that could impact the UK, spanning national security, defence, the economy and society. The AI Risk Register includes AI loss-of-control scenarios. The Government is committed to protecting UK citizens against the risks that advanced AI could bring, while ensuring we can maximise AI's potential for growth and public service delivery.

Dan Jarvis
Minister of State (Cabinet Office)
30th Jan 2026
To ask the Secretary of State for Business and Trade, how many items of protective body armour his Department has supplied for use by journalists operating in Gaza since October 2023.

The Department for Business and Trade does not supply body armour, and the export of body armour for personal protection when accompanying its user (for their own use) is not subject to export control.

Nonetheless the Department has approved 12 licences for the export of protective body armour for use by news organisations in Israel or Palestine since October 2023. Of these, 9 relate to Media Open Individual Licences which allow export to a wide range of countries. Similar equipment has also been licensed for export for use by NGOs in the region.

The UK is appalled by the extremely high number of fatalities, arrests and detentions of media workers in the State of Palestine. We have called on all parties to fully uphold International Humanitarian Law and ensure protection of civilians including journalists.

Chris Bryant
Minister of State (Department for Business and Trade)
30th Jan 2026
To ask the Secretary of State for Business and Trade, what assessment he has made of the potential impact of the International Court of Justice Advisory Opinion on Israel and the Occupied Palestinian Territories on trade with Israel.

We respect the independence of the International Court of Justice and continue to consider the Court’s Advisory Opinion carefully, with the seriousness and rigour it deserves.

Chris Bryant
Minister of State (Department for Business and Trade)
21st Jul 2025
To ask the Secretary of State for Energy Security and Net Zero, what discussions he has had with Ofgem on reducing electricity and gas standing charges.

The Government knows that, for many consumers, too much of the burden of the bill is placed on standing charges. We are committed to lowering the cost of standing charges and are working constructively with Ofgem on this issue. Ofgem have conducted a broad public consultation to understand the views of consumers on this issue, receiving over 5,000 responses to their 2024 discussion paper. Since then, Ofgem have been continuing work in two areas.

Firstly, Ofgem have been working to ensure that domestic consumers can choose tariffs with low or no standing charges. Ofgem took a further step towards this goal on 24th July, announcing proposals to require suppliers to offer their customers low or no standing charge tariffs from early 2026. You can read about this here: https://www.ofgem.gov.uk/policy/standing-charges-energy-price-cap-variant-next-steps.

Secondly, Ofgem have been reviewing how ‘fixed’ costs, which tend to be funded through standing charges, should be recovered in the future energy system. This includes whether those fixed costs could be recovered in more progressive ways, and we are working closely with the regulator on this.

Miatta Fahnbulleh
Parliamentary Under-Secretary (Housing, Communities and Local Government)
18th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what progress officials in UK Research and Innovation (UKRI) have made on developing target areas of research for alternative methods to animal testing; and whether UKRI has any plans to consult civil society organisations with expertise in this area as part of this process.

On 11 November 2025 the Government published Replacing animals in science: A strategy to support the development, validation and uptake of alternative methods, which outlines the steps we will take to achieve this. The Labour Manifesto commits to partnering with scientists, industry and civil society as we work towards the phasing out of animal testing. The Government consulted civil society, industry and academia during development of the strategy and continues to do so during delivery, including through regular Home Office meetings. We also intend to publish areas of research interest later this year. UKRI has an important role in this but is not the only delivery partner.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
18th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what assessment he has made of the adequacy of the Cyber Security and Resilience Bill's incident reporting criteria for capturing novel failure modes arising from autonomous or adaptive machine learning systems in critical national infrastructure.

The Cyber Security and Resilience (Network and Information Systems) Bill makes vital updates to the Network and Information Systems (NIS) Regulations 2018 to ensure that providers of the UK’s essential services are reporting more forms of harmful cyber incident to their regulators. Where these incidents meet the threshold of a reportable incident, they will need to be reported to the relevant regulator regardless of the cause. This will include incidents caused by the failure of autonomous or adaptive machine learning systems within a regulated entity’s network and information systems.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
18th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what mechanisms exist for the AI Security Institute to receive systematic information about incidents involving autonomous or adaptive machine learning systems in critical national infrastructure as part of its intelligence capacity to research the development of AI capabilities that could contribute towards AI's ability to evade human control, as well as the propensity of models to engage in misaligned actions.

The AI Security Institute (AISI) collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of AI impacts.

This close collaboration with industry enables information-sharing to mitigate risks. AISI’s testing has identified a large number of AI model vulnerabilities that labs (such as OpenAI and Anthropic) have addressed prior to release.

AISI is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions. AISI shares its insights with government departments to help manage the risks AI could pose to critical national infrastructure.

Through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
18th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what discussions she has had with social media companies on measures to support users, including young people, to identify false and misleading political information, in the context of proposals to extend the voting age to 16.

The Government recognises the importance of supporting people, including young people, to identify false and misleading information online.

Media literacy is an important part of our approach. DSIT is improving it through a cross-government approach outlined in the Media Literacy Action Plan published 16 March. In February we launched a pilot campaign and the Kids Online Safety Hub to help parents support children’s resilience to misinformation.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
18th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential risks posed by AI-generated and manipulated content on young voters’ ability to assess political information during election campaigns.

The Government recognises that the huge opportunities offered by AI also come with risks, including potential challenges posed by AI-generated content for the online information environment.

The Online Safety Act regulates AI generated mis/disinformation. This includes the Foreign Interference Offence, requiring companies to take action against state-sponsored disinformation and state-linked interference targeted at the UK and our democratic processes.

Media literacy is also part of our wider approach, building young people’s resilience to mis- and disinformation, including AI-generated content. The government will ensure that media literacy is embedded into the new primary citizenship curriculum, from September 2028.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the electricity demand of proposed AI datacentre developments.

The Government recognises that AI-driven compute, including largescale data centres, will increase electricity demand over the coming years. DSIT works closely with DESNZ and NESO to assess how projected AI-related demand is reflected in long-term energy system planning.

The AI Energy Council, co-chaired by Secretaries of State for DSIT and DESNZ, brings together regulators, energy companies and tech firms to address the growing energy demands of AI in a sustainable and scalable way.

The Council is also exploring how clean and low carbon energy solutions - including renewables and emerging technologies such as small modular reactors - could support future AI infrastructure, consistent with the Government’s clean power ambitions.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, how many short-term, medium-term, and long-term jobs are (a) projected to be created and (b) have been created to date in relation to the Government’s proposed AI growth zones.

AI Growth Zones are expected to create more than 15,000 jobs spanning construction activity, permanent operational roles and wider supply‑chain employment. Job creation will ramp up as infrastructure works progress, with full delivery projected by the early 2030s. These figures are based on information provided by project teams and should be treated as projections rather than firm forecasts.

Ultimately, hiring decisions sit with individual companies, but AI Growth Zones are designed to create high‑skill, long‑term employment in areas with strong potential for economic growth.

The Department does not hold central data that consistently categorises jobs into short‑, medium‑ and long‑term across all AI Growth Zones, nor comprehensive data on jobs created to date, as projects remain at an early stage of delivery.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what progress has been made on the proposed AI datacentre site in Loughton, Essex, announced in January 2025.

Matters regarding specific delivery and commercial plans for any private project are for the lead private sector investor to confirm. The government engages regularly with the sector to support build out.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what due diligence her Department undertook before announcing Nscale’s proposed $2.5 billion investment in UK AI infrastructure in 2025.

Matters regarding specific delivery and commercial plans for any private project are for the lead private sector investor to confirm. The government engages regularly with the sector to support build out.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what estimate her Department has made of the potential number of jobs that will be created as a result of CoreWeave’s investment.

AI Growth Zones are expected to create more than 15,000 jobs spanning construction activity, permanent operational roles and wider supply‑chain employment. Job creation will ramp up as infrastructure works progress, with full delivery projected by the early 2030s. These figures are based on information provided by project teams and should be treated as projections rather than firm forecasts.

Ultimately, hiring decisions sit with individual companies, but AI Growth Zones are designed to create high‑skill, long‑term employment in areas with strong potential for economic growth.

The Department does not hold central data that consistently categorises jobs into short‑, medium‑ and long‑term across all AI Growth Zones, nor comprehensive data on jobs created to date, as projects remain at an early stage of delivery.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, whether her Department plans to publish a breakdown of the £100 billion in private investment it states has been attracted to the UK AI sector since 2024, and what proportion of that is spent onshore within Britain on British goods and services versus the proportion of investment spent offshore on foreign companies.

The UK AI sector attracted the third highest levels of AI-related private investment in the world. Alongside this, the UK produces the second highest number of AI startups globally. This Government remains focused on ensuring the UK remains the most attractive place in the world to build AI companies and lead on AI adoption.

The £100bn figure referenced refers to the total amount of private investment that firms have pledged to invest into the UK’s AI sector. This pledged investment demonstrates international confidence in the UK’s strong and growing AI ecosystem, supported by the Government’s strategic approach to innovation, world-leading research base, and pro-investment policy environment - including the UK’s strengths in AI talent, compute, research, and responsible innovation.

Whilst decisions on investment are a matter for private companies, the Government has been clear that it will encourage investment that will enable UK firms and people to benefit.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what mechanisms are in place to ensure transparency in the reporting of private investment linked to the Government’s AI strategy, and the proportion of that investment which is spent onshore within Britain on British goods and services versus the proportion of investment spent offshore on foreign companies.

The UK AI sector attracted the third highest levels of AI-related private investment in the world. Alongside this, the UK produces the second highest number of AI startups globally. This Government remains focused on ensuring the UK remains the most attractive place in the world to build AI companies and lead on AI adoption.

The £100bn figure referenced refers to the total amount of private investment that firms have pledged to invest into the UK’s AI sector. This pledged investment demonstrates international confidence in the UK’s strong and growing AI ecosystem, supported by the Government’s strategic approach to innovation, world-leading research base, and pro-investment policy environment - including the UK’s strengths in AI talent, compute, research, and responsible innovation.

Whilst decisions on investment are a matter for private companies, the Government has been clear that it will encourage investment that will enable UK firms and people to benefit.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what proportion of the AI-related private investment announced by her Department since 2024 has been contractually committed.

The UK AI sector attracted the third highest levels of AI-related private investment in the world. Alongside this, the UK produces the second highest number of AI startups globally. This Government remains focused on ensuring the UK remains the most attractive place in the world to build AI companies and lead on AI adoption.

The £100bn figure referenced refers to the total amount of private investment that firms have pledged to invest into the UK’s AI sector. This pledged investment demonstrates international confidence in the UK’s strong and growing AI ecosystem, supported by the Government’s strategic approach to innovation, world-leading research base, and pro-investment policy environment - including the UK’s strengths in AI talent, compute, research, and responsible innovation.

Whilst decisions on investment are a matter for private companies, the Government has been clear that it will encourage investment that will enable UK firms and people to benefit.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, whether her Department audits private investment commitments included in its press announcements on AI infrastructure.

The UK AI sector attracted the third highest levels of AI-related private investment in the world. Alongside this, the UK produces the second highest number of AI startups globally. This Government remains focused on ensuring the UK remains the most attractive place in the world to build AI companies and lead on AI adoption.

The £100bn figure referenced refers to the total amount of private investment that firms have pledged to invest into the UK’s AI sector. This pledged investment demonstrates international confidence in the UK’s strong and growing AI ecosystem, supported by the Government’s strategic approach to innovation, world-leading research base, and pro-investment policy environment - including the UK’s strengths in AI talent, compute, research, and responsible innovation.

Whilst decisions on investment are a matter for private companies, the Government has been clear that it will encourage investment that will enable UK firms and people to benefit.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, what processes her Department uses to verify the value and form of private sector AI investment commitments.

The UK AI sector attracted the third highest levels of AI-related private investment in the world. Alongside this, the UK produces the second highest number of AI startups globally. This Government remains focused on ensuring the UK remains the most attractive place in the world to build AI companies and lead on AI adoption.

The £100bn figure referenced refers to the total amount of private investment that firms have pledged to invest into the UK’s AI sector. This pledged investment demonstrates international confidence in the UK’s strong and growing AI ecosystem, supported by the Government’s strategic approach to innovation, world-leading research base, and pro-investment policy environment - including the UK’s strengths in AI talent, compute, research, and responsible innovation.

Whilst decisions on investment are a matter for private companies, the Government has been clear that it will encourage investment that will enable UK firms and people to benefit.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, how many new datacentres have been constructed as a result of investment by CoreWeave.

CoreWeave's announced investments into the UK total £2.5 billion. CoreWeave has committed £1.5 billion towards the Lanarkshire AI Growth Zone in Scotland, deploying cutting-edge semiconductors at DataVita's data centre campus in Lanarkshire. The earlier £1 billion investment covered the opening of CoreWeave's UK office as its European headquarters, the creation of job opportunities across engineering, operations, and finance, and the deployment of AI computing infrastructure across two data centres in Crawley and London Docklands.

Large AI infrastructure investments are complex and take time to deliver; as government, we want to encourage these investments by supporting them as best we can. Where important investment announcements and commitments are made, Government will continue to work closely with those companies to ensure the delivery of those investments.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
10th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, whether a formal contract has been signed with Nscale for the construction of the proposed AI datacentre in Loughton, Essex.

Matters regarding specific delivery and commercial plans for any private project are for the lead private sector investor to confirm. The government engages regularly with the sector to support build out.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
4th Mar 2026
To ask the Secretary of State for Science, Innovation and Technology, whether her Department has made an assessment of the potential merits of creating a UK Government cloud computing system independent from technology multinationals' services.

The Government recognises the importance of a secure and resilient cloud infrastructure for the delivery of digital public services. As set out in the Roadmap for Modern Digital Government (2026), the government is developing a National Cloud Strategy. As part of this, the government will assess how to strengthen the security and resilience of UK cloud infrastructure and improve the cloud ecosystem.

Ian Murray
Minister of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, whether he will make an assessment of the potential merits of legislative powers of direction over AI developers in the event of a loss-of-control incident.

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, whether his Department has undertaken scenario planning exercises for AI loss-of-control events.

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what (a) short, (b) medium (c) and long-term actions he is taking to help anticipate and mitigate the potential risks of AI loss-of-control.

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what mechanisms are in place to coordinate cross-government preparedness for AI loss-of-control scenarios.

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, whether his Department has been designated as the lead department for AI loss-of-control risks.

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, how her Department defines AI loss of control; and whether that definition is shared across Departments.

AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.

The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.

Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.

The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what protocols are in place to help ensure rapid information-sharing with AI companies during a national AI emergency.

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what emergency powers the Government holds to direct private AI developers during a national security incident involving advanced AI systems.

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what role the AI Safety Institute plays in national security preparedness for advanced AI systems.

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what role the AI Security Institute plays in national security preparedness for advanced AI systems.

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, whether he plans to publish an AI Security Strategy.

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what steps he is taking to help improve transparency on departmental responsibility for AI risk.

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential risks associated with advanced AI systems across government.

This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.

Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.

The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.

This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.

The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what assessment his Department has made of the adequacy of current risk modelling for frontier AI systems.

The AI Security Institute was established to deepen our understanding of frontier AI risks.

The Institute works with the national security community and government experts to ensure AI technology delivers on its potential for UK growth, while working with companies to assess and manage the potential risks this technology poses.

The Institute’s role is also to ensure AI risk evaluation and understanding is more scientifically rigorous and reliable.

Advancing the scientific field of AI safety will help the UK ensure it has the best evidence available to navigate the uncertain trajectories that advanced AI could take.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
20th Feb 2026
To ask the Secretary of State for Science, Innovation and Technology, what information her Department holds on the Artificial Intelligence Security Institute assessment of xAI's Grok.

The AI Security Institute collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of possible AI impacts.

The Government does not give a running commentary on which models are being tested, or on which models it has been granted access to, due to commercial and security sensitivities.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
29th Jan 2026
To ask the Secretary of State for Science, Innovation and Technology, whether the Artificial Intelligence Security Institute assessed xAI’s Grok for harms prior to launch.

The AI Security Institute regularly tests models across leading labs. While it does not provide a running commentary on which models it tests, for commercial and security reasons, it actively works with labs to improve safeguards when vulnerabilities have been identified.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
29th Jan 2026
To ask the Secretary of State for Science, Innovation and Technology, whether the Artificial Intelligence Security Institute completed a risk assessment of xAI’s Grok code before it was released to the public.

The AI Security Institute regularly tests models across leading labs. While it does not provide a running commentary on which models it tests, for commercial and security reasons, it actively works with labs to improve safeguards when vulnerabilities have been identified.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
19th Jan 2026
To ask the Secretary of State for Science, Innovation and Technology, if he will make an assessment of the potential merits of bringing forward legislative proposals to require the mandatory testing of generative AI models to ensure they cannot produce child sexual abuse material.

The government is committed to tackling the creation of this atrocious material. Creating, possessing, or distributing child sexual abuse material (CSAM), including AI Generated CSAM, is illegal. The Online Safety Act requires services to proactively identify and remove this content.

We are taking further action in the Crime and Policing Bill to criminalise CSAM image generators, and to ensure AI developers can directly test for and address vulnerabilities in their models which enable the production of CSAM.

The Government is clear: no option is off the table when it comes to protecting the online safety of users in the UK, and we will not hesitate to act where evidence suggests that further action is necessary.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
19th Jan 2026
To ask the Secretary of State for Science, Innovation and Technology, what steps she is taking to ensure AI tools are safe by design to prevent the creation of child sexual abuse material.

The government is committed to tackling the creation of this atrocious material. Creating, possessing, or distributing child sexual abuse material (CSAM), including AI Generated CSAM, is illegal. The Online Safety Act requires services to proactively identify and remove this content.

We are taking further action in the Crime and Policing Bill to criminalise CSAM image generators, and to ensure AI developers can directly test for and address vulnerabilities in their models which enable the production of CSAM.

The Government is clear: no option is off the table when it comes to protecting the online safety of users in the UK, and we will not hesitate to act where evidence suggests that further action is necessary.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
19th Jan 2026
To ask the Secretary of State for Science, Innovation and Technology, when she plans to introduce regulations ensuring generative AI cannot be misused to create extreme sexual abuse material involving children.

The government is committed to tackling the creation of this atrocious material. Creating, possessing, or distributing child sexual abuse material (CSAM), including AI Generated CSAM, is illegal. The Online Safety Act requires services to proactively identify and remove this content.

We are taking further action in the Crime and Policing Bill to criminalise CSAM image generators, and to ensure AI developers can directly test for and address vulnerabilities in their models which enable the production of CSAM.

The Government is clear: no option is off the table when it comes to protecting the online safety of users in the UK, and we will not hesitate to act where evidence suggests that further action is necessary.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
2nd Jan 2026
To ask the Secretary of State for Science, Innovation and Technology, whether the Secretary of State or Ministers in the Department has received representations from AI companies regarding the content or timing of the proposed AI Bill.

The Government engages with a wide range of stakeholders on its approach to regulating Artificial Intelligence, including AI companies, academics, and civil society groups.

Details of Ministerial meetings with external organisations are published in the quarterly transparency returns.

Kanishka Narayan
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)