Information between 25th February 2026 - 7th March 2026
| Written Answers |
|---|
|
Animal Products: Labelling
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 26th February 2026 Question to the Department for Environment, Food and Rural Affairs: To ask the Secretary of State for Environment, Food and Rural Affairs, what recent discussions she has had with the Food Standards Agency on the 2024 Fairer Food Labelling consultation; and if she will implement mandatory method-of-production labelling on animal food products in England. Answered by Angela Eagle - Minister of State (Department for Environment, Food and Rural Affairs) As set out in the Government’s animal welfare strategy, we are committed to ensuring that consumers have access to clear information on how their food was produced. To support this, the Government will continue working with relevant stakeholders, including the farming and food industry, scientists and NGOs to explore how improved animal welfare food labelling could provide greater consumer transparency, support farmers and promote better animal welfare. The Government will set out next steps in due course. |
|
Food: Labelling
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 26th February 2026 Question to the Department for Environment, Food and Rural Affairs: To ask the Secretary of State for Environment, Food and Rural Affairs, what steps she is taking to help ensure that retailers and supermarkets display clear and consistent animal welfare information on packaging and labels to help consumers to make informed choices. Answered by Angela Eagle - Minister of State (Department for Environment, Food and Rural Affairs) As set out in the Government’s animal welfare strategy, we are committed to ensuring that consumers have access to clear information on how their food was produced. To support this, the Government will continue working with relevant stakeholders, including the farming and food industry, scientists and NGOs to explore how improved animal welfare food labelling could provide greater consumer transparency, support farmers and promote better animal welfare. The Government will set out next steps in due course. |
|
Animal Welfare
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 26th February 2026 Question to the Department for Environment, Food and Rural Affairs: To ask the Secretary of State for Environment, Food and Rural Affairs, if she will take steps with Cabinet colleagues to secure a debate on animal welfare and progress on the animal welfare strategy. Answered by Angela Eagle - Minister of State (Department for Environment, Food and Rural Affairs) A Westminster Hall Debate on the Animal Welfare Strategy was held on 21 January 2026. Parliament will be updated in the usual way as the Strategy progresses. |
|
Animal Welfare
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 26th February 2026 Question to the Department for Environment, Food and Rural Affairs: To ask the Secretary of State for Environment, Food and Rural Affairs, if she will publish clear timelines for the delivery of all commitments in the Animal Welfare Strategy. Answered by Angela Eagle - Minister of State (Department for Environment, Food and Rural Affairs) The Animal Welfare Strategy sets out the priority issues the Government will address, focusing on the changes and improvements Defra aims to achieve by 2030. Policies will be delivered throughout this time.
Defra has already launched consultations on phasing out cages for laying hens and improving lamb welfare which run until 9 March. Defra has also confirmed that a public consultation seeking views on how to deliver a full ban on trail hunting will be held this year. Other commitments in the strategy will be taken forward in a phased approach to keep up momentum on improving the lives of millions of animals. |
|
Chemicals: Safety
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Tuesday 3rd March 2026 Question to the Department of Health and Social Care: To ask the Secretary of State for Health and Social Care, what steps he is taking to provide accessible scientific evidence to help ensure public confidence in food safety and environmental policy when new chemical additives are introduced. Answered by Stephen Kinnock - Minister of State (Department of Health and Social Care) All food and feed additives permitted for use in the United Kingdom must undergo a comprehensive, evidence‑based safety assessment before approval. This process evaluates potential risks and ensures additives can only be used in specified food categories, at controlled levels, and with any necessary labelling requirements.
The Food Standards Agency (FSA) is responsible for assessing and authorising new additives and for reviewing changes to existing approvals. To support transparency and public confidence, the FSA publishes its scientific risk assessments and consults publicly on proposed authorisations so that stakeholders and consumers can provide their views before decisions are made. |
|
Jagtar Singh Johal
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Wednesday 4th March 2026 Question to the Foreign, Commonwealth & Development Office: To ask the Secretary of State for Foreign, Commonwealth and Development Affairs, when she is scheduled to next meet representatives from the Sikh Federation to discuss the detention of Jagtar Singh Johal. Answered by Seema Malhotra - Parliamentary Under-Secretary (Foreign, Commonwealth and Development Office) I refer the hon. Member to the answer he received on 9 February in response to Question 108102. |
|
Palestine Action
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Home Office: To ask the Secretary of State for the Home Department, whether she will publish the material her Department disclosed to the courts and the Independent Reviewer of Terrorism Legislation on Palestine Action. Answered by Dan Jarvis - Minister of State (Cabinet Office) The material relied upon by the Court in its decision making is referenced throughout the judgment, which is publicly available: R (Ammori) v SSHD OPEN Judgment (final). The open material referred to during the proceedings can be requested from the court in accordance with the Civil Procedure Rules on Court documents: PART 5 – COURT DOCUMENTS – Civil Procedure Rules – Justice UK. Any material submitted in closed proceedings is protected by the Justice and Security Act 2013 and will not be disclosed for reasons of national security. The Independent Reviewer of Terrorism Legislation has access to secret and sensitive national security information in order to carry out his role. He routinely publishes his findings in reports that are available on his website: https://terrorismlegislationreviewer.independent.gov.uk/ |
|
Palestine Action
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Home Office: To ask the Secretary of State for the Home Department, with reference to her comments to the BBC on 11 August 2025, whether her Department presented information to the courts during legal proceedings relating to the proscription of Palestine Action on people who are objecting to that proscription because they don't know the full nature of the organisation as a result of court restrictions on reporting while serious prosecutions are under way; and if she will publish this information. Answered by Dan Jarvis - Minister of State (Cabinet Office) The material relied upon by the Court in its decision making is referenced throughout the judgment which is publicly available here: R (Ammori) v SSHD OPEN Judgment (final) The open material referred to during the proceedings can be requested from the court in accordance with the Civil Procedure Rules on Court documents see: PART 5 – COURT DOCUMENTS – Civil Procedure Rules – Justice UK. Any material submitted in closed proceedings is protected by the Justice and Security Act 2013 and will not be disclosed for reasons of national security. It would not be appropriate to comment further during ongoing legal proceedings. The Independent Reviewer of Terrorism Legislation has access to secret and sensitive national security information in order to carry out his role. He routinely publishes his findings in reports that are available on his website: https://terrorismlegislationreviewer.independent.gov.uk/ |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what assessment his Department has made of the adequacy of current risk modelling for frontier AI systems. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) The AI Security Institute was established to deepen our understanding of frontier AI risks. The Institute works with the national security community and government experts to ensure AI technology delivers on its potential for UK growth, while working with companies to assess and manage the potential risks this technology poses. The Institute’s role is also to ensure AI risk evaluation and understanding is more scientifically rigorous and reliable. Advancing the scientific field of AI safety will help the UK ensure it has the best evidence available to navigate the uncertain trajectories that advanced AI could take. |
|
Grok
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what information her Department holds on the Artificial Intelligence Security Institute assessment of xAI's Grok. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) The AI Security Institute collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of possible AI impacts. The Government does not give a running commentary on models being tested or which models we have been granted access to due to commercial and security sensitivities. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential risks associated with advanced AI systems across government. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security. Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats. The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities. This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what steps he is taking to help improve transparency on departmental responsibility for AI risk. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security. Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats. The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities. This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, whether he plans to publish an AI Security Strategy. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security. Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats. The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities. This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what emergency powers the Government holds to direct private AI developers during a national security incident involving advanced AI systems. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security. Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats. The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities. This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what protocols are in place to help ensure rapid information-sharing with AI companies during a national AI emergency. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security. Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats. The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities. This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what role the AI Safety Institute plays in national security preparedness for advanced AI systems. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security. Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats. The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities. This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what role the AI Security Institute plays in national security preparedness for advanced AI systems. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security. Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats. The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities. This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities. The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, whether he will make an assessment of the potential merits of legislative powers of direction over AI developers in the event of a loss-of-control incident. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts. The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions. Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours. The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what (a) short, (b) medium and (c) long-term actions he is taking to help anticipate and mitigate the potential risks of AI loss-of-control. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts. The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions. Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours. The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, whether his Department has undertaken scenario planning exercises for AI loss-of-control events. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts. The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions. Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours. The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, what mechanisms are in place to coordinate cross-government preparedness for AI loss-of-control scenarios. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts. The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions. Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours. The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, how her Department defines AI loss of control; and whether that definition is shared across Departments. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts. The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions. Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours. The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist. |
|
Artificial Intelligence: National Security
Asked by: Iqbal Mohamed (Independent - Dewsbury and Batley) Thursday 5th March 2026 Question to the Department for Science, Innovation & Technology: To ask the Secretary of State for Science, Innovation and Technology, whether his Department has been designated as the lead department for AI loss-of-control risks. Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology) AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts. The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions. Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours. The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist. |
| Early Day Motions Signed |
|---|
|
Thursday 12th March Iqbal Mohamed signed this EDM on Monday 16th March 2026 Closure of Al-Aqsa Mosque during Ramadan 30 signatures (Most recent: 20 Mar 2026) Tabled by: Imran Hussain (Labour - Bradford East) That this House condemns the closure of Al-Aqsa Sanctuary in Jerusalem by Israeli authorities during the Muslim holy month of Ramadan; notes that this action infringes Palestinians’ right to freedom of worship, violates Israel’s obligations under international humanitarian law and UN resolutions, and breaches the longstanding status quo governing the … |
|
Monday 9th March Iqbal Mohamed signed this EDM on Thursday 12th March 2026 Fipronil and Imidacloprid Pesticides 16 signatures (Most recent: 16 Mar 2026) Tabled by: Rachael Maskell (Labour (Co-op) - York Central) That this House expresses grave concern that fipronil and imidacloprid, pesticides banned for outdoor agricultural use, are still being widely used in domestic veterinary treatments for ticks and fleas in cats and dogs; recognises that the widespread use of these substances contributes significantly to freshwater pollution; highlights that these chemicals … |
|
Thursday 12th February Iqbal Mohamed signed this EDM on Thursday 12th March 2026 Royal Mail postal delivery services 19 signatures (Most recent: 18 Mar 2026) Tabled by: Sorcha Eastwood (Alliance - Lagan Valley) That this House notes ongoing failures in Royal Mail’s delivery performance, including reports of post being batched over one to two weeks rather than delivered daily, in breach of statutory delivery targets; recognises the particular impact on Northern Ireland, rural and remote communities, and those reliant on timely post for … |
|
Thursday 5th March Iqbal Mohamed signed this EDM on Tuesday 10th March 2026 King's Guard's ceremonial bearskin caps 21 signatures (Most recent: 10 Mar 2026) Tabled by: Rachael Maskell (Labour (Co-op) - York Central) That this House commends this Government's commitment to advancing animal welfare, as demonstrated by key reforms including a banning of trail hunting, a banning of boiling live crustaceans, recognising their capacity for pain, and ending the cruel practice of puppy farming; acknowledges the dedicated efforts of People for the Ethical … |
|
Wednesday 11th February Iqbal Mohamed signed this EDM on Friday 27th February 2026 Government contract with Palantir Technologies 33 signatures (Most recent: 17 Mar 2026) Tabled by: Apsana Begum (Labour - Poplar and Limehouse) That this House notes that the Ministry of Defence signed a contract with the US firm Palantir in December 2025 worth £240,000,000, by direct award and without tender; further notes that whilst the decision may be justified under the Procurement Act 2023, there is significant public interest in how this … |