First elected: 4th July 2024
Speeches made during Parliamentary debates are recorded in Hansard. For ease of browsing we have grouped debates into individual, departmental and legislative categories.
e-Petitions are administered by Parliament and allow members of the public to express support for a particular issue.
If an e-petition reaches 10,000 signatures the Government will issue a written response.
If an e-petition reaches 100,000 signatures the petition becomes eligible for a Parliamentary debate (usually Monday 4.30pm in Westminster Hall).
Funding so all infants are offered Type 1 Diabetes Testing in routine care
Gov Responded - 17 Jul 2025. Debated on - 9 Mar 2026.
Fund mandatory offer of testing for Type 1 Diabetes in babies, toddlers, and young children as a routine part of medical assessments at the point of care.
Protect Legal Migrants: do not implement the 10-Year ILR proposal
Gov Responded - 4 Dec 2025. Debated on - 2 Feb 2026.
We urge the UK Government to scrap plans to extend ILR from 5 to 10 years. We feel that legal migrants, especially care workers, followed the rules and built lives here under the 5-year promise. We think they support vital services and deserve fairness, not shifting rules.
Keep 5-Year ILR and Restrict Access to Benefits for New ILR Holders
Gov Responded - 4 Dec 2025. Debated on - 2 Feb 2026.
The Government should keep the current 5-year route to Indefinite Leave to Remain (ILR) and restrict access to government benefits for new ILR holders.
Limit the sale of fireworks to those running local council approved events only
Gov Responded - 18 Nov 2025. Debated on - 19 Jan 2026.
Ban the sale of fireworks to the general public to minimise the harm caused to vulnerable people and animals. Defenceless animals can die from the distress caused by fireworks.
I believe that permitting unregulated use of fireworks is an act of wide-scale cruelty to animals.
Reduce the maximum noise level for consumer fireworks from 120 to 90 decibels
Gov Responded - 7 Nov 2025. Debated on - 19 Jan 2026.
We think that each year individuals suffer because of loud fireworks. We believe horses, dogs, cats, livestock and wildlife can be terrified by noisy fireworks, and many people find them intolerable.
Extend free bus travel for people over 60 in England
Gov Responded - 12 Feb 2025. Debated on - 5 Jan 2026.
We call on the Government to extend free bus travel to all people over 60 years old in England outside London. We believe the current situation is unjust and we want equality for everyone over 60.
Repeal the Online Safety Act
Gov Responded - 28 Jul 2025. Debated on - 15 Dec 2025.
We want the Government to repeal the Online Safety Act.
Urgently fulfil humanitarian obligations to Gaza
Gov Responded - 8 Aug 2025. Debated on - 24 Nov 2025.
Act to ensure delivery of fuel, food, aid, life-saving services etc. We think this shouldn't be dependent on Israeli facilitation, as the Knesset voted against UNRWA access to Gaza. We think that if military delivery of aid, airdrops, peacekeepers etc. are needed, then all should be considered.
Retain legal right to assessment and support in education for children with SEND
Gov Responded - 5 Aug 2025. Debated on - 15 Sep 2025.
Support in education is a vital legal right of children with special educational needs and disabilities (SEND). We ask the government to commit to maintaining the existing law, so that vulnerable children with SEND can access education and achieve their potential.
End the use of cages and crates for all farmed animals
Gov Responded - 17 Feb 2025. Debated on - 16 Jun 2025.
We think the UK Government must ban all cages for laying hens as soon as possible.
We think it should also ban the use of all cages and crates for all farmed animals including:
• farrowing crates for sows
• individual calf pens
• cages for other birds, including partridges, pheasants and quail
Ban non-stun slaughter in the UK
Gov Responded - 10 Jan 2025. Debated on - 9 Jun 2025.
In modern society, we believe more consideration needs to be given to animal welfare and how livestock is treated and culled.
We believe non-stun slaughter is barbaric and doesn't fit in with our culture and modern-day values and should be banned, as some EU nations have done.
These initiatives were driven by Iqbal Mohamed, and are more likely to reflect personal policy preferences.
MPs who act as Ministers or Shadow Ministers are generally restricted from performing Commons initiatives other than Urgent Questions.
Iqbal Mohamed has not been granted any Urgent Questions
Iqbal Mohamed has not been granted any Adjournment Debates
Iqbal Mohamed has not introduced any legislation before Parliament
Glaucoma Care (England) Bill 2024-26
Sponsor - Shockat Adam (Ind)
Information on the annual cost of Government contracts for licensing across the Civil Service is not held centrally.
There is an established process in place for the appointment of Ministers.
Advice, which may or may not have been provided to the Prime Minister as part of this process, is treated in confidence.
I refer the Hon Member to my answer of 10th March 2026, Official Report, PQ 112839.
The UK is facing an ever-changing and growing set of risks. All risks in the National Risk Register are kept under review to ensure that they are the most appropriate scenarios to inform emergency preparedness and resilience activity.
The challenges posed by artificial intelligence are referenced in the 2025 National Risk Register as a chronic risk, and incorporated in the Chronic Risks Analysis, the UK's first bespoke assessment of medium- to long-term challenges facing the nation.
The Department for Science, Innovation and Technology (DSIT)'s AI risk register covers the full spectrum of AI risks that could impact the UK, spanning national security, defence, the economy and society. The AI Risk Register includes AI loss-of-control scenarios. The Government is committed to protecting UK citizens against the risks that advanced AI could bring, while ensuring we can maximise AI's potential for growth and public service delivery.
The Department for Business and Trade does not supply body armour, and the export of body armour for personal protection when accompanying its user (for their own use) is not subject to export control.
Nonetheless the Department has approved 12 licences for the export of protective body armour for use by news organisations in Israel or Palestine since October 2023. Of these, 9 relate to Media Open Individual Licences which allow export to a wide range of countries. Similar equipment has also been licensed for export for use by NGOs in the region.
The UK is appalled by the extremely high number of fatalities, arrests and detentions of media workers in the State of Palestine. We have called on all parties to fully uphold International Humanitarian Law and ensure protection of civilians including journalists.
We respect the independence of the International Court of Justice and continue to consider the Court’s Advisory Opinion carefully, with the seriousness and rigour it deserves.
The Government knows that, for many consumers, too much of the burden of the bill is placed on standing charges. We are committed to lowering the cost of standing charges and are working constructively with Ofgem on this issue. Ofgem have conducted a broad public consultation to understand the views of consumers on this issue, receiving over 5,000 responses to their 2024 discussion paper. Since then, Ofgem have been continuing work in two areas.
Firstly, Ofgem have been working to ensure that domestic consumers can choose tariffs with low or no standing charges. Ofgem took a further step towards this goal on 24th July, announcing proposals to require suppliers to offer their customers low or no standing charge tariffs from early 2026. You can read about this here: https://www.ofgem.gov.uk/policy/standing-charges-energy-price-cap-variant-next-steps.
Secondly, Ofgem have been reviewing how ‘fixed’ costs, which tend to be funded through standing charges, should be recovered in the future energy system. This includes whether those fixed costs could be recovered in more progressive ways, and we are working closely with the regulator on this.
On 11 November 2025 the Government published Replacing animals in science: A strategy to support the development, validation and uptake of alternative methods, which outlines the steps we will take to achieve this. The Labour Manifesto commits to partnering with scientists, industry and civil society as we work towards the phasing out of animal testing. The Government consulted civil society, industry and academia during development of the strategy and continues to do so during delivery, including through regular Home Office meetings. We also intend to publish areas of research interest later this year. UKRI has an important role in this but is not the only delivery partner.
The Cyber Security and Resilience (Network and Information Systems) Bill makes vital updates to the Network and Information Systems (NIS) Regulations 2018 to ensure that providers of the UK’s essential services are reporting more forms of harmful cyber incident to their regulators. Where these incidents meet the threshold of a reportable incident, they will need to be reported to the relevant regulator regardless of the cause. This will include incidents caused by the failure of autonomous or adaptive machine learning systems within a regulated entity’s network and information systems.
The AI Security Institute (AISI) collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of AI impacts.
This close collaboration with industry enables information-sharing to mitigate risks. AISI’s testing has identified a large number of AI model vulnerabilities that labs (such as OpenAI and Anthropic) have addressed prior to release.
AISI is researching the development of AI capabilities that could contribute towards AI's ability to evade human control, as well as the propensity of models to engage in misaligned actions. AISI shares its insights with government departments to help manage the risks AI could pose to critical national infrastructure.
Through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.
The Government recognises the importance of supporting people, including young people, to identify false and misleading information online.
Media literacy is an important part of our approach. DSIT is improving it through a cross-government approach outlined in the Media Literacy Action Plan published on 16 March. In February we launched a pilot campaign and the Kids Online Safety Hub to help parents support children's resilience to misinformation.
The Government recognises that the huge opportunities offered by AI also come with risks, including potential challenges posed by AI-generated content for the online information environment.
The Online Safety Act regulates AI-generated mis- and disinformation. This includes the Foreign Interference Offence, which requires companies to take action against state-sponsored disinformation and state-linked interference targeted at the UK and our democratic processes.
Media literacy is also part of our wider approach, building young people’s resilience to mis- and disinformation, including AI-generated content. The government will ensure that media literacy is embedded into the new primary citizenship curriculum, from September 2028.
The Government recognises that AI-driven compute, including large-scale data centres, will increase electricity demand over the coming years. DSIT works closely with DESNZ and NESO to assess how projected AI-related demand is reflected in long-term energy system planning.
The AI Energy Council, co-chaired by Secretaries of State for DSIT and DESNZ, brings together regulators, energy companies and tech firms to address the growing energy demands of AI in a sustainable and scalable way.
The Council is also exploring how clean and low carbon energy solutions - including renewables and emerging technologies such as small modular reactors - could support future AI infrastructure, consistent with the Government’s clean power ambitions.
AI Growth Zones are expected to create more than 15,000 jobs spanning construction activity, permanent operational roles and wider supply‑chain employment. Job creation will ramp up as infrastructure works progress, with full delivery projected by the early 2030s. These figures are based on information provided by project teams and should be treated as projections rather than firm forecasts.
Ultimately, hiring decisions sit with individual companies, but AI Growth Zones are designed to create high‑skill, long‑term employment in areas with strong potential for economic growth.
The Department does not hold central data that consistently categorises jobs into short‑, medium‑ and long‑term across all AI Growth Zones, nor comprehensive data on jobs created to date, as projects remain at an early stage of delivery.
Matters regarding specific delivery and commercial plans for any private project are for the lead private sector investor to confirm. The government engages regularly with the sector to support build out.
The UK AI sector attracted the third-highest level of AI-related private investment in the world. Alongside this, the UK produces the second-highest number of AI startups globally. This Government remains focused on ensuring the UK remains the most attractive place in the world to build AI companies and lead on AI adoption.
The £100bn figure refers to the total amount of private investment that firms have pledged to invest in the UK's AI sector. This pledged investment demonstrates international confidence in the UK's strong and growing AI ecosystem, supported by the Government's strategic approach to innovation, world-leading research base, and pro-investment policy environment, including the UK's strengths in AI talent, compute, research, and responsible innovation.
Whilst decisions on investment are a matter for private companies, the Government has been clear that it will encourage investment that will enable UK firms and people to benefit.
CoreWeave's announced investments into the UK total £2.5 billion. CoreWeave has committed £1.5 billion towards the Lanarkshire AI Growth Zone in Scotland, deploying cutting-edge semiconductors at DataVita's data centre campus in Lanarkshire. The earlier £1 billion investment covered the opening of CoreWeave's UK office as its European headquarters, the creation of job opportunities across engineering, operations, and finance, and the deployment of AI computing infrastructure across two data centres in Crawley and London Docklands.
Large AI infrastructure investments are complex and take time to deliver; as government, we want to encourage these investments by supporting them as best we can. Where important investment announcements and commitments are made, Government will continue to work closely with those companies to ensure the delivery of those investments.
Matters regarding specific delivery and commercial plans for any private project are for the lead private sector investor to confirm. The government engages regularly with the sector to support build out.
The Government recognises the importance of a secure and resilient cloud infrastructure for the delivery of digital public services. As set out in the Roadmap for Modern Digital Government (2026), the government is developing a National Cloud Strategy. As part of this, the government will assess how to strengthen the security and resilience of UK cloud infrastructure and improve the cloud ecosystem.
AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.
The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI's ability to evade human control, as well as the propensity of models to engage in misaligned actions.
Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.
The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.
This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.
Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.
The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.
This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.
The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.
The AI Security Institute was established to deepen our understanding of frontier AI risks.
The Institute works with the national security community and government experts to ensure AI technology delivers on its potential for UK growth, while working with companies to assess and manage the potential risks this technology poses.
The Institute’s role is also to ensure AI risk evaluation and understanding are more scientifically rigorous and reliable.
Advancing the scientific field of AI safety will help the UK ensure it has the best evidence available to navigate the uncertain trajectories that advanced AI could take.
The AI Security Institute collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of possible AI impacts.
The Government does not give a running commentary on models being tested or which models we have been granted access to due to commercial and security sensitivities.
The AI Security Institute regularly tests models across leading labs. While it does not provide a running commentary on which models it tests, for commercial and security reasons, it actively works with labs to improve safeguards when vulnerabilities are identified.
The government is committed to tackling the creation of this atrocious material. Creating, possessing, or distributing child sexual abuse material (CSAM), including AI-generated CSAM, is illegal. The Online Safety Act requires services to proactively identify and remove this content.
We are taking further action in the Crime and Policing Bill to criminalise CSAM image generators, and to ensure AI developers can directly test for and address vulnerabilities in their models which enable the production of CSAM.
The Government is clear: no option is off the table when it comes to protecting the online safety of users in the UK, and we will not hesitate to act where evidence suggests that further action is necessary.
The Government engages with a wide range of stakeholders on its approach to regulating Artificial Intelligence, including AI companies, academics, and civil society groups.
Details of Ministerial meetings with external organisations are published in the quarterly transparency returns.