Driving innovation that will deliver improved public services, create new better-paid jobs and grow the economy.
Oral Answers to Questions is a regularly scheduled appearance where the Secretary of State and junior ministers answer questions from backbench MPs at the Dispatch Box.
Other Commons Chamber appearances include Westminster Hall debates, held in response to backbench MPs or e-petitions asking a Minister to address a detailed issue.
Written Statements are made when a current event is not sufficiently significant to require an Oral Statement, but the House is required to be informed.
The Department for Science, Innovation & Technology does not currently have any Bills before Parliament.
A bill to make provision about access to customer data and business data; to make provision about services consisting of the use of information to ascertain and verify facts about individuals; to make provision about the recording and sharing, and keeping of registers, of information relating to apparatus in streets; to make provision about the keeping and maintenance of registers of births and deaths; to make provision for the regulation of the processing of information relating to identified or identifiable living individuals; to make provision about privacy and electronic communications; to establish the Information Commission; to make provision about information standards for health and social care; to make provision about the grant of smart meter communication licences; to make provision about the disclosure of information to improve public service delivery; to make provision about the retention of information by providers of internet services in connection with investigations into child deaths; to make provision about providing information for purposes related to the carrying out of independent research into online safety matters; to make provision about the retention of biometric data; to make provision about services for the provision of electronic signatures, electronic seals and other trust services; to make provision about the creation and solicitation of purported intimate images and for connected purposes.
This Bill received Royal Assent on 19 June 2025, becoming law.
e-Petitions are administered by Parliament and allow members of the public to express support for a particular issue.
If an e-petition reaches 10,000 signatures the Government will issue a written response.
If an e-petition reaches 100,000 signatures the petition becomes eligible for a Parliamentary debate (usually Monday 4.30pm in Westminster Hall).
We want the Government to repeal the Online Safety Act.
Introduce 16 as the minimum age for children to have social media
Government responded: 17 Dec 2024. Debated: 24 Feb 2025. We believe social media companies should be banned from letting children under 16 create social media accounts.
The Online Safety Act (OSA) already places robust duties on online platforms to tackle illegal and harmful pornographic content. Platforms are required to prevent users from encountering such content, and services that host or allow access to pornography must have effective measures, such as age verification, to protect children. In 2025, the government announced that strangulation will be made a priority offence under the OSA, requiring platforms to take swift action against this content.
Following the Independent Pornography Review, a cross-government joint team has been established to inform the government’s approach to pornography policy.
The AI Security Institute collaborates with leading AI developers to measure the capabilities of advanced AI and recommend risk mitigations, to ensure we stay ahead of possible AI impacts.
The Government does not give a running commentary on models being tested or which models we have been granted access to due to commercial and security sensitivities.
The AI Security Institute was established to deepen our understanding of frontier AI risks.
The Institute works with the national security community and government experts to ensure AI technology delivers on its potential for UK growth, while working with companies to assess and manage the potential risks this technology poses.
The Institute’s role is also to ensure AI risk evaluation and understanding is more scientifically rigorous and reliable.
Advancing the scientific field of AI safety will help the UK ensure it has the best evidence available to navigate the uncertain trajectories that advanced AI could take.
The Online Safety Act (the Act) requires services, including social media, to protect children from illegal, harmful and age-inappropriate content.
Both the Act’s illegal duties and child safety duties are now in force, with Ofcom having substantial enforcement powers including the ability to issue fines of up to £18 million or 10% of platforms’ qualifying worldwide revenue. Since the duties came into force, Ofcom has opened several enforcement investigations against platforms suspected of failing to meet their obligations. Recent actions include investigations into major pornography providers, file-sharing services for measures to prevent the sharing of child sexual abuse material, and online forums linked to harassment and suicide promotion.
This government is taking a long‑term, science‑led approach to understanding and preparing for emerging AI risks, including the possibility of very rapid progress with transformative impacts on society and national security.
Through close collaboration with industry and international allies, the government has deepened its understanding of risks, improved AI model security, and built UK resilience against threats.
The Government’s National Security Strategy sets out our intent to build the UK national security agenda for AI and other frontier technologies. This agenda will support the development of the UK's AI-enabled defence and security capabilities.
This is complemented by the work of the AI Security Institute (AISI), which focuses on emerging AI risks with serious security implications, including cyber misuse, chemical or biological risks, and autonomous AI capabilities.
The Government will remain vigilant and prepare for new AI risks, including rapid advancements that could affect society and national security.
The consultation, published 2nd March, on children’s use of technology, considers a range of further measures to give children a good life online, ensuring they have the childhood they deserve and are prepared for the future.
This includes exploring the option of banning social media for children below a certain age, as well as restricting access to risky functionalities and “addictive” features – including content recommendation algorithms.
AI models have the potential to pose novel risks by behaving in unintended or unforeseen ways. The possibility that this behaviour could lead to loss of control over advanced AI systems is taken seriously by many experts.
The AI Security Institute (AISI) is researching the development of AI capabilities that could contribute towards AI’s ability to evade human control, as well as the propensity of models to engage in misaligned actions.
Furthermore, through the Alignment Project – a funding consortium distributing up to £27m for research projects – AISI is supporting further foundational research into methods to develop AI systems that operate according to our goals, without unintended or harmful behaviours.
The Government has been clear that we will legislate on AI where needed but we will do so on the basis of evidence where any serious gaps exist.
The Online Safety Act establishes Ofcom as the independent regulator for online safety with powers to sanction in-scope services who do not comply with their duties, including user redress, child safety and age assurance.
Duties on content reporting and complaints procedures require services to enable users to report illegal content, report any breach of a service’s own terms and conditions, and require a service to take appropriate action in response to such complaints.
Ofcom has Government’s full backing to use all the powers given to it by Parliament in the exercise of its regulatory responsibilities.
The Department is working closely with colleagues across Government to strengthen the coordination, development, validation and uptake of non‑animal methods. The Replacing Animals in Science strategy commits to establishing governance structures to oversee progress and delivery of the strategy’s actions, including a set of key performance indicators (KPIs) to assess and monitor its delivery. The first cross‑departmental ministerial meeting on the delivery of the strategy is scheduled to take place next month and will provide a formal mechanism to drive progress and ensure alignment across policy areas.
The Online Safety Act’s illegal content safety duties cover illegal extreme pornographic content, ensuring companies put in place safety measures which mitigate and manage risks. Providers must implement safety-by-design measures to mitigate illegal activity, reduce the risk of users carrying out illegal activity, and take down illegal content when it appears.
I refer the noble Lord to the answer given on 11 February 2026 to Question UIN 109609.
HM Treasury works closely with the UK financial regulators to monitor evolving risks from new technologies, and ensure that the opportunities AI presents can be realised in a safe and responsible way.
The government is engaging closely with the FCA on AI, and we support the approach the FCA is taking to encourage the safe adoption of AI in financial services. This includes several initiatives to support the safe adoption of AI, including the supercharged sandbox which enables firms to safely experiment with AI innovations.
The financial services sector has also been developing AI tools which can be used to detect and prevent fraud. These include HSBC’s pilot with Google to use AI to support financial crime detection, and Mastercard’s use of AI to identify and flag APP scams.
The Department for Science, Innovation and Technology (DSIT) has committed £58.5 billion of investment in R&D over the next 4 years. Of this, UKRI will deliver £38.6 billion towards research and innovation, with £14.5 billion allocated to curiosity-driven research in recognition of its fundamental importance for our future. This is an increase in funding.
The Science and Technology Facilities Council (STFC) budget has not been cut and it increases across the spending review period. STFC within UKRI is currently working with the sector to model different spending scenarios for its portfolio in particle physics, astronomy and nuclear physics (PPAN). No final spending decisions relating to STFC’s PPAN portfolio have been made. The impacts of different modelled scenarios will be considered alongside feedback from the sector when taking final decisions.
More generally, DSIT has asked UKRI to ensure that its allocation decisions are informed by meaningful consultation with the scientific research community and a robust assessment of potential consequences for the UK’s scientific capability and international standing.
The Government has announced £75m of funding to accelerate alternatives and innovation, with new capabilities being developed across the UK. This funding will help bring forward advanced testing methods that can save lives and support a faster, science‑led route to regulation. £60 million of this is ring‑fenced, multi‑year funding secured through the 2025 Spending Review to provide long‑term stability for strategic programmes. The Department remains fully committed to delivering the actions set out in the Replacing Animals in Science strategy through the funding secured in the Review.
Transparent targets, milestones and Key Performance Indicators (KPIs) for the delivery of the Replacing Animals in Science strategy will be published later in 2026. It is not yet possible to replace all animal use due to the complexity of biological systems and regulatory requirements for their use. Any work to phase out animal testing must be science-led and in lockstep with partners, so we will not be setting arbitrary timelines for overall reduction, but we will publish timelines for specific actions.
I refer the hon. Member to the answer I gave on 1 December 2025 to Question UIN 94115.
The Labour Manifesto commits to “partner with scientists, industry, and civil society as we work towards the phasing out of animal testing”. The strategy was developed with regulators, industry, academia and civil society, and this engagement will continue during implementation of the strategy. Regulators will be represented within new governance structures as part of the implementation process, and we will work closely with experts across these sectors to ensure the strategy remains science‑led, up to date and focused on driving the development, validation and uptake of advanced non‑animal methods.
Officials from Building Digital UK (BDUK) are currently in discussions with broadband suppliers in Oxfordshire on potential voucher project opportunities. Suppliers are aware of the deadline for project applications and are developing project proposals taking this into account.
We have now launched our consultation on children’s use of technology and social media. This is a short, swift consultation which allows the different voices within the debate to be heard. The consultation will close on 26 May. The government is planning to respond in the summer.
The consultation is backed by a national conversation about the impact of technology on children’s wellbeing. Ministers have already been hearing the views of parents, children and civil society through nationwide events.
The UK is committed to a proportionate AI regulatory approach which is grounded in science and supports growth and innovation.
The European Council has published its proposal for a decision to apply the EU AI Act to a limited extent in Northern Ireland under Article 13(4) of the Windsor Framework. The Act would only apply following an agreement at a Withdrawal Agreement Joint Committee, which will be subject to the mechanisms in Schedule 6B to the Northern Ireland Act 1998.
The UK Government is assessing the proposal and will continue to engage closely with the EU on it. Joint statements on previous Withdrawal Agreement Joint Committee meetings can be found on gov.uk.
The government is committed to delivering gigabit broadband to 99% of UK premises by 2032.
The Secretary of State for Science, Innovation and Technology held jointly with the Chancellor a roundtable with telecoms industry stakeholders on 11 February where investment in the sector was discussed.
DSIT also regularly engages with a wide range of telecoms stakeholders to support investment in the sector and the delivery of gigabit‑capable broadband.
The government is continuing to work in partnership with industry to rollout gigabit coverage and maintain a stable pro‑competition regulatory environment that encourages private investment.
This is complemented by Project Gigabit, where we are delivering gigabit-capable connections to premises not included in suppliers’ commercial delivery plans. As of the end of December 2025, over 1.3 million homes and businesses in rural areas across the UK had been upgraded to gigabit-capable broadband through government-funded programmes.
More than one million further premises, which includes rural shops, have been included in over £2.4 billion worth of Project Gigabit contracts. This includes approximately 910 homes and businesses in Newbury constituency.
In Ofcom’s Connected Nations Report 2025 it was reported that the Newbury constituency has almost 100% geographic 4G coverage from at least one mobile network operator, and 96% coverage from all operators. Businesses should have access to the high-quality connectivity that allows them to thrive, and it is the government’s ambition that all populated areas should have access to higher-quality standalone 5G by 2030. Each of the network operators has set out delivery and investment plans that align with this government’s ambition.
The ‘Freedom from Violence and Abuse: a cross-government strategy to build a safer society for women and girls’ committed to creating a joint team to address the issues in Baroness Bertin’s Review. The team is now established and comprises the Home Office, the Department for Science, Innovation and Technology, the Ministry of Justice, and the Department for Culture, Media and Sport. The team is carefully examining the evidence to inform the government’s approach to pornography policy.
On 10 February, the Government launched the Mobile Market Review (MMR) call for evidence. This is a key milestone in our joint mission with industry to deliver high-quality mobile connectivity for the benefit of people, business and the public sector across the UK. The call for evidence will remain open for 10 weeks and close on 21 April. We will provide an update on next steps later in 2026.
The Department has noted Ofcom’s analysis of telecoms investment in its Connected Nations UK Report 2025. Ofcom estimated that telecoms operators collectively invested £7.8 billion in 2020, £8.6 billion in 2021, £8.7 billion in 2022, £10.2 billion in 2023 and £9.2 billion in 2024. Ofcom has adjusted all figures for inflation and presented them in 2024 prices.
The Medical Research Council (MRC), which is part of UK Research and Innovation (UKRI), funds research into vision loss, including age-related macular degeneration (AMD), through a range of schemes. This research spans discovery science and fundamental mechanistic understanding, through to new approaches for diagnosis and intervention. For example, the MRC has committed over £4 million to King’s College London for a clinical trial to establish the safety and efficacy of photoreceptor transplantation in patients with retinal degeneration and AMD as a potential treatment of the condition.
The Advanced Research and Invention Agency (ARIA) has maximum autonomy over its research and project choice, and allocation of funding to research projects and different sectors will be decided by those with relevant technical expertise. ARIA has made nearly £530 million in funding available across its first 10 programmes, which includes biomedical research.
The Government is committed to a competitive mobile market where consumers and businesses have access to high-quality, secure and affordable connectivity. Strong competition in the sector has helped deliver wide consumer choice and some of the lowest mobile prices internationally, even as data use has grown year on year.
The Government launched a Mobile Market Review call for evidence on 10 February which will remain open for 10 weeks. This call for evidence assesses how the market is changing and seeks to understand what more can be done to support investment, innovation, and competition across the mobile sector.
More broadly, it is the responsibility of Ofcom and the Competition and Markets Authority to promote competition and protect consumers in telecoms markets. Where they identify anti‑competitive behaviour, they have powers to investigate and implement measures to promote competition.
We do not hold any contracts with Palantir. In terms of identifying information, DSIT applies the UK GDPR and data protection laws in every contract it enters, and a dedicated data protection team reviews these specific clauses before contract signature.
The Technology Secretary has repeatedly been clear that Government fully supports Ofcom using the full force of the powers that it has been given by Parliament.
Ofcom, the independent regulator for online safety, publishes details of its enforcement action on its website. In total, it has opened investigations into 94 sites since compliance became enforceable, including issuing 8 fines to 5 providers (totalling £2m), of which one has been paid, one is being appealed and the remainder are within the deadline for payment.
The Government recognises the growing strength of UK start‑ups developing AI safety, governance and assurance technologies. The Spending Review allocated up to £500 million to the Sovereign AI Unit to provide targeted support to enable high-potential startups and scaleups to become national AI champions.
As highlighted in the AI Opportunities Action Plan: One Year On publication, we have taken steps to build the AI assurance ecosystem that underpins safe and responsible use of AI. This includes establishing a new Centre for AI Measurement at the National Physical Laboratory, designed to accelerate the development of secure, transparent and trustworthy AI.
The AI Growth Lab will also act as a cross‑economy AI sandbox, encouraging innovation by enabling responsible AI products and services to be deployed under close supervision in live markets.
We have launched a consultation exploring children’s use of technology. It seeks to understand how children can be better protected online, and how wellbeing can improve and enrich children’s lives. It will gather views on proposals including banning social media for under‑16s and restricting ‘addictive’ online features.
The consultation is accessible to all: we hope to hear from parents, children’s organisations, bereaved families and industry, and from children themselves. We have also developed a child- and parent-friendly version of the consultation and are progressing a national conversation where we will engage with these groups.
Compute is a critical enabler for AI development and scientific research. This Government is committed to scaling this essential infrastructure to accelerate innovation, drive economic growth and better support our public services.
We are investing up to £2 billion in public compute through to 2030. This will deliver a new national supercomputer in Edinburgh and expand our AI Research Resource twentyfold by 2030, providing free access to compute for researchers, SMEs and the public sector.
The UK has also established five AI Growth Zones, including the Lanarkshire AI Growth Zone, announced this January. We will continue to work with these zones to secure them as the UK's AI powerhouses, as well as identifying new sites around the UK with the potential to become AI Growth Zones.
The Department is working closely with colleagues across Government to strengthen coordination on the development, validation and uptake of non‑animal methods. The first cross‑departmental ministerial meeting on the delivery of the strategy is scheduled to take place next month and will provide a formal mechanism to drive progress and ensure alignment across policy areas.
We currently have a small amount of staff resource allocated to delivering our sustainability, net zero and green innovation objectives. Based on the planned continuation of this work at current levels, the estimated average annual cost over the next five years is approximately £58k.
In parallel, we are reviewing our future accommodation requirements, with options under consideration including the retention of our existing IPO-owned building or relocation to premises that meet net zero compliance standards. Full details are not yet available, and we do not anticipate any changes to, or costs associated with, sustainability programmes for at least the next two years.
Everyone should be able to benefit from the digital world – helping families save money, get a better job, and access services like the NHS more easily.
But we know some people face real barriers, and older people are more likely to be offline: data from 2025 shows that 13% of adults aged 65+ did not have home internet access, compared to 3% of adults aged 16-64.
That’s why we published the Digital Inclusion Action Plan and launched the £11.9 million Digital Inclusion Innovation Fund - helping more people, including older people, across the UK get the access, skills and confidence to get online. We are also committed to making digital public services simple and accessible for everyone, by working on renewed digital standards for essential public services and stronger accountability, alongside well‑supported offline routes.
In February, major telecoms providers signed a new charter to end unexpected mid-contract price rises and make social tariffs easier to access, helping millions manage living costs.
I thank the hon. Member for highlighting concerns about affordable software licences for public libraries. This is a complex issue that has arisen from a change in Microsoft’s policy regarding the transition of libraries from Education to Not-for-Profit (NFP) pricing.
Since the issue was raised with DSIT, my officials have been working with DCMS, as the Department with responsibility for libraries, and with Microsoft, to address the practical challenges that these important public institutions face in renewing their software licences without a charity or company number. Microsoft provided library services with initial guidance to assist them in obtaining the not-for-profit discounts to which they are entitled.
In the months since this guidance was issued, it has been tested with library services, and DCMS has assisted them in navigating the process. DCMS has identified areas where the guidance for both library services and resellers can be improved, which we will continue to discuss with Microsoft to ensure libraries can access affordable licences going forward.
The Government keeps the level of funding for astronomy and space science under regular review to ensure it supports the UK’s strategic priorities and delivers value for money. Funding comes from a range of sources, primarily the Science and Technology Facilities Council (STFC) under UK Research & Innovation (UKRI), and the UK Space Agency (UKSA).
Government is investing a record £86 billion into R&D over the next four years, including £38.6 billion through UKRI. UKRI must ensure allocation decisions are informed by meaningful consultation with the scientific research community and a robust assessment of potential consequences for the UK’s scientific capability and international standing. STFC is currently working with the sector to model different spending scenarios for particle physics, astronomy and nuclear physics; no final spending decisions have been made.
Beyond UKRI funding, UKSA funds space science through our £511 million commitment to the European Space Agency's core budget, made at the Council of Ministers in November 2025. Further detail on UKSA funding plans outside of ESA will be set out in due course.
The Government is deeply concerned about the spread of antisemitic content, and tackling it is a priority. We recognise that AI-generated content can undermine trust and spread hate online. Under the Online Safety Act (OSA), enforced by Ofcom, regulated services must tackle AI-generated content that is illegal (including content that stirs up racial hatred, is threatening or abusive, or otherwise meets criminal thresholds) or harmful to children. This includes antisemitic content. The Secretary of State wrote to Ofcom in October and November 2025 asking them to do everything possible under the Act to tackle this content.
The department is exploring how to improve detection and transparency around AI-generated material, including through the Deepfake Detection Challenge 2026. We are also improving media literacy, encouraging critical engagement with and awareness of divisive and misleading content.
The government continues to work with community groups and partners to challenge hatred and protect public understanding from harmful content.