


Written Question
Artificial Intelligence: Safety
Friday 13th March 2026

Asked by: Lord Holmes of Richmond (Conservative - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government, further to the remarks by Lord Leong on 3 February (HL Deb col 1434) about the use of the SPACE framework to ensure safety, transparency and accountability for AI, in which publication, document or statement they set out that approach; and what activity they have taken to implement it.

Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)

The Government’s response to the AI Opportunities Action Plan outlines our regulatory approach to strengthening AI safety, security and robustness. We have accepted, and are acting on, recommendations to enhance regulatory capabilities. We have also announced a new Centre for AI Measurement to develop new AI assurance tools and strengthen the UK AI assurance ecosystem; committed to ensuring that the AI Security Institute has the ability to deliver on its responsibilities, is trusted by others, and works well with partners; and concluded a call for evidence on the AI Growth Lab, a cross-economy AI sandbox, to inform its further development and identify priority areas for its focus.

The Regulatory Innovation Office supports the government’s pro‑innovation approach to regulation by working with businesses and regulators to cut approval times for innovation and technologies while maintaining safety and public confidence. The Regulatory Innovation Office also coordinates cross‑government action to remove regulatory barriers to growth.

Through such initiatives, the Government has taken important steps to ensure that most AI systems are already regulated at the point of use by our existing expert regulators. We are closely monitoring how the technology develops and considering where further action may be required.


Written Question
Digital Technology: Human Rights
Monday 9th March 2026

Asked by: Lord Holmes of Richmond (Conservative - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what assessment they have made of the Demos report A Declaration on Digital Rights: Embedding human rights in a new deal for the digital age, published on 10 February; and what steps they are taking to embed human rights protections in the UK’s regulatory approach to technology and AI.

Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)

We are committed to embedding human rights protections across the UK’s approach to regulating technology and AI. The UK already complies with its human rights obligations, including through the Human Rights Act (HRA) 1998. Individuals can uphold those rights in UK courts, which have consistently interpreted the rights set out in the European Convention on Human Rights, as applied under the HRA, in a flexible way that keeps pace with new technology.

The UK has helped to shape key international AI initiatives, including by signing the Council of Europe’s AI Convention. This is the world’s first legally binding agreement on AI grounded in human rights, democracy and the rule of law. We will implement the Convention in a proportionate, innovation-friendly way, leveraging our existing human rights framework and sector-led regulation to safeguard rights while supporting growth.


Written Question
Artificial Intelligence
Thursday 26th October 2023

Asked by: Lord Holmes of Richmond (Conservative - Life peer)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government what plans they have to encourage public participation in questions around the use of AI and its impact on society, what role citizens assemblies might play in those plans, and how the Centre for Data Ethics and Innovation will be involved.

Answered by Viscount Camrose - Shadow Minister (Science, Innovation and Technology)

In 2021 the Government published its National AI Strategy – a 10-year vision to make the UK an AI superpower by investing in our ecosystem, driving adoption of AI across sectors, and ensuring we get the governance of AI right. The strategy recognised that public trust in, and support for, the Government’s approach to and use of AI was crucial to maximising its opportunities and value while minimising its risks.

To develop the Strategy, the Government ran an open survey through the Alan Turing Institute. The survey received over 400 responses, and the Government also engaged over 250 organisations and businesses across different sectors.

The Government also ran a consultation to inform the AI regulation white paper, published this year. We heard from over 400 individuals and organisations, with a wide range of views represented including regulators, industry, academia, and civil society. The Government has also engaged regulators, businesses, start-ups, research groups, trade unions, charities and advocacy groups through roundtables and workshops.

In advance of the Government’s AI Safety Summit to be held this month, DSIT has engaged broadly with stakeholders to ensure that the voices and views of diverse groups and individuals have helped to shape the Summit’s focus. This included four official pre-Summit events with the Royal Society, the British Academy, techUK and The Alan Turing Institute, as well as public Q&As on X (formerly Twitter).

We will continue to engage with the public to inform our approach to drive responsible innovation in AI including through the work of the Centre for Data Ethics and Innovation (CDEI). The CDEI’s Public Attitudes team conducts an ongoing programme of quantitative and qualitative research to engage the public on AI. This has recently included focus groups and deliberative dialogues with diverse groups to understand public attitudes towards the use of AI in society. CDEI also conducts a large-scale annual survey which monitors public attitudes to data-driven technology and AI, the latest wave of which will be published in November this year. CDEI disseminates the findings from its research widely, and the insight is used across government, academia and the private sector to help ensure trustworthy approaches to AI.