Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department of Health and Social Care:
To ask His Majesty's Government what assessment they have made of the use of AI-enabled assistive technologies, including wearable devices, in supporting people living with dementia; and what steps they are taking to ensure those technologies are safe, effective and accessible while maintaining standards of data protection and patient care.
Answered by Baroness Merron - Parliamentary Under-Secretary (Department of Health and Social Care)
The Government recognises the potential of artificial intelligence (AI) enabled assistive technologies, including wearable devices, to support people living with dementia by promoting independence, safety, and quality of life, and by helping carers and care professionals provide more personalised and responsive support.
To help assess the use of technology in adult social care, the Government has funded the testing and evaluation of technologies, including AI-enabled ones, through the Adult Social Care Technology Fund. Emerging evidence indicates positive outcomes for people in receipt of care, care professionals, and the wider health and social care system. People using these technologies experienced greater independence, safety, wellbeing, and quality of life. We will publish the findings from these projects.
We are developing trusted, accessible guidance and setting new standards for care technologies, including evidence standards that will help people identify which technologies might be most useful for them. This will help people living with dementia, their carers, and care providers know which technologies are fit for purpose, secure, and compatible with the wider health and social care system, supporting them to invest in technology for the long term.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the impact of increased adoption of AI tools on employment levels in the banking sector; and what steps they are taking to support skills development and the long-term resilience of the financial services labour market.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government recognises that increased adoption of AI in financial services, including banking, has the potential to change the nature of some roles while supporting productivity growth, innovation and improved consumer outcomes. Financial services is already a leading adopter of AI in the UK and will play a key role in delivering the Government’s ambition to have the fastest AI adoption rate in the G7.
The Government is working closely with industry and regulators to better understand the implications of AI adoption, including for the workforce. To support skills development and long-term labour market resilience, we have commissioned work through the Financial Services Skills Commission on how the skills system can support effective adoption of AI and other disruptive technologies. This sits alongside the Government’s wider ambition to equip up to 10 million people with AI skills, helping workers adapt as roles evolve and ensuring the financial services labour market remains competitive and resilient.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the risks posed to AI infrastructure from the disruption to global supply of critical minerals, including helium, as a result of military operations in the Middle East.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
In a fast-evolving situation, the Government is closely monitoring the potential impact of disruption to trade and supply chains on the UK economy and on AI infrastructure. The UK is working closely with international partners to develop a viable plan to safeguard international shipping in the Strait of Hormuz. The Foreign Secretary issued an updated statement on the situation on 8 April.
A secure supply of critical minerals is vital to the UK's economic growth and security, industrial strategy, and clean energy transition. These risks reinforce the case for the UK Critical Minerals Strategy, with its key objectives of optimising domestic production while building resilient UK and global supply networks for critical minerals.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Cabinet Office:
To ask His Majesty's Government what assessment they have made of (1) the use of public sector procurement in supporting the growth of UK-based AI companies, and (2) the impact of that procurement on the development of domestic technology capabilities.
Answered by Baroness Anderson of Stoke-on-Trent - Baroness in Waiting (HM Household) (Whip)
As outlined in the AI Opportunities Action Plan, this Government is committed to supporting the growth of the UK AI sector. This commitment is underpinned by the establishment of five AI Growth Zones across the UK, which provide the sovereign processing power and energy security that homegrown firms need to scale, strengthening our national digital resilience.
Furthermore, the Government will publish new guidance for central government organisations procuring from the AI, steel, shipbuilding and energy infrastructure sectors regarding the appropriate use of national security exemptions. This will help to ensure we maintain sovereign supply chain resilience when it is a critical factor in supporting national security.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Home Office:
To ask His Majesty's Government what assessment they have made of the use of AI chatbot systems to facilitate stalking and harassment; and what steps they are taking to ensure that existing online safety, data protection and criminal law frameworks remain effective in addressing harms arising from the misuse of those technologies.
Answered by Lord Hanson of Flint - Minister of State (Home Office)
The Government continues to take steps to protect the UK public from crimes linked to the misuse of artificial intelligence (AI). This includes when AI is used to aid or facilitate stalking and harassment.
The Online Safety Act already regulates many generative AI services. However, the Government acknowledges that gaps remain, leading to inconsistent coverage of certain AI chatbot services.
We are addressing these gaps as a matter of urgency through an amendment to the Crime and Policing Bill. Through a new delegated power, we will be able to bring currently unregulated AI chatbots into the scope of the Online Safety Act. This will ensure they are subject to requirements to protect users from illegal content and activity.
We are also taking action on so-called ‘nudification’ tools, legislating through the Crime and Policing Bill to criminalise the development and supply of tools for generating non-consensual intimate images.
Beyond these measures, we will continue to work closely with law enforcement to tackle the harms presented by AI. The National Centre for Violence Against Women and Girls (VAWG) and Public Protection (NCVPP) continues to act as the subject matter expert on ongoing work relating to AI and VAWG in policing, ensuring that safeguarding is a core part of AI tools and models.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the implications for data protection and governance of the involvement of private technology companies in the handling of sensitive data held by public authorities and regulators; and what steps they are taking to ensure that appropriate safeguards relating to data protection, accountability and transparency are in place.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government is committed to ensuring that the involvement of private technology companies in the handling of sensitive data held by public authorities and regulators is subject to robust data protection, accountability, and transparency safeguards. All departments undertaking work involving personal data are required to conduct Data Protection Impact Assessments to ensure appropriate privacy, security, and fairness measures are in place. Where private‑sector tools, including algorithmic or AI‑enabled systems, are procured or used, departments must apply mandatory transparency standards and clearly document how such tools are embedded in decision‑making processes, their technical specifications, and relevant risk mitigations.
At a cross‑government level, the Government Digital Service (GDS), within the Department for Science, Innovation and Technology, is strengthening central coordination and oversight of data protection and privacy risks across government. This includes setting consistent standards, supporting departments on the responsible adoption of new technologies, and working closely with the Information Commissioner’s Office to raise data protection and information security standards across the public sector.
These measures are intended to ensure that the use of private technology companies supports innovation and improved public services, while maintaining high standards of data protection, accountability and public trust.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Home Office:
To ask His Majesty's Government what assessment they have made of the use of facial recognition technologies by police forces and the implications of pausing deployment pending further study of potential racial bias; and what steps they are taking to ensure that such systems are subject to appropriate safeguards, oversight and standards to prevent discriminatory outcomes.
Answered by Lord Hanson of Flint - Minister of State (Home Office)
The Home Office works closely with police forces and stakeholders to assess the use of facial recognition by law enforcement. As part of this engagement, we have consulted on a new legal framework on how and when law enforcement should use biometrics and facial recognition, including the safeguards that should apply to the use of these technologies. That consultation closed on 12 February; we are considering responses and will legislate in due course.
When using the technology, the police must operate within the legal framework, including data protection, equality and human rights legislation, national guidance, a code of practice and force‑level policies. The Home Office is aware of the risk of bias in facial recognition algorithms and all police facial recognition systems funded by the Home Office must be independently tested so that they can be operated at settings where there is negligible bias.
The Home Secretary has also tasked His Majesty’s Inspectorate of Constabulary and Fire & Rescue Services (HMICFRS), with support from the Forensic Science Regulator, to examine whether people have been affected by bias as part of its inspection of police and relevant law enforcement agencies’ use of retrospective facial recognition. The inspection is in progress, and the terms of reference have been published by HMICFRS.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the increasing deployment of generative AI systems in consumer-facing technologies such as voice assistants; and what steps they are taking to ensure that frameworks relating to data protection, consumer protection and product safety remain effective in the deployment of such technologies.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
We are committed to ensuring the UK is the leading adopter of AI in the G7, empowering British workers and businesses to seize its benefits by creating more rewarding jobs, increasing productivity and driving growth in our leading sectors.
AI assurance enables consumers to be confident that the products they buy will work as intended, which is why the Government is taking steps to build the AI assurance ecosystem that underpins safe deployment of AI, as set out in the Roadmap to Trusted Third-Party AI Assurance. This includes establishing the Centre for AI Measurement, led by the National Physical Laboratory, to accelerate the development of new, innovative AI assurance techniques.
The law also requires that all consumer products must be safe before they are placed on the market. The Office for Product Safety and Standards and local authority trading standards have enforcement powers across product safety regulations to take non-compliant or unsafe products off the UK market. The product safety framework will better respond to emerging risks posed by digital technologies, including AI-enabled and smart products, ensuring innovation does not come at the expense of consumer safety.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Business and Trade:
To ask His Majesty's Government what assessment they have made of the implications for competition and market access of the integration of generative AI tools into search engines; and what steps they are taking to ensure fair access for content providers and smaller firms in digital markets.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Competition and Markets Authority (CMA) is the UK’s independent competition authority and is responsible for operating the digital markets regime. It has designated Google with strategic market status in general search and search advertising services. Developments in generative AI were considered during the designation investigation. The CMA is now considering imposing conduct requirements to increase competition.
Asked by: Lord Taylor of Warwick (Non-affiliated - Life peer)
Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what assessment they have made of the safety, reliability and accountability of AI systems deployed by public services; and what steps they are taking to ensure that appropriate safeguards, testing standards and oversight mechanisms are in place.
Answered by Baroness Lloyd of Effra - Baroness in Waiting (HM Household) (Whip)
The Government recognises that the safe, reliable and accountable use of artificial intelligence is important to maintaining public trust in public services.
Departments deploying AI systems are expected to consider risks and impacts throughout the system lifecycle, including during design, development, deployment and operation. This includes compliance with relevant rules and regulations on safety, transparency, accountability and data protection.
The Government has published guidance to support this, including the Data and AI Ethics Framework, the AI Playbook for Government and the AI Knowledge Hub, which together provide advice on governance, risk management, testing and oversight.
In addition, the Department for Science, Innovation and Technology has published guidance on AI assurance, and a cross‑government AI Testing and Assurance Framework supports proportionate testing, evaluation and ongoing monitoring.
AI‑enabled services are also expected to meet the GOV.UK Service Standard, including demonstrating that they are safe, secure, reliable and well‑governed.