Question to the Department of Health and Social Care:
To ask the Secretary of State for Health and Social Care, whether he has made an assessment of the adequacy of safeguards in AI when dealing with mental health-based queries.
The Department recognises the importance of safeguards when using artificial intelligence (AI) for mental health queries. The United Kingdom has a world-leading regulatory system, and the National Health Service operates within a comprehensive regulatory framework for AI, underpinned by rigorous standards established by bodies including the Medicines and Healthcare products Regulatory Agency, the National Institute for Health and Care Excellence, the Health Research Authority, and the Care Quality Commission. These agencies ensure that AI technologies are safe, effective, and ethically deployed within healthcare settings.
Publicly available AI applications that are not deployed by the NHS, such as ChatGPT or Google’s Gemini, are not regulated as medical technologies and may offer incorrect or harmful information. Users are strongly advised to exercise caution when using these technologies. The Department recommends that individuals seek advice from the NHS website, which provides clinically approved guidance on mental health, or consult a healthcare professional.
The Department continues to work with NHS England and regulators to strengthen oversight and ensure AI in health and care is safe, effective, and accountable.