Artificial Intelligence: Safeguarding Debate
Baroness Berger (Labour - Life peer)
Lords Chamber

To ask His Majesty’s Government, following recent reports by OpenAI that many people have exhibited signs of suicidal ideation or other mental health emergencies while messaging a generative artificial intelligence chatbot, whether they have plans to safeguard such individuals.
My Lords, safeguarding people experiencing suicidal ideation or a mental health crisis is a priority. We recognise the growing use of generative AI chatbots and the potential risks that they can pose, particularly when people seek support during moments of acute distress. Whether content is created by AI or humans, the Online Safety Act places robust duties on all in-scope services, including those deploying chatbots, to prevent users encountering illegal suicide and self-harm content.
My Lords, ChatGPT is giving British teens dangerous advice on suicide, eating disorders and substance abuse. A report from the Center for Countering Digital Hate found that, within two minutes, the AI platform would advise a 13-year-old how to safely cut themselves; within 40 minutes, it would list the pills required for an overdose; and, after 72 minutes, it would generate suicide notes. Can my noble friend confirm that Ofcom will treat ChatGPT and other chatbots as search engines under the Online Safety Act, and assure the House that the regulator has both the powers and the will to enforce the protection of children code when it comes to generative AI platforms such as ChatGPT?