Question to the Department for Science, Innovation & Technology:
To ask His Majesty's Government what discussions they are having with the technology industry to ensure artificial intelligence models are tested robustly before deployment, and to embed safeguards such as suicide prevention into model development.
The Government has ongoing partnerships with artificial intelligence developers to ensure the safety of the models they develop. It is essential that AI models are appropriately tested so that safeguards are robust, possible harms are considered and risks are mitigated, and the British public are protected.
The role of the AI Security Institute (AISI) is to build an evidence base of these risks, to inform government decision-making and to help make AI more secure and reliable. AISI works in close collaboration with AI companies to assess model safeguards and suggest mitigations. To date, AISI has tested over 30 models from leading AI companies, including OpenAI, Google DeepMind and Anthropic. Its findings have led to tangible changes to AI models before deployment, reducing risk from day one.
Once deployed, many AI services fall within the scope of the Online Safety Act 2023, which places robust duties on all in-scope user-to-user and search services, including those deploying generative artificial intelligence chatbots, to prevent users from encountering illegal suicide and self-harm content. These duties apply regardless of whether the content is created by AI or by humans.