Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what steps her Department is taking to ensure that safety-by-design principles are integrated into AI systems from inception rather than added retrospectively, particularly given the persistence of harmful online content, including deepfake child sexual abuse material, across the internet.
The government is committed to tackling the atrocious harm of child sexual exploitation and abuse (CSEA). Making, distributing or possessing child sexual abuse material (CSAM) is a serious criminal offence, and the Online Safety Act requires services to proactively identify and remove such content.
The Act requires in-scope services, including AI services, to take a safety-by-design approach to tackling these harms. Ofcom has set out safety measures, including requiring high-risk services to use technology to detect known images and to scan for links to such content. There are also measures to tackle online grooming.
We are taking further action in the Crime and Policing Bill to criminalise AI models that have been optimised to create CSAM, and to create a new legal defence allowing designated experts (such as AI developers and third-sector organisations) to stringently test whether AI systems can generate CSAM and to develop safeguards to prevent it.
The government remains committed to taking further steps, if required, to ensure that the UK is prepared for the changes that AI will bring.