Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether she has made an assessment of the potential impact of (a) generative artificial intelligence and (b) inauthentic and non-verified social media accounts on the (i) nature and (ii) prevalence of online fraud; and whether she has made an assessment of the implications for her Department's policies of comments on generative artificial intelligence by the Chief Executive of Ofcom to the Lords Communications and Digital Committee on 11 July 2023.
Following the AI Regulation White Paper, the government is establishing a central AI risk function which will identify, measure and monitor existing and emerging AI risks, drawing on expertise from across government, industry and academia. This will allow us to monitor risks, including fraud.
All companies in scope of the Online Safety Bill will need to take action to prevent fraudulent content, including AI-generated content and content posted by AI bots, from appearing on their platforms, and to remove it swiftly if it does.
As Ofcom has recognised, the Bill provides it with a powerful set of tools to understand how bots are used, and how services are assessing their risks and applying appropriate safety measures. In line with requirements in the Bill, the Government will review the operation of the Online Safety framework two to five years after the safety duties come into force, and we expect AI to be an important part of that review.