Asked by: Siobhan Baillie (Conservative - Stroud)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether she has made an assessment of the potential impact of generative artificial intelligence on the (a) nature and (b) scale of harms associated with (i) inauthentic and (ii) non-verified social media accounts.
Answered by Paul Scully
Following the AI Regulation White Paper, the government is establishing a central AI risk function which will identify, measure and monitor existing and emerging AI risks, drawing on expertise from across government, industry and academia. This will allow us to monitor risks, including online harms.
The Online Safety Bill will require in-scope services to tackle AI-generated content. Content produced by AI bots on those services will fall within the scope of the regulation if the bots are controlled by a user and interact with other users.
In addition, adult users will have the choice to filter out non-verified users, including generative AI bots that impersonate others or spread harmful content.
Asked by: Siobhan Baillie (Conservative - Stroud)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether she has made an assessment of the potential impact of (a) generative artificial intelligence and (b) inauthentic and non-verified social media accounts on the (i) nature and (ii) prevalence of online fraud; and whether she has made an assessment of the implications for her Department's policies of comments on generative artificial intelligence by the Chief Executive of Ofcom to the Lords Communications and Digital Committee on 11 July 2023.
Answered by Paul Scully
Following the AI Regulation White Paper, the government is establishing a central AI risk function which will identify, measure and monitor existing and emerging AI risks, drawing on expertise from across government, industry and academia. This will allow us to monitor risks, including fraud.
All companies in scope of the Online Safety Bill will need to take action to prevent fraudulent content, including AI-generated content or content posted by AI bots, from appearing on their platforms, and to remove it swiftly if it does.
As Ofcom recognised, the Bill provides it with a powerful set of tools to understand how bots are used and how services are assessing their risks and appropriate safety measures. In line with requirements in the Bill, the Government will review the operation of the Online Safety framework two to five years after the safety duties come into force, and we expect AI to be an important part of that review.