Asked by: Gill German (Labour - Clwyd North)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the scale of spoofing scams using UK telephone numbers; and whether she plans to strengthen obligations on network operators to prevent fraudulent number allocation and misuse.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The most recent Ofcom research on the scale of spoofing calls reveals that in February 2025, two in five phone users said they received a suspicious call in the last three months. Tackling fraud and pursuing the criminals behind it is a priority for the government. We are working closely with industry and regulators to reduce spoofing and other forms of telecoms-enabled fraud.
In November 2025, the Government published the second Telecommunications Fraud Sector Charter, signed by major mobile network operators including BT EE, Virgin Media O2, and VodafoneThree. Through the Charter, signatories committed to measures to tackle spoofing, including adopting common standards to reduce fraud and abuse across all network-originated messaging channels.
As the independent regulator, Ofcom also consulted in 2025 on proposals to strengthen rules on overseas calls that falsely present UK numbers, including updates to its Calling Line Identification Guidance. The Government supports this work and continues to engage with Ofcom and industry to protect customers. More recently, on 9 March 2026, the Home Office also published its new Fraud Strategy which sets out how the Government will work with all partners, including law enforcement and industry, to make the UK a much harder place for criminals to operate.
Asked by: Gill German (Labour - Clwyd North)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the effectiveness of online safety protections for vulnerable adults, including neurodivergent adults such as those with autism and ADHD; and whether she plans to take steps to improve safeguarding and platform accountability.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The Online Safety Act places legal duties on platforms to make their services safer for all users, including vulnerable adults and those who are neurodivergent.
Services are required to protect users from illegal content and activity online, which may disproportionately affect vulnerable adults. In addition, the largest services have further duties to offer adults user empowerment tools, which allow them greater control over their online experience.
Ofcom has robust enforcement powers and we have been clear that Ofcom has the government’s full backing to take enforcement action.
We continue to build on the Act to keep users safe online, such as by making content that promotes self-harm a priority offence.
Asked by: Gill German (Labour - Clwyd North)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment her Department has made of the likelihood of children and young people migrating to alternative online services if age verification is introduced unevenly across service types, and what assessment she has made of the potential for app store or operating system level age assurance to mitigate such displacement.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
Ofcom will publish a report on the effectiveness of age assurance in terms of compliance with the duties under the Online Safety Act by July 2026, and a separate report on the role of app stores in protecting children by January 2027. We are also seeking views on a range of measures, including how age assurance can support effective implementation, as part of the government’s consultation to ensure children’s experiences online are safe and enriching.
We will not hesitate to take further action to protect children online whenever the evidence suggests we need to do so.
Asked by: Gill German (Labour - Clwyd North)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, if she will take steps to help ensure that official child rights impact assessments are undertaken to inform the evaluation of different policy options during the consultation, Driving action to improve children's relationship with mobile phones and social media.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
We will assess a range of impacts when deciding how we will act on social media, including children’s rights and their wellbeing. To inform those assessments, we will consult children and young people directly through the national conversation and consultation, because their views and voices must be heard.
Asked by: Gill German (Labour - Clwyd North)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential merits of introducing a statutory duty of care for children’s safety on Gen AI companies to ensure they are held accountable for the safety of children.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
In the UK, AI systems are regulated at the point of use under existing frameworks such as data protection law, competition law, equality law, and other forms of sectoral and cross-sectoral regulation.
Generative AI services that allow users to share content with one another, search live websites to provide search results, or publish pornographic content are regulated under the Online Safety Act. These services must protect users from illegal content and children from harmful and age-inappropriate content. The Technology Secretary has confirmed that the government is considering how the Online Safety Act applies to AI chatbots and whether more is needed to protect users.
Asked by: Gill German (Labour - Clwyd North)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, what assessment she has made of the potential implications for her policies of the report from the Molly Rose Foundation entitled the Children’s exposure to (a) suicide, (b) self-harm, (c) depression and (d) eating disorder content online, published in October 2025.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The government thanks the Molly Rose Foundation for its research.
Under the Online Safety Act, intentionally encouraging or assisting suicide is a priority offence for providers’ illegal content duties, and the government is taking action to give illegal self-harm content the same status, something the Molly Rose Foundation has long campaigned for.
Services likely to be accessed by children must use highly effective age assurance to prevent children encountering content that encourages, promotes or provides instructions for suicide, self-harm or eating disorders.
Ofcom has enforcement powers under the Act and has announced investigations into over 60 services suspected of failing to comply with their duties, including a pro-suicide forum.
Asked by: Gill German (Labour - Clwyd North)
Question to the Department for Science, Innovation & Technology:
To ask the Secretary of State for Science, Innovation and Technology, whether her Department plans to introduce statutory safeguards to help prevent AI chatbots from being used to simulate sexual (a) activity and (b) scenarios involving children.
Answered by Kanishka Narayan - Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
The government is committed to tackling the atrocious harm of child sexual exploitation and abuse.
The strongest protections in the Online Safety Act are for children – regulated services must remove illegal content and prevent children from encountering harmful content, including where it is AI generated.
The government has introduced an offence in the Crime and Policing Bill which criminalises possessing, creating or distributing AI tools designed to generate child sexual abuse material. We are committed to ensuring the UK is prepared for the changes AI will bring. When it comes to keeping children safe online, we will not hesitate to act.