Online Safety: Children and Young People Debate
Dan Norris (Labour - North East Somerset and Hanham)
Debate with the Department for Science, Innovation & Technology
Westminster Hall
This information is provided by Parallel Parliament and does not comprise part of the official record
I absolutely agree that the companies must take those responsibilities seriously, because that will be the law. I am keen that we, as legislators, make sure that the law is as tight as it possibly can be to protect as many children as possible. We will never be able to eradicate everything online, and this is not about innovation. It is about making sure that we get this absolutely right for the next generation and for those using platforms now, so I thank my hon. Friend for his intervention.
The first meeting I called when I was elected the MP for Darlington was with the headteachers of every school and college in my town. I asked them to join together to create a town-wide forum to hear the voices of children and young people on what needs to change about online safety. The first online safety forum took place a couple of weeks ago, and the situation facing young people—year 10s, specifically—is much worse than I had anticipated.
The young people said that online bullying is rife. They said it is common for their peers to send and doctor images and videos of each other without consent, to spread rumours through apps, to track the locations of people in order to bully them through apps, to organise and film fights through apps, to be blackmailed on apps, to speak on games and apps to people they do not know, and to see disturbing or explicit images unprompted and without searching for them. They also said it is common to see content that makes them feel bad about themselves. This has to stop.
The last Government’s Online Safety Act 2023 comes into force in April 2025. The regulator, Ofcom, will publish the children’s access assessments guidance in January 2025. This will give online services that host user-generated content, search services and pornography services in the UK three months to assess whether their services are likely to be accessed by children. From April 2025, when the children’s codes of practice are to be published, those platforms and apps will have a further three months to complete a children’s risk assessment. From 31 July 2025, specific services will have to disclose their risk assessments to Ofcom. Once the codes are approved by Parliament, providers will have to take steps to protect users. There is to be a consultation on the codes in spring 2025, and I urge everybody interested in the topic—no matter their area of expertise or feelings on it—to feed into that consultation. The mechanism for change is in front of us, but my concern is that the children’s codes are not strong enough.
I congratulate my hon. Friend on securing this important debate. Could she comment on the use of artificial intelligence to create child sexual abuse materials? That is a key issue now. Many years ago, I trained with the National Society for the Prevention of Cruelty to Children as a child protection officer, and what I learned back then is that we have to get ahead of all the technologies in order to deal with the challenges effectively. Does she have any thoughts on that point? She may be coming to it in her own remarks.
I thank my hon. Friend for raising that great threat. My area of expertise on the issue is children’s and service users’ voices. There is definitely space for Ofcom and the Government to try to regulate the illegal manufacturing of images through AI. When I asked children in my constituency whether they had ever seen something that they knew was made by AI, they said yes—they had seen images of people that they knew were not real—but the notifications and warnings to tell them that it was AI were not as explicit as they could be. In other words, they could tell for themselves, but the notifications were not comprehensive enough for other children, who may not have noticed. This is a real danger.
There will always be content created online that we cannot police. We have to accept—as we do with any other piece of legislation—that there will be criminal actors, but I have called this debate because there are ways to protect children from harmful content, including by using the right age verification model. I am keen to focus my contribution on trying to protect children from content, in the round, that is harmful to them.
As I said before, the mechanism for change is in front of us, but my concern is that the children’s codes are not strong enough. The children in my town have told me—and I am sure everybody here knows it—that the current age verification requirements are easily circumvented, and that deeply disturbing content on some sites is sent to them without their asking for it. That means that the sites are hosting content that is deeply disturbing for children, and that the age verification is not fit for purpose. We need to talk either about stopping those sites from hosting that content, which is very difficult, or about changing the age verification process.