Draft Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025 Debate

Department: Department for Science, Innovation & Technology

Danny Chambers Excerpts
Tuesday 4th February 2025

General Committees
Kirsty Blackman

I appreciate the opportunity to speak in this Committee, Sir Christopher. Like at least one other Member in the room, I lived the Online Safety Bill for a significant number of months—in fact, it seemed to drag on for years.

As the Minister said, the Online Safety Act is long overdue. We have needed this legislation for 30 years, since I was a kid using the internet in the early ’90s. There has always been the risk of harm on online platforms, and there have always been places where people can be radicalised and can see misogynistic content or content that children should never be able to see. In this case, legislation has moved significantly slower than society—I completely agree with the Minister about that—but that is not a reason for accepting the statutory instrument or agreeing with the proposed threshold conditions.

On the threshold conditions, I am unclear as to why the Government have chosen 34 million and 7 million for the average monthly active users. Is it 34 million because Reddit happens to have 35 million average UK users—is that why they have taken that decision? I absolutely believe that Reddit should be in scope of category 1, and I am pretty sure that Reddit believes it should be in scope of category 1 and have those additional duties. Reddit is one of the places where the functionalities and content recommendation services mean that people, no matter what age they are, can see incredibly harmful content. They can also see content that can be incredibly funny—a number of brilliant places on Reddit allow people to look at pictures of cats, which is my favourite way to use the internet—but there are dark places in Reddit forums, where people can end up going down rabbit holes. I therefore agree that platforms such as Reddit should be in scope of category 1.

The Minister spoke about schedule 11 and the changes that were made during the passage of the Act. The Minister is absolutely right. Paragraph 1(5) of that schedule states:

“In making regulations under sub-paragraph (1), the Secretary of State must take into account the likely impact of the number of users of the user-to-user part of the service, and its functionalities, on how easily, quickly and widely regulated user-generated content is disseminated by means of the service.”

However, that does not undo the fact that we as legislators made a change to an earlier provision in that schedule. We fought for that incredibly hard and at every opportunity—in the Bill Committee, on the Floor of the House, in the recommitted Committee and in the House of Lords. At every stage, we voted for that change to be made, and significant numbers of outside organisations cared deeply about it. We wanted small high-risk platforms to be included. The provision that was added meant that the Secretary of State must make regulations relating to

“any other characteristics of that part of the service or factors relating to that part of the service that the Secretary of State considers relevant.”

That was what the Government were willing to give us. It was not the original amendment that I moved in Bill Committee, which was specifically about small high-risk platforms, but it was enough to cover what we wanted.

What functionalities could and should be brought in scope? I believe that any service that allows users to livestream should be in the scope of category 1. We know that livestreaming is where the biggest increase in self-generated child sexual abuse material is occurring. We know that livestreaming is incredibly dangerous, as people who are desperate to get access to child sexual abuse material can convince vulnerable young people and children to livestream. There is no delay in which that content can be looked at and checked before it goes up, yet the Government do not believe that every service that allows six-year-olds to livestream should be within the scope of category 1. The Government do not believe that those services should be subject to those additional safety duties, despite the fact that section 1 of the Online Safety Act 2023 says platforms should be “safe by design”. This is not creating platforms that are safe by design.

Because they include only services with over 34 million users, or over 7 million when it comes to content recommendation, the regulations do not exclude young people from the ability to stream explicit videos to anyone. I agree that the services they do cover are problematic. However, there are other really problematic services, causing life-changing, or in some cases life-ending, problems for children, young people and vulnerable adults, that will not be in the scope of category 1.

Generally, I am not a big fan of a lot of things that the UK Government have done; I have been on my feet in the Chamber arguing against a significant number of those things. This is one of the things that makes me most angry, because the Government, by putting forward this secondary legislation, are legislating in opposition to the will and intention of the Houses of Parliament. I know that we cannot bind a future Government or House, but this is not what was intended, agreed and moved, nor the basis on which Royal Assent was given; Royal Assent was given on the basis that we had assurances from Government Ministers that they would look at those functionalities and at small but high-risk platforms.

Given what Ofcom has put out in guidance and information about what it is doing on small but high-risk platforms, why are we not using everything that is available? Why are the Government not willing to use everything available to them to bring those very high-risk platforms into the scope of category 1?

Category 1 services would be required to meet additional duties; for a start, they are under more scrutiny, which is to be expected, and they are put on a specific list of category 1 services which will be published. That list includes platforms such as 4chan, which some people may never have heard of. Responsible parents will see that list and say, “Hold on a second. Why is 4chan on there? I don’t want my children to be going on there. It is clearly not a ginormous platform, therefore it must be on there because it is a high-risk service.” Parents will look at that list and talk to their children about those platforms. The category 1 list alone, never mind the additional duties, would have a positive impact. Putting suicide forums on that list of category 1 services would have a positive impact on the behaviour of parents, children, and the teachers who teach those young people how to access the internet safely.

I guarantee that a significant number of teachers and other people who are involved with young people have never heard of 4chan, but putting it on that list would give them an additional tool to enable them to approach young people and talk about the ways in which they use the internet.

Dr Danny Chambers (Winchester) (LD)

I thank the hon. Lady for speaking so passionately on this matter. As the Liberal Democrat mental health spokesperson, I am increasingly coming across cases where it is not just adults asking children to livestream, but children doing so peer-to-peer, without realising that it is illegal. As the hon. Lady touched on, the mental health impact is huge but also lifelong. Someone can have a digital footprint that they can never get rid of, and children who are uninformed and uneducated about the impacts of their decisions could be affected decades into the future.

Kirsty Blackman

I completely agree. That is an additional reason why livestreaming is one of my biggest concerns. That functionality should have been included as a matter of course. Any of the organisations that deal with young people and the removal of child sexual abuse material online, such as the Internet Watch Foundation, will tell you that livestreaming is a huge concern. The hon. Member is 100% correct.

That is the way I talk to my children about online safety: once something is put online—once it is on the internet—it cannot ever be taken back. It is there forever, no matter what anyone does about it, and young people may not have the capacity to understand that. If systems were safe by design, young people simply would not have access to livestreaming at all; they would not have access to that functionality, so there would be that moment of thinking before they do something. They would not be able to do peer-to-peer livestreaming that can then be shared among the entire school and the entire world.

We know from research that a significant amount of child sexual abuse material is impossible to take down. Young people may put their own images online, or somebody else may share them without their consent. Organisations such as the Internet Watch Foundation do everything they can to try to take down that content, but it is like playing whack-a-mole; it keeps coming back up. Once they have fallen into that trap, the content cannot be taken back. If we were being safe by design, we would ensure, as far as the Government, Ofcom and we ourselves possibly could, that no young person would be able to access that functionality. As I said, it should have been included.

I appreciate what the Government said about content recommendation and the algorithms that are used to ensure that people stay on platforms for a significant length of time. I do not know how many Members have spent much time on TikTok, but people can start watching videos of cats and still be there an hour and a half later. The algorithms are there to try to keep us on the platform. They are there because, actually, the platforms make money from our seeing the advertisements. They want us to see exciting content. Part of the issue with the content recommendation referenced in the conditions is that platforms are serving more and more exciting and extreme content to try to keep us there for longer, so we end up with people being radicalised on these platforms—possibly not intentionally by the platforms, but because their algorithm serves more and more extreme content.

I agree that that content should have the lower threshold in terms of the number of users. I am not sure about the threshold numbers themselves, but I think the Government have that differentiation correct, particularly on the addictive nature of algorithmic content. However, they are failing on incredibly high-risk content. The additional duties for category 1 services involve a number of different things: illegal content risk assessments, duties relating to terms of service, children’s risk assessments, adult empowerment duties and record-keeping duties. As I said, the fact that those category 1 platforms will be on a list is powerful in itself, but adding those additional duties is really important.

Let us say that somebody is undertaking a risky business—piercing, for example. Even though not many people get piercings in the grand scheme of things, the Government require piercing organisations to jump through additional hoops because they are involved in dangerous things that carry a risk of infection and other associated risks. They are required to meet hygiene regulations, register with environmental health and have checks of their records to ensure that they know who is being provided with piercings, because it is a risky thing. The Government are putting additional duties on them because they recognise that piercing is risky and potentially harmful.

However, the Government are choosing not to put additional duties on incredibly high-risk platforms. They are choosing not to do that. They have been given the right to do that. Parliament has made its will very clear: “We want the Government to take action over those small high-risk platforms.” I do not care how many hoops 4chan has to jump through. Give it as many hoops as possible; it is an incredibly harmful site, and there are many others out there—hon. Members mentioned suicide forums, for example. Make them jump through every single hoop. If we cannot ban them outright—which would be my preferred option—make them keep records, make them have adult-empowerment duties, and put them on a list of organisations that we, the Government or Ofcom reckon are harmful.

If, because of the failures of this Act, young people commit suicide and the platform involved is not categorised properly, there is then a reduction in the protections, and in the information that the platform has to provide to the families of deceased children, because it is not categorised as category 1 or 2B. We could end up in a situation where a young person dies as a result of being radicalised on a forum, because the Government decided it should not be in scope, but that platform does not even have to give the deceased child’s family access to their online usage. That is shocking, right? If the Government are not willing to take the proper action required, they should at least bring these platforms into the scope of the actions and requirements related to deceased children.

I appreciate that I have taken a significant length of time—although not nearly as long as the Online Safety Act has taken to pass, I hasten to say—but I am absolutely serious about the fact that I am really, really angry about this. This is endangering children. This is endangering young people. This is turning the Online Safety Act back into what some people suggested it should be at the beginning: an anti-Facebook and anti-Twitter Act, or a regulation of Facebook and Twitter (or X) Act, rather than something that genuinely creates what it says in section 1 of the Act: an online world that is “safe by design”.

This is not creating an online world that is safe by design; this is opening young people and vulnerable adults up to far more risks than it should. The Government are wilfully making this choice, and we are giving them the opportunity to undo this and to choose to make the right decision—the decision that Parliament has asked them to make—to include functionalities such as livestreaming, and to include those high-risk platforms that we know radicalise people and put them at a higher risk of death.