Draft Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025 Debate
It is a pleasure to serve under your chairship, Sir Christopher. I am disappointed in this statutory instrument. I recognise the Minister's acknowledgment of the small sites, high-harm issue, but the issue is far more important than that, and we are missing an opportunity here. Can the Minister set out why the regulations as drafted do not follow the will of Parliament, accepted by the previous Government and written into the Act, that thresholds for categorisation can be based on risk or size? That was a long-argued point that went through many iterations.
The then Minister accepted the amendment that was put forward and said:
“many in the House have steadfastly campaigned on the issue of small but risky platforms.” —[Official Report, 12 September 2023; Vol. 737, c. 806.]
He confirmed that the legislation would now give the Secretary of State the discretion to decide whether to set a threshold based on the number of users or the functionalities offered, or both factors, with the change ensuring that the framework was as flexible as possible in responding to the risk landscape. That flexibility has been thrown away in these regulations. The Minister just said that we must do everything in our power, yet the Government are throwing out a crucial change that was made to the Act precisely to give them more power. In making this change, they are giving up a power.
The amendment was to ensure that small sites dedicated to harm, such as sites providing information on suicide or self-harm or sites set up to target abuse and hatred at minority groups, as we saw in the riots in the summer, were subject to the fullest range of duties. When Ofcom published its advice, however, it disregarded this flexibility and advised that regulations should be laid bringing only the large platforms into category 1.
Is the hon. Member as concerned as I am that the Government seem to be ignoring the will of Parliament in their decision? Is he worried that young people particularly will suffer as a result?
Absolutely—I am. The Secretary of State’s decision to proceed with this narrow interpretation of the Online Safety Act provisions, and the failure to use the power they have to reject Ofcom’s imperfect advice, will allow small, risky platforms to continue to operate without the most stringent regulatory restrictions available. That leaves significant numbers of vulnerable users—women and individuals from minority groups—at risk of serious harm from targeted activity on these platforms.
I will put a few more questions to the Minister. How do His Majesty's Government intend to assess whether Ofcom's regulatory approach to small but high-harm sites is proving effective, and have any details been provided on Ofcom's schedule of research about such sites? What assessment have the Government made of the different harms occurring on small, high-harm platforms? Have they broken this down by type of harm, and will they make such information available? Have the Government received legal advice about the use of service disruption orders for small but high-harm sites? Do the Government expect Ofcom to take enforcement action against small but high-harm sites, and have they made an assessment of the likely timescales for enforcement action? Will the Government set out criteria against which they expect Ofcom to keep its approach to small but high-harm sites under continual review, as set out in their draft statement of strategic priorities for online safety?
Was the Minister aware of the previous Government’s commitment that Select Committees in both Houses would be given the opportunity to scrutinise draft Online Safety Act statutory instruments before they were laid? If she was, why did that not happen in this case? Will she put on record her assurances that Online Safety Act statutory instruments will in future be shared with the relevant Committees before they are laid?
For all those reasons, I will vote against the motion.
I appreciate the opportunity to speak in this Committee, Sir Christopher. Like at least one other Member in the room, I lived the Online Safety Bill for a significant number of months—in fact, it seemed to drag on for years.
As the Minister said, the Online Safety Act is long overdue. We have needed this legislation for 30 years, since I was a kid using the internet in the early ’90s. There has always been the risk of harm on online platforms, and there have always been places where people can be radicalised and can see misogynistic content or content that children should never be able to see. In this case, legislation has moved significantly slower than society—I completely agree with the Minister about that—but that is not a reason for accepting the statutory instrument or agreeing with the proposed threshold conditions.
On the threshold conditions, I am unclear as to why the Government have chosen 34 million and 7 million for the average monthly active users. Is it 34 million because Reddit happens to have 35 million average UK users—is that why they have taken that decision? I absolutely believe that Reddit should be in scope of category 1, and I am pretty sure that Reddit believes it should be in scope of category 1 and have those additional duties. Reddit is one of the places where the functionalities and content recommendation services mean that people, no matter what age they are, can see incredibly harmful content. They can also see content that can be incredibly funny—a number of brilliant places on Reddit allow people to look at pictures of cats, which is my favourite way to use the internet—but there are dark places in Reddit forums, where people can end up going down rabbit holes. I therefore agree that platforms such as Reddit should be in scope of category 1.
The Minister spoke about schedule 11 and the changes that were made during the passage of the Act. The Minister is absolutely right. Paragraph 1(5) of that schedule states:
“In making regulations under sub-paragraph (1), the Secretary of State must take into account the likely impact of the number of users of the user-to-user part of the service, and its functionalities, on how easily, quickly and widely regulated user-generated content is disseminated by means of the service.”
However, that does not undo the fact that we as legislators made a change to an earlier provision in that schedule. We fought for that incredibly hard and at every opportunity—in the Bill Committee, on the Floor of the House, in the recommitted Committee and in the House of Lords. At every stage, we voted for that change to be made, and significant numbers of outside organisations cared deeply about it. We wanted small high-risk platforms to be included. The provision that was added meant that the Secretary of State must make regulations relating to
“any other characteristics of that part of the service or factors relating to that part of the service that the Secretary of State considers relevant.”
That was what the Government were willing to give us. It was not the original amendment that I moved in Bill Committee, which was specifically about small high-risk platforms, but it was enough to cover what we wanted.
What functionalities could and should be brought in scope? I believe that any service that allows users to livestream should be in the scope of category 1. We know that livestreaming is where the biggest increase in self-generated child sexual abuse material is occurring. We know that livestreaming is incredibly dangerous, as people who are desperate to get access to child sexual abuse material can convince vulnerable young people and children to livestream. There is no delay during which that content can be looked at and checked before it goes up, yet the Government do not believe that every service that allows six-year-olds to livestream should be within the scope of category 1. The Government do not believe that those services should be subject to those additional safety duties, despite the fact that section 1 of the Online Safety Act 2023 says platforms should be "safe by design". This instrument is not creating platforms that are safe by design.
Because the regulations include only services with over 34 million users, or over 7 million when it comes to content recommendation, they do nothing to exclude young people from the ability to stream explicit videos to anyone. I agree that the services caught by those thresholds are problematic. However, there are other really problematic services, causing life-changing, or in some cases life-ending, harm to children, young people and vulnerable adults, that will not be in the scope of category 1.
Generally, I am not a big fan of a lot of things that the UK Government have done; I have been on my feet in the Chamber arguing against a significant number of them. This is one of the things that makes me most angry, because the Government, by putting forward this secondary legislation, are legislating in opposition to the will and intention of the Houses of Parliament. I know that we cannot bind a future Government or House, but this is not what was intended or agreed, nor the basis on which Royal Assent was given; that was on the basis of assurances from Government Ministers that they would look at those functionalities and at small but high-risk platforms.
Given what Ofcom has put out in guidance and information about what it is doing on small but high-risk platforms, why are we not using everything that is available? Why are the Government not willing to use everything available to them to bring those very high-risk platforms into the scope of category 1?
The changes that category 1 services would be required to make include additional duties; for a start, they are under more scrutiny—which is to be expected—and they are put on a specific list of category 1 services that will be published. If that list of category 1 services included platforms such as 4chan, which some people may never have heard of, responsible parents would see the list and say, "Hold on a second. Why is 4chan on there? I don't want my children to be going on there. It is clearly not a ginormous platform, so it must be on there because it is a high-risk service." Parents would look at that list and talk to their children about those platforms. The category 1 list alone, never mind the additional duties, would have a positive impact. Putting suicide forums on that list of category 1 services would have a positive impact on the behaviour of parents, children, and the teachers who teach those young people how to access the internet safely.
I guarantee that a significant number of teachers and other people who are involved with young people have never heard of 4chan, but putting it on that list would give them an additional tool to enable them to approach young people and talk about the ways in which they use the internet.
I thank the hon. Lady for speaking so passionately on this matter. As the Liberal Democrat mental health spokesperson, I am increasingly coming across the fact that it is not just adults asking children to livestream; it is also children, peer to peer, who do not realise that it is illegal. As the hon. Lady touched on, the mental health impact is huge but also lifelong. Someone can have a digital footprint that they can never get rid of, and children who are uninformed and uneducated about the impacts of their decisions could be affected decades into the future.
I completely agree. That is an additional reason why livestreaming is one of my biggest concerns. That functionality should have been included as a matter of course. Any of the organisations that deal with young people and the removal of child sexual abuse material online, such as the Internet Watch Foundation, will tell you that livestreaming is a huge concern. The hon. Member is 100% correct.
That is the way I talk to my children about online safety: once something is put online—once it is on the internet—it cannot ever be taken back. It is there forever, no matter what anyone does about it, and young people may not have the capacity to understand that. If systems were safe by design, young people simply would not have access to livestreaming at all; they would not have access to that functionality, so there would be that moment of thinking before they do something. They would not be able to do peer-to-peer livestreaming that can then be shared among the entire school and the entire world.
We know from research that a significant amount of child sexual abuse material is impossible to take down. Young people may put their own images online, or somebody else may share them without their consent. Organisations such as the Internet Watch Foundation do everything they can to try to take down that content, but it is like playing whack-a-mole; it keeps coming back up. Once young people have fallen into that trap, the content cannot be taken back. If we were being safe by design, we would ensure, as far as the Government, we or Ofcom possibly could, that no young person would be able to access that functionality. As I said, it should have been included.
I appreciate what the Government said about content recommendation and the algorithms that are used to ensure that people stay on platforms for a significant length of time. I do not know how many Members have spent much time on TikTok, but people can start watching videos of cats and still be there an hour and a half later. The algorithms are there to try to keep us on the platform. They are there because, actually, the platforms make money from our seeing the advertisements. They want us to see exciting content. Part of the issue with the content recommendation referenced in the conditions is that platforms are serving more and more exciting and extreme content to try to keep us there for longer, so we end up with people being radicalised on these platforms—possibly not intentionally by the platforms, but because their algorithm serves more and more extreme content.
I agree that that content should have the lower threshold in terms of the number of users. I am not sure about the numbers of the thresholds, but I think the Government have that differentiation correct, particularly on the addictive nature of algorithmic content. However, they are failing on incredibly high-risk content. The additional duties for category 1 services involve a number of different things: illegal content risk assessments, duties relating to terms of service, children’s risk assessments, adult empowerment duties and record-keeping duties. As I said, the fact that those category 1-ranked platforms will be on a list is powerful in itself, but adding those additional duties is really important.
Let us say that somebody is undertaking a risky business—piercing, for example. Even though not many people get piercings in the grand scheme of things, the Government require piercing organisations to jump through additional hoops because they are involved in dangerous things that carry a risk of infection and other associated risks. They are required to meet hygiene regulations, register with environmental health and have checks of their records to ensure that they know who is being provided with piercings, because it is a risky thing. The Government are putting additional duties on them because they recognise that piercing is risky and potentially harmful.
However, the Government are choosing not to put additional duties on incredibly high-risk platforms. They are choosing not to do that. They have been given the right to do that. Parliament has made its will very clear: “We want the Government to take action over those small high-risk platforms.” I do not care how many hoops 4chan has to jump through. Give it as many hoops as possible; it is an incredibly harmful site, and there are many others out there—hon. Members mentioned suicide forums, for example. Make them jump through every single hoop. If we cannot ban them outright—which would be my preferred option—make them keep records, make them have adult-empowerment duties, and put them on a list of organisations that we, the Government or Ofcom reckon are harmful.
If we end up in a situation where, due to the failures of this Act, young people commit suicide and the platform is not categorised properly, there is a reduction in the protections, and in the information that the platform has to provide to families about their deceased children, because it is not categorised as category 1 or 2B. We could end up in a situation where a young person dies as a result of being radicalised on a forum—because the Government decided it should not be in scope—but that platform does not even have to provide the deceased child's family with access to their online usage. That is shocking, right? If the Government are not willing to take the proper action required, they should at least bring these platforms into the scope of the actions and requirements relating to deceased children.
I appreciate that I have taken a significant length of time—although not nearly as long as the Online Safety Act has taken to pass, I hasten to say—but I am absolutely serious about the fact that I am really, really angry about this. This is endangering children. This is endangering young people. This is turning the Online Safety Act back into what some people suggested it should be at the beginning: an anti-Facebook and anti-Twitter Act, or a regulation of Facebook and Twitter—or X—Act, rather than something that genuinely creates what it says in section 1 of the Act: an online world that is "safe by design".
This is not creating an online world that is safe by design; this is opening young people and vulnerable adults up to far more risks than it should. The Government are wilfully making this choice, and we are giving them the opportunity to undo this and to choose to make the right decision—the decision that Parliament has asked them to make—to include functionalities such as livestreaming, and to include those high-risk platforms that we know radicalise people and put them at a higher risk of death.
I thank all Members for their very powerful contributions to the debate. This instrument will bring us one step closer to a safer online world for our citizens. It is clearer than ever that it is desperately needed: transparency, accountability and user empowerment matter now more than ever.
The Opposition spokesperson, the hon. Member for Huntingdon, asked whether we agree on the need for companies not to wait for the duties in the Act to be implemented, but to ensure that safety is baked in from the start. I absolutely agree, and he will be aware that the Secretary of State has made that point on many occasions. He also raised the issue of proportionality. I confirm that many of the duties on categorised services are subject to the principle of proportionality, which requires Ofcom to consider measures that are technically feasible for providers of a certain size or capacity, and in some cases duties are based on the assessment of the risk of harm presented by the service.
For example, in determining what is proportionate for the user empowerment duties on content for category 1 services, the findings of the most recent user empowerment assessments are relevant. They include the incidence of relevant content on the service in addition to the size and capacity of the provider. Where a code of practice is relevant to a duty, Ofcom must have regard to the principles on proportionality, and what is proportionate for one kind of service might not be for another.
The hon. Member for Huntingdon is absolutely right that the pornography review has been completed. The Government are considering its findings at the moment and will publish it in due course.
In response to the hon. Members for Newton Abbot and for Aberdeen North (Kirsty Blackman) and to the right hon. and learned Member for Kenilworth and Southam (Sir Jeremy Wright), when the Online Safety Act was introduced, category 1 thresholds were due to be assessed based on the level of risk and harm for adults—as the Members read out very clearly. That was removed during the passage of the Bill by the previous Government.
As things stand, although Baroness Morgan's successful amendment made it possible for threshold conditions to be based solely on functionalities, it did not change the basis of Ofcom's research, which for category 1 is easy, quick and wide dissemination of content. The Secretary of State had to consider that. I will repeat that for all Members to hear again: the Secretary of State has to act within the powers given to him in schedule 11 when setting the threshold conditions. The powers do not allow for thresholds to be determined by another body, as per the amendment.
Although the hon. Member for Aberdeen North very powerfully read out the Act, it very clearly does not do what she is asking it to do. We absolutely agree that small but risky sites need to be covered, but as it stands, the Secretary of State does not have the powers to include them.
Sorry, I have lots of points to cover. If I have not covered the hon. Member's concerns in my response, she is more than welcome to intervene later.
These small but risky services are of significant concern to the Government, and they will still have to protect against illegal content and, where relevant, content that is harmful to children. Ofcom also has a dedicated taskforce to go after them. I hope that answers the hon. Member’s question.
The hon. Member for Newton Abbot also raised the review of Ofcom’s approach. The regulator has already trialled an approach of targeting small but risky services through its regulation of video-sharing platforms. Indeed, a number of those services improved their policies and content moderation in response. All the adult platforms under the VSP regime, large and small, have implemented age verification through this route to ensure that under-18s cannot access pornography on their services. In instances where services fail to make necessary changes, they will face formal enforcement action from Ofcom. Ofcom has a proven track record and the Government have every faith in its ability to take action against non-compliant services.
The hon. Member also raised the question of how Ofcom will take enforcement action against small but risky services. Ofcom will have robust enforcement powers available to use against companies that fail to fulfil their duties, and it will be able to issue enforcement decisions. Action can include fines of up to £18 million or 10% of qualifying worldwide revenue in the relevant year, whichever is higher, and Ofcom can direct companies to take specific steps to comply with its regulation.
I am going to make some progress. On livestreaming, Ofcom considered that functionality, but concluded that the key functionalities that spread content easily, quickly and widely are content recommender systems and forwarding or resharing user-generated content.
Services accessed by children must still be safe by design, regardless of whether they are categorised. Small but risky services will also still be required to comply with illegal content duties. The hon. Member for Aberdeen North should be well aware of that as she raised concerns on that issue.
On child safety, there were questions about how the online safety regime protects children from harmful content. The Act requires all services in scope to proactively remove and prevent users from being exposed to priority illegal content, such as illegal suicide content and child sexual exploitation and abuse material. That is already within the remit.
In addition, companies whose services are likely to be accessed by children will need to take steps to protect children from harmful content and behaviour on their services, including content that is legal but none the less presents a risk of harm to children. The Act designates content that promotes suicide or self-harm as primary priority content that is harmful to children. Parents and children will also be able to report pro-suicide or pro-self-harm content to the platform, and the reporting mechanism will need to be easy for child users to navigate. On 8 May, Ofcom published its draft children's safety codes of practice, in which it proposed measures that companies should employ to protect children from suicide and self-harm content, as well as other content.
Finally, on why category 1 is not based on risk, such as the risk of hate speech, when the Act was introduced, category 1 thresholds were due to be assessed on the level of risk of harm to adults from priority content disseminated by means of that service. As I said earlier, that was removed during the Act’s passage by the then Government and replaced with consideration of the likely functionalities and how easily, quickly and widely user-generated content is disseminated, which is a significant change. Although the Government understand that that approach has its critics, who argue that the risk of harm is the most significant factor, that is the position under the Act.
The Minister is making the case that the Secretary of State’s hands are tied by the Act —that it requires stuff in relation to the number of users. Can she tell us in which part of the Act it says that, because it does not say that? If she can tell us where it is in the Act, I am quite willing to sit down and shut up about this point, but it is not in the Act.
The legislation allows the Secretary of State to deviate from Ofcom’s advice and to publish a statement explaining why. However, the core consideration for category 1 under schedule 11 is—I repeat for the third time—how easily, quickly and widely regulated user-generated content is disseminated by means of a service. As a result, for category 1, Ofcom concluded that the content is disseminated with increased breadth as the number of users increases.
The decision to proceed with the threshold combination recommended by Ofcom, rather than discounting user-number thresholds, reflects that any threshold condition created by the Government should consider the factors as set out in the Act, including easy, quick and wide dissemination for category 1, and the evidence base. That is what the Act says. As a result, the Government decided to not proceed with an approach that deviated from Ofcom’s recommendation, particularly considering the risk of unintended consequences.
I am more than happy to write to the hon. Member for Aberdeen North with the full details. I understand that she feels very passionately about this point, but the Act is the Act. Although I am grateful for her contribution, I have to follow what the Act says, based on the legal advice that I get.
Thank you, Sir Christopher—I appreciate that prod. I did look at Standing Orders this morning, but could not find that bit, so that is incredibly helpful.
On what the Minister said about schedule 11 and the notes that she has been passed from her team on that point, I appreciate her commitment to share the Government’s legal advice. That will be incredibly helpful; it would have been helpful to have it in advance of this Committee.
In schedule 11, it says:
“In making regulations under sub-paragraph (1), the Secretary of State must take into account the likely impact of the number of users of the user-to-user part of the service, and its functionalities, on how easily, quickly and widely regulated user-generated content is disseminated by means of the service.”
Perhaps I cannot read English, or perhaps the Minister, her legal advisers and the team at DSIT read it in a different way from me, but the Secretary of State having to take something into account and the Secretary of State being bound by something are two different things—they are not the same. It does not say that the Secretary of State must regulate only on the specific number of users.
In fact, schedule 11 says earlier that the Secretary of State
“must make regulations specifying conditions…for the user-to-user part of regulated user-to-user services relating to each of the following”,
which are the
“number of users…functionalities of that part of the service, and…any other characteristics of that part of the service or factors”.
The Secretary of State must therefore make regulations in relation to any other characteristics of that part of the service or factors
“relating to that part of the service that the Secretary of State considers relevant.”
He must do that; the number of users is only something that he must take into account. The Government, however, have decided that what must be taken into account matters more than what the Secretary of State must actually do. They have decided that despite Parliament being pretty clear in the language it used.
I am not terribly happy with the Online Safety Act. It is a lot better than the situation we have currently, but it is far from perfect. As the Minister said, I argued in favour of keeping the stuff about legal but harmful content for adults. I argued against the then Government’s position on that, but the Act is the Act that we have.
The Minister’s point does not make sense. The Secretary of State has to take into account the number of users and how quickly things are disseminated, but he must make regulations about functionalities or factors that he considers relevant. Therefore, it seems that he does not consider suicide forums and livestreaming to be relevant; if he did, he would surely be bound by the “must” and would have to make regulations about them. It is frustrating that the Act does not do what it is supposed to do and does not protect young people from livestreaming. The Minister said that it protects people from seeing that illegal content, but it does not prevent them from creating it.
The Government could make regulations so that every platform that has a livestreaming functionality, or even every platform that has child users on it—there is a lot in the Act about the proportion of children who use a service—is automatically included in category 1, because they consider such platforms to be high risk.
It would not be right for either of us to ask the Minister to disclose legal advice—that clearly would not be appropriate—but I am grateful for the Minister’s offer to share a slightly more expansive description of why the Government have come to the conclusion that they have.
On the hon. Lady’s point about what the Act actually says, we have both quoted paragraph 1(5) of schedule 11, which deals with whether the language that has found its way into the ministerial statement is the be-all and end-all of the Minister’s conclusions. We both think it is not. If it is the case, as I think the Minister is arguing, that the ability to disseminate “easily, quickly and widely” is essentially a synonym for the scale of the service and the number of its users, what does the hon. Lady think of the amendment that Baroness Morgan made in the other place to paragraph 1(4), which says that when the regulations we are considering specify
“the way or ways in which the relevant conditions are met”,
for category 1 threshold conditions
“at least one specified condition about number of users or functionality must be met”?
The crucial word that was added is “or”. If the number of users were required to establish what the hon. Lady has described, the word “or” would be inappropriate.
I absolutely agree, and that is a helpful clarification.
If the Government have decided that it is too difficult to regulate high-risk platforms as category 1, and that they do not matter enough because they do not have enough of an impact, they should stand up and tell us that. Rather than saying that their hands have been tied by the Act—they manifestly have not—they need to take ownership of their actions. If they have decided that such platforms are not important enough or that they cannot be bothered having a fight with Ofcom about that, they should be honest and say, “This is the position we have decided to take.” Instead, they are standing up and saying, “Our hands have been tied,” but that is just not correct: their hands have not been tied by the Act.
I appreciate that the Minister will get in touch with me about the legal advice, but it will be too late. This statutory instrument will have been through the process by that time, and people will have been put at risk as a result of the Government’s failure. They have the power to take action in relation to functionalities and factors, and in relation to suicide forums, livestreaming and the creation of child sexual abuse material, and they are choosing not to.
If the Government have decided that it is too difficult to do that, that those platforms are not risky enough and that not enough people are being harmed by them, they need to hold their hands up and say, “We’ve decided that this is the position we are going to take.” They must not hide behind the legislation, which does not say what they are telling us it says. They should just be honest about the fact that they have decided that they cannot be bothered to take action. They cannot be bothered to have a fight with Ofcom because it is not important enough. Hiding behind the legislation is incredibly cowardly—it does not say that.