I beg to move,
That the Committee has considered the draft Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025.
Thank you for coming to save the day, Sir Christopher; it is an honour to serve under your chairmanship. These regulations were laid before Parliament on 16 December 2024. As the Online Safety Act 2023 sets out, the Secretary of State must set thresholds for three categories of service: category 1, category 2A and category 2B. The services that fall into each of those categories will be required to comply with additional duties, with category 1 services having the most duties placed on them. The duties are in addition to the core duties that apply to all user-to-user and search services in scope.
The 2023 Act requires the Secretary of State to take specific factors into account when deciding the thresholds for each category. The threshold conditions for user-to-user services must be set by reference to user numbers and functionalities, as well as any other characteristics or factors relating to the user-to-user part of the service that the Secretary of State deems relevant.
For category 1, the key consideration is the likely impact of the number of users of the user-to-user part of the service and its functionalities, on how quickly, easily and widely regulated user-generated content is disseminated by means of the service. For category 2A, the key consideration is the likely impact of the number of users of the search engine on the level of risk of harm to individuals from search content that is illegal or harmful to children. For category 2B, the key consideration is the likely impact of the number of users of the user-to-user part of the service and its functionalities on the level of risk of harm to individuals from illegal content or content that is harmful to children disseminated by means of the service.
Those conditions form the basis of Ofcom’s independent research and advice, as published in March 2024, which the Secretary of State was required to consider when setting threshold conditions. In laying these regulations before Parliament, the Secretary of State has considered the research carried out and the advice from Ofcom and agreed to its recommendations.
I understand that this decision will not please everyone. In particular, I recognise that the thresholds are unlikely to capture so-called “small but risky services”, as per Baroness Morgan’s successful amendment, which made it possible to create a threshold condition by reference only to functionalities and any other factors or characteristics. However, it is important to note that all regulated user-to-user and search services, no matter their size, will be subject to existing illegal content duties and, where relevant, child safety duties. The categories do not change that fact.
If the codes on illegal content duties currently laid before Parliament pass without objection, the duties will be in effect by this spring. They will force services to put in place systems and processes to tackle illegal content. If a service is likely to be accessed by children, the child safety duties will require services to conduct a child safety risk assessment and provide safety measures for child users. We expect that those will come into effect this summer, on the basis that the codes for the duties will have passed by then.
Together, the illegal content and child safety duties will mark the biggest material change in online safety for UK citizens since the internet era began. We expect the Online Safety Act to cover more than 100,000 services of various sizes, showing that the legislation goes far and wide to ensure important protections for users, particularly children, online.
The instrument before us will enable additional duties for categorised services. All categorised services must comply with transparency reporting duties. They must also have terms setting out parents’ ability to access information about a child’s use of the service in the event of that child’s death. Category 1 services will have the most additional requirements. They will have to give adults more choice about the content they see and the people they interact with, and they must protect journalistic and news publisher content and content of democratic importance. The duties will also ensure that we can hold these companies to account over their terms of service, ensuring that they keep the promises they make to their users.
Once in force, the regulations will enable Ofcom to establish a public register of categorised services, which it expects to publish this summer. Ofcom will then consult on the draft codes of practice and guidance where relevant for additional duties. Ofcom will also do additional work to tackle small but risky services.
Ofcom’s work to tackle egregious content and enhance accountability does not stop with this instrument, which takes me back to the small but risky services that I mentioned. The horrifying stories I have heard about these sites during a number of debates recently are truly heartbreaking; we must do everything in our power to prevent vulnerable people from falling victim to such circumstances. I was pleased to see Ofcom set out in September 2024 its targeted approach to tackling small but risky services, which includes a dedicated supervision taskforce and a commitment to move to rapid enforcement action where necessary. That followed a letter from the Secretary of State to Ofcom inquiring about those services.
I am confident that the regulatory framework, combined with the bespoke taskforce, will work to keep all UK citizens safe online, but I must stress that the Secretary of State will keep the thresholds under review. If there is evidence that the categories have become outdated or that they inadequately protect users, he will not shy away from updating them or reviewing the legislation, as he has made clear recently.
Finally, the online world that we are looking to govern is complex and ever-changing. The Act will not solve every problem, but it will bring real benefit to children and adults who have had to contend with an unsafe online world for far too long. We should see the instruments we are debating as a step in that process and a first iteration, not as something fixed or set in stone, because there is much more to do. Our foremost priority is the timely implementation of the Act to enforce the additional duties as soon as possible. Years of delay and indecision have already come at a heartbreaking cost for vulnerable children and adults. Now it is time to deliver, but that relies on Parliament approving the categorisation thresholds without delay.
It is a pleasure to serve under your chairmanship, Sir Christopher. The Online Safety Act will be one of the lasting accomplishments of the last Government. It is world-leading legislation that places significant new responsibilities and duties on social media platforms and search services, to increase safety online. Most importantly, this vital legislation ensures that children are better protected online.
If it is worrying that children aged eight to 17 spend between two and five hours online per day, it is deeply concerning that half of 13-year-olds reported seeing hardcore, misogynistic pornographic material on social media sites. It is for those reasons that Conservative Ministers ensured that the strongest measures to protect children were included in the Online Safety Act. For example, platforms will be required to prevent children from accessing harmful and age-inappropriate content and to provide parents and children with clear and accessible ways to report problems online when they arise.
Furthermore, the Act requires all in-scope services that allow pornography to use highly effective age assurance to prevent children from accessing it, including services that host user-generated content and services that publish pornography. Ofcom has robust enforcement powers available to use against companies who fail to fulfil their duties. The Act also includes provisions to protect adult users, as it ensures that major platforms are more transparent about what kinds of potentially harmful content they allow. It gives users more control over the types of content they want to see.
The Act allocates regulated services into different categories to ensure that regulatory requirements are applied proportionately. The thresholds that we are debating follow Ofcom’s work and consultation on which platforms should fall within category 1, category 2A and category 2B. The highest-risk platforms—the largest social media and pornography sites—will be designated as category 1 and will bear the highest duty of care. Category 2A will contain the highest-risk search engines, such as Google and Bing, and category 2B will contain the remaining high-risk and high-reach sites.
The regulations enable Ofcom to designate services subject to additional duties. Those duties will address content that promotes, encourages or provides instructions for suicide, self-harm or eating disorders, as well as content that is abusive or incites hate. Where users are likely to access this content, category 1 providers will be required to proactively offer adults optional features to reduce the likelihood of their encountering such content or to alert them to its nature. There are concerns that the category 1 threshold may omit smaller platforms that host harmful content, and it may be prudent for the Government to look at redefining it at a later date.
The Online Safety Act’s impact assessment concludes that more than 25,000 companies may be within scope of the new regulatory framework. Companies designated into higher categories will face additional costs as they take on more duties. Can the Minister reassure tech companies, especially small and medium-sized businesses, that her Department will continue to work with them to ensure that those costs are affordable and proportionate?
I note that Ofcom expects the illegal harms safety duties to become enforceable around March 2025, once technology companies have assessed the risk of online harms on their platforms. Does the Minister agree that platforms do not need to wait, and should already be taking action to improve safety on their sites? Can the Minister confirm that she is encouraging platforms to take this proactive action?
Separately from the Online Safety Act, the last Government launched the pornography review to explore the effectiveness of regulation, legislation and the law enforcement response to pornography. I understand that that review has now concluded. Can the Minister provide her reassurance that the review’s final report will be published imminently?
I would be grateful for the Minister’s comments on these points. The Online Safety Act is a pivotal piece of legislation and makes the UK the safest place in the world to be a child online. I am proud of the previous Government’s role in passing it, and I urge the Minister to ensure that it is fully implemented as soon as possible.
It is a pleasure to serve under your chairship, Sir Christopher. I am disappointed in this statutory instrument. I recognise the Minister’s acknowledgment of the small sites, high-harm issue, but the issue is far more important and we are missing an opportunity here. Can the Minister set out why the regulations as drafted do not follow the will of Parliament, accepted by the previous Government and written into the Act, that thresholds for categorisation can be based on risk or size? That was a long-argued point that went through many iterations.
The then Minister accepted the amendment that was put forward and said:
“many in the House have steadfastly campaigned on the issue of small but risky platforms.” —[Official Report, 12 September 2023; Vol. 737, c. 806.]
He confirmed that the legislation would now give the Secretary of State the discretion to decide whether to set a threshold based on the number of users or the functionalities offered, or both factors, with the change ensuring that the framework was as flexible as possible in responding to the risk landscape. That flexibility has been thrown away in this new legislation. The Minister just said that we must do everything in our power, and yet the Government are throwing out a crucial change that was made to the Act precisely to give them more power. By taking this approach, they are giving a power away.
The amendment was to ensure that small sites dedicated to harm, such as sites providing information on suicide or self-harm or sites set up to target abuse and hatred at minority groups, as we saw in the riots in the summer, were subject to the fullest range of duties. When Ofcom published its advice, however, it disregarded this flexibility and advised that regulations should be laid bringing only the large platforms into category 1.
Is the hon. Member as concerned as I am that the Government seem to be ignoring the will of Parliament in their decision? Is he worried that young people particularly will suffer as a result?
Absolutely—I am. The Secretary of State’s decision to proceed with this narrow interpretation of the Online Safety Act provisions, and the failure to use the power they have to reject Ofcom’s imperfect advice, will allow small, risky platforms to continue to operate without the most stringent regulatory restrictions available. That leaves significant numbers of vulnerable users—women and individuals from minority groups—at risk of serious harm from targeted activity on these platforms.
I will put a few more questions to the Minister. How do His Majesty’s Government intend to assess whether Ofcom’s regulatory approach to small but high-harm sites is proving effective, and have any details been provided on Ofcom’s schedule of research about such sites? What assessment have the Government made of the different harms occurring on small, high-harm platforms? Have they broken this down by type of harm, and will they make such information available? Have the Government received legal advice about the use of service disruption orders for small but high-harm sites? Do the Government expect Ofcom to take enforcement action against small but high-harm sites, and have they made an assessment of the likely timescales for enforcement action? Will the Government set out criteria against which they expect Ofcom to keep its approach to small but high-harm sites under continual review, as set out in their draft statement of strategic priorities for online safety?
Was the Minister aware of the previous Government’s commitment that Select Committees in both Houses would be given the opportunity to scrutinise draft Online Safety Act statutory instruments before they were laid? If she was, why did that not happen in this case? Will she put on record her assurances that Online Safety Act statutory instruments will in future be shared with the relevant Committees before they are laid?
For all those reasons, I will vote against the motion.
I appreciate the opportunity to speak in this Committee, Sir Christopher. Like at least one other Member in the room, I lived the Online Safety Bill for a significant number of months—in fact, it seemed to drag on for years.
As the Minister said, the Online Safety Act is long overdue. We have needed this legislation for 30 years, since I was a kid using the internet in the early ’90s. There has always been the risk of harm on online platforms, and there have always been places where people can be radicalised and can see misogynistic content or content that children should never be able to see. In this case, legislation has moved significantly slower than society—I completely agree with the Minister about that—but that is not a reason for accepting the statutory instrument or agreeing with the proposed threshold conditions.
On the threshold conditions, I am unclear as to why the Government have chosen 34 million and 7 million for the average monthly active users. Is it 34 million because Reddit happens to have 35 million average UK users—is that why they have taken that decision? I absolutely believe that Reddit should be in scope of category 1, and I am pretty sure that Reddit believes it should be in scope of category 1 and have those additional duties. Reddit is one of the places where the functionalities and content recommendation services mean that people, no matter what age they are, can see incredibly harmful content. They can also see content that can be incredibly funny—a number of brilliant places on Reddit allow people to look at pictures of cats, which is my favourite way to use the internet—but there are dark places in Reddit forums, where people can end up going down rabbit holes. I therefore agree that platforms such as Reddit should be in scope of category 1.
The Minister spoke about schedule 11 and the changes that were made during the passage of the Act. The Minister is absolutely right. Paragraph 1(5) of that schedule states:
“In making regulations under sub-paragraph (1), the Secretary of State must take into account the likely impact of the number of users of the user-to-user part of the service, and its functionalities, on how easily, quickly and widely regulated user-generated content is disseminated by means of the service.”
However, that does not undo the fact that we as legislators made a change to an earlier provision in that schedule. We fought for that incredibly hard and at every opportunity—in the Bill Committee, on the Floor of the House, in the recommitted Committee and in the House of Lords. At every stage, we voted for that change to be made, and significant numbers of outside organisations cared deeply about it. We wanted small high-risk platforms to be included. The provision that was added meant that the Secretary of State must make regulations relating to
“any other characteristics of that part of the service or factors relating to that part of the service that the Secretary of State considers relevant.”
That was what the Government were willing to give us. It was not the original amendment that I moved in Bill Committee, which was specifically about small high-risk platforms, but it was enough to cover what we wanted.
What functionalities could and should be brought into scope? I believe that any service that allows users to livestream should be in the scope of category 1. We know that livestreaming is where the biggest increase in self-generated child sexual abuse material is occurring. We know that livestreaming is incredibly dangerous, as people who are desperate to get access to child sexual abuse material can convince vulnerable young people and children to livestream. There is no delay in which that content can be looked at and checked before it is put up, yet the Government do not believe that every service that allows six-year-olds to livestream should be within the scope of category 1. The Government do not believe that those services should be subject to those additional safety duties, despite the fact that section 1 of the Online Safety Act 2023 says platforms should be “safe by design”. However, this is not creating platforms that are safe by design.
The regulations do not remove young people’s ability to stream explicit videos to anyone, because they capture only services with over 34 million users, or over 7 million when it comes to content recommendation, and I agree that services of that scale are problematic. However, there are other really problematic services, causing life-changing—or in some cases, life-ending—problems for children, young people and vulnerable adults, that will not be in the scope of category 1.
Generally, I am not a big fan of a lot of things that the UK Government have done; I have been on my feet, in the Chamber, arguing against a significant number of those things. This is one of the things that makes me most angry, because the Government, by putting forward this secondary legislation, are legislating in opposition to the will and intention of the Houses of Parliament. I know that we cannot bind a future Government or House, but this is not what was intended or agreed, nor what Royal Assent was given to; it was given on the basis that we had assurances from Government Ministers that they would look at those functionalities and at small but high-risk platforms.
For all that Ofcom has put out guidance and information on what it is doing on small but high-risk platforms, why are we not using everything that is available? Why are the Government not willing to use everything available to them to bring those very high-risk platforms into the scope of category 1?
The changes that category 1 services would be required to make include additional duties; for a start, they are under more scrutiny—which is to be expected—and they are put on a specific list of category 1 services, which will be published. If that list of category 1 services included platforms such as 4chan, which some people may never have heard of, responsible parents would see that list and say, “Hold on a second. Why is 4chan on there? I don’t want my children to be going on there. It is clearly not a ginormous platform, therefore it must be on there because it is a high-risk service.” Parents would look at that list and talk to their children about those platforms. The category 1 list alone, never mind the additional duties, would have a positive impact. Putting suicide forums on that list of category 1 services would have a positive impact on the behaviour of parents, children, and the teachers who teach those young people how to access the internet safely.
I guarantee that a significant number of teachers and people who are involved with young people have never heard of 4chan, but putting it on that list would give them an additional tool to enable them to approach young people and talk about the ways in which they use the internet.
I thank the hon. Lady for speaking so passionately on this matter. As the Liberal Democrat mental health spokesperson, something that we are increasingly coming across is that it is not just adults asking children to livestream, but children, peer-to-peer, who do not realise that it is illegal. As the hon. Lady touched on, the mental health impact is huge but also lifelong. Someone can have a digital footprint that they can never get rid of, and children who are uninformed and uneducated about the impacts of their decisions could be affected decades into the future.
I completely agree. That is an additional reason why livestreaming is one of my biggest concerns. That functionality should have been included as a matter of course. Any of the organisations that deal with young people and the removal of child sexual abuse material online, such as the Internet Watch Foundation, will tell you that livestreaming is a huge concern. The hon. Member is 100% correct.
That is the way I talk to my children about online safety: once something is put online—once it is on the internet—it cannot ever be taken back. It is there forever, no matter what anyone does about it, and young people may not have the capacity to understand that. If systems were safe by design, young people simply would not have access to livestreaming at all; they would not have access to that functionality, so there would be that moment of thinking before they do something. They would not be able to do peer-to-peer livestreaming that can then be shared among the entire school and the entire world.
We know from research that a significant amount of child sexual abuse material is impossible to take down. Young people may put their own images online or somebody else may share them without their consent. Organisations such as the Internet Watch Foundation do everything they can to try to take down that content, but it is like playing whack-a-mole; it comes up and up and up. Once young people have fallen into that trap, the content cannot be taken back. If we were being safe by design, we would ensure, as far as possible—as far as the Government could do, we could do or Ofcom could do—that no young person would be able to access that functionality. As I said, it should have been included.
I appreciate what the Government said about content recommendation and the algorithms that are used to ensure that people stay on platforms for a significant length of time. I do not know how many Members have spent much time on TikTok, but people can start watching videos of cats and still be there an hour and a half later. The algorithms are there to try to keep us on the platform. They are there because, actually, the platforms make money from our seeing the advertisements. They want us to see exciting content. Part of the issue with the content recommendation referenced in the conditions is that platforms are serving more and more exciting and extreme content to try to keep us there for longer, so we end up with people being radicalised on these platforms—possibly not intentionally by the platforms, but because their algorithm serves more and more extreme content.
I agree that that content should have the lower threshold in terms of the number of users. I am not sure about the numbers of the thresholds, but I think the Government have that differentiation correct, particularly on the addictive nature of algorithmic content. However, they are failing on incredibly high-risk content. The additional duties for category 1 services involve a number of different things: illegal content risk assessments, duties relating to terms of service, children’s risk assessments, adult empowerment duties and record-keeping duties. As I said, the fact that those category 1-ranked platforms will be on a list is powerful in itself, but adding those additional duties is really important.
Let us say that somebody is undertaking a risky business—piercing, for example. Even though not many people get piercings in the grand scheme of things, the Government require piercing organisations to jump through additional hoops because they are involved in dangerous things that carry a risk of infection and other associated risks. They are required to meet hygiene regulations, register with environmental health and have checks of their records to ensure that they know who is being provided with piercings, because it is a risky thing. The Government are putting additional duties on them because they recognise that piercing is risky and potentially harmful.
However, the Government are choosing not to put additional duties on incredibly high-risk platforms. They are choosing not to do that. They have been given the right to do that. Parliament has made its will very clear: “We want the Government to take action over those small high-risk platforms.” I do not care how many hoops 4chan has to jump through. Give it as many hoops as possible; it is an incredibly harmful site, and there are many others out there—hon. Members mentioned suicide forums, for example. Make them jump through every single hoop. If we cannot ban them outright—which would be my preferred option—make them keep records, make them have adult-empowerment duties, and put them on a list of organisations that we, the Government or Ofcom reckon are harmful.
If we end up in a situation where, due to the failures of this Act, a young person commits suicide and the platform involved is not categorised properly, there is then a reduction in the protections, and in the information that the platform has to provide to the family about the deceased child, because it is not categorised as category 1 or 2B. We could end up in a situation where a young person dies as a result of being radicalised on a forum—because the Government decided it should not be in scope—but that platform does not even have to provide the deceased child’s family with access to that online usage. That is shocking, right? If the Government are not willing to take the proper action required, at least bring these platforms into the scope of the actions and requirements related to deceased children.
I appreciate that I have taken a significant length of time—although not nearly as long as the Online Safety Act has taken to pass, I hasten to say—but I am absolutely serious about the fact that I am really, really angry about this. This is endangering children. This is endangering young people. This is turning the Online Safety Act back into what some people suggested it should be at the beginning, an anti-Facebook and anti-Twitter Act, or a regulation of Facebook and Twitter—or X—Act, rather than something that genuinely creates what it says in section 1 of the Act: an online world that is “safe by design”.
This is not creating an online world that is safe by design; this is opening young people and vulnerable adults up to far more risks than it should. The Government are wilfully making this choice, and we are giving them the opportunity to undo this and to choose to make the right decision—the decision that Parliament has asked them to make—to include functionalities such as livestreaming, and to include those high-risk platforms that we know radicalise people and put them at a higher risk of death.
It is a great and unexpected pleasure to serve under your chairmanship, Sir Christopher. I want to take this opportunity to say something about why I think these regulations are a mistake. I agree with a great deal of what the hon. Member for Aberdeen North (Kirsty Blackman) has just said—I will seek not to repeat it—but it is probably worth noting at the outset that, as the Minister has rightly explained, these regulations are not the only means by which we will hold online services to account under this legislation.
A category 1 designation allows Ofcom—the regulator—to impose additional constraints on a platform. I think that is an entirely fair point to make, but as the hon. Lady observed, something like 100,000 online services are likely to be in scope of this Act overall. It is worth noting that, in Ofcom’s assessment, only something like 12 to 16 services would qualify for category 1 status if, as is currently the case, size were the only criterion and we set the limit—as these regulations seek to do—at 7 million monthly users.
As the hon. Lady explained, over a considerable period of time, with a considerable amount of energy expended, Parliament decided that it was appropriate to include in the category 1 designation not just the largest services, but those services where a great deal of harm may be concentrated but the services are, in themselves, much smaller. Those services being smaller might happen organically, or it might, of course, happen because that harmful content seeks refuge from the regulation applied to the larger services by migrating to smaller ones.
There is good reason, therefore, to think that having smaller services potentially included in category 1 designation is a tool that Ofcom, and indeed the Government, will want to have available.
Those platforms, such as ones that specialise in suicide or self-harm, might well be the kind of platforms that we find ourselves increasingly concerned about and that the Government will increasingly be asked to do something about. I have to say to the Minister that it is not sensible to remove from the regulator’s hand the tools that it might want to use to do what the Government will undoubtedly ask it to do—the Government themselves will come under pressure to do something about that.
Again, as has been explained, what or who we include in that category 1 designation really matters, because of the additional powers and constraints that Ofcom will have available to it in relation to category 1 services. Those powers include the only powers available under this Act to protect adults from anything that is not illegal content—including vulnerable adults, by the way. There will come a time when the Government, I suspect, will wish they had more to deal with problems of that nature. As the hon. Member for Aberdeen North explained, the Act gives those powers, so it is bizarre in the extreme that the Government should choose voluntarily not to use them. It is bizarre, also, because the Labour party in opposition was clear in its support for the change.
The hon. Member for Newton Abbot quoted one example of something that the shadow spokesman at the time, the hon. Member for Pontypridd (Alex Davies-Jones), who now has Government responsibilities elsewhere, said during the passage of the Bill. I will quote another example to the Committee. She said:
“Categorisation of services based on size rather than risk of harm will mean that the Bill will fail to address some of the most extreme harms on the internet.”––[Official Report, Online Safety Public Bill Committee, 12 July 2022; c. 168.]
I think she was absolutely right then, and still is now. The draft regulations, I am afraid, do exactly what she said the Act should not do: they limit the criterion for the designation of category 1, and these additional powers, to size only.
We should think about the Government’s rationale for what they are doing. In December, the Secretary of State made a written statement to set out the reasoning for the measures that the Government have put before the Committee:
“In making these Regulations, I have considered factors as required by the Act. Amendments made during the passage of the Act, changed the consideration for Category 1 from the ‘level of risk of harm to adults from priority content that is harmful to adults disseminated by means of the service’ to ‘how easily, quickly and widely regulated user-generated content is disseminated by means of the service.’ This was a significant change”.—[Official Report, 16 December 2024; Vol. 759, c. 12WS.]
In other words, I think the Secretary of State was arguing that he has no option but to limit to a scale criterion-only designation for category 1, because that is how the Act has changed. That is fundamentally mistaken, if I may say so to the Minister. I do not expect her to have all this before her—I know her officials will take careful note—but the Act states at paragraph 1(5) of schedule 11:
“In making regulations under sub-paragraph (1)”—
the draft regulations we are discussing—
“the Secretary of State must take into account the likely impact of the number of users of the user-to-user part of the service, and its functionalities, on”—
and this is the part the Secretary of State drew out in his statement—
“how easily, quickly and widely regulated user-generated content is disseminated by means of the service.”
Without doubt, therefore, the Secretary of State has to take the number of users into account, but it is not the only criterion. There is a fundamental misunderstanding—at least, I hope that is what it is—in the ministerial statement, which suggests that that is the only criterion to be considered. It is not, and I think it is a mistake to ignore the others, which, again, have already been drawn out in the debate.
To be clear, these draft regulations mean that no smaller platform—under the level of 7 million monthly users—can ever be considered as a category 1 platform, unless or until the Government and Ofcom change their approach to the categorisation process. I repeat the point, and I make no apologies for doing so, that that is specifically contrary to what Parliament had intended in the passage of the Act.
The hon. Member for Aberdeen North and I are not the only ones making this observation. There are multiple organisations with which we, and then the Labour party, worked closely to get this Act passed for the protection of those the Labour party is now charged with worrying about. Those include organisations such as the Samaritans, Mind, the Centre for Countering Digital Hate, the Antisemitism Policy Trust and the Molly Rose Foundation, all of which care deeply about the effectiveness of this legislation, as I am sure we all do.
It is true, and the Minister may make this point, that Ofcom’s advice suggested the course of action the Government are now taking. However, “advice” is the key word. The Government were not obliged to take it, and in this instance I think they would have been wiser to resist it. Ofcom will not have all the tools it could have to deal with smaller services where greater harm may be concentrated, despite what the Act allows. I have to say that tying one hand behind Ofcom’s back is not sensible, even when Ofcom is itself asking us to do so. That is especially true when the Government place such heavy reliance on the Online Safety Act—as they are entitled to—to deal with the multiple online harms that arise.
I have lost count, as I suspect others in this Committee have, of the number of times that Ministers have referred to the Online Safety Act when challenged about harmful materials or behaviours online and said, “This is the answer. This Act gives us powers to act against services that do not do what they should.” It is not a perfect piece of legislation, and none of us involved in its generation would claim that it was, but they are right that it does give Government and regulators the powers to act. However, that does us no good at all if, in subsequent pieces of secondary legislation, the Government choose not to use those tools or put them beyond Ofcom’s reach. That is what the regulations do.
I have to say to the Minister that government is hard enough. She should not throw away the tools she needs to do the job that she has promised everyone that she will do. This is a mistake, and I hope that even at this late stage the Minister will find a way to avoid making it.
It is a pleasure to serve under your chairship, Sir Christopher. I will not repeat many of the points that have already been made, but I want to express my concern that these changes do not bring into scope small but potentially dangerous platforms, including those that bring about specific, targeted abuse and harms, as well as those that disguise themselves as support for preventing self-harm, suicide and eating disorders, but actually promote that ideology and cause further harm.
As the right hon. and learned Member for Kenilworth and Southam (Sir Jeremy Wright) just said, the Government have missed an opportunity to correct what they continue to say this statutory instrument addresses. I also echo the comments made by the hon. Member for Aberdeen North (Kirsty Blackman) on what I see as a blunt tool, which is the setting of the limits at 7 million and 34 million. Reading the regulations and the explanatory documents shows that the figures are worked out using a six-month average, so there is absolutely nothing to prevent one of these platforms, should they want to flout the rules and get below the threshold, from simply delisting or deregistering a number of their users over that six-month rolling period, which would see them fall out of scope of the regulations.
I was not previously in this place, but having listened to other Members speak about earlier versions of the legislation that became the Online Safety Act, which considered the level of risk rather than relying on user numbers alone, I encourage the Government, as previous speakers have done, to go back and look at what the legislation is meant to achieve: protecting our online users.
I thank all Members for their very powerful contributions to the debate. This instrument will bring us one step closer to a safer online world for our citizens. It is clearer than ever that it is desperately needed: transparency, accountability and user empowerment matter now more than ever.
The Opposition spokesperson, the hon. Member for Huntingdon, asked whether we agree on the need for companies not to wait for the duties in the Act to be implemented, but to ensure that safety is baked in from the start. I absolutely agree, and he will be aware that the Secretary of State has made that point on many occasions. He also raised the issue of proportionality. I confirm that many of the duties on categorised services are subject to the principle of proportionality, which requires Ofcom to consider measures that are technically feasible for providers of a certain size or capacity, and in some cases duties are based on the assessment of the risk of harm presented by the service.
For example, in determining what is proportionate for the user empowerment duties on content for category 1 services, the findings of the most recent user empowerment assessments are relevant. They include the incidence of relevant content on the service in addition to the size and capacity of the provider. Where a code of practice is relevant to a duty, Ofcom must have regard to the principles on proportionality, and what is proportionate for one kind of service might not be for another.
The hon. Member for Huntingdon is absolutely right that the pornography review has been completed. The Government are considering it at the moment and will publish the final report in due course.
In response to the hon. Members for Newton Abbot and for Aberdeen North (Kirsty Blackman) and to the right hon. and learned Member for Kenilworth and Southam (Sir Jeremy Wright), when the Online Safety Act was introduced, category 1 thresholds were due to be assessed based on the level of risk and harm for adults—as the Members read out very clearly. That was removed during the passage of the Bill by the previous Government.
As things stand, although Baroness Morgan’s successful amendment made it possible for threshold conditions to be based solely on functionalities, it did not change the basis of Ofcom’s research, which for category 1 is easy, quick and wide dissemination of content. The Secretary of State had to consider that. I will repeat that for all Members to hear again: the Secretary of State has to act within the powers given to him in schedule 11 when setting out the threshold and conditions. The powers do not allow for thresholds to be determined by another body, as per the amendment.
Although the hon. Member for Aberdeen North very powerfully read out the Act, it very clearly sets out that it does not actually do what she is asking it to do. We absolutely agree that small but risky sites need to be covered, but as it stands, the Secretary of State does not have the powers to include them.
Sorry, I have lots of points to cover. If I have not covered the hon. Member’s concerns in my response, she is more than welcome to intervene later.
These small but risky services are of significant concern to the Government, and they will still have to protect against illegal content and, where relevant, content that is harmful to children. Ofcom also has a dedicated taskforce to go after them. I hope that answers the hon. Member’s question.
The hon. Member for Newton Abbot also raised the review of Ofcom’s approach. The regulator has already trialled an approach of targeting small but risky services through its regulation of video-sharing platforms. Indeed, a number of those services improved their policies and content moderation in response. All the adult platforms under the VSP regime, large and small, have implemented age verification through this route to ensure that under-18s cannot access pornography on their services. In instances where services fail to make necessary changes, they will face formal enforcement action from Ofcom. Ofcom has a proven track record and the Government have every faith in its ability to take action against non-compliant services.
The hon. Member also raised the issue of how Ofcom will take enforcement action against small but risky services. Ofcom will have robust enforcement powers available to use against companies that fail to fulfil their duties, and it will be able to issue enforcement decisions. Action can include fines of up to £18 million or 10% of qualifying worldwide revenue in the relevant year, whichever is higher, and Ofcom can direct companies to take specific steps to comply with its regulation.
The Minister raised the issue of age verification, which is good. However, she did not say how “harmful to adults”, “harmful to vulnerable minorities” and “harmful to women” are categorised. Children are protected in this case, but those other groups are not.
Also, in response to the answer that the Minister just gave, the difficulty is not the Ofcom powers; it is the obligation on the provider. If we have not put a provider into category 1, it does not have the same level of obligation as category 1 companies do. No matter what powers Ofcom has and no matter what fines it imposes, it cannot get such companies to give those commitments to a category 1 level if they are not in that category.
Removing the section is not giving Ofcom the tools it needs. The Minister was absolutely right earlier when she said that there is much more to do. Why drop this ability to put other sites in category 1?
I think the hon. Member missed it when I said that, as things stand, the Secretary of State does not have the power to include them. It is not about removing them; it is about not having the powers to include them, as things stand, at the moment.
I will conclude. In extreme cases, Ofcom can, with the agreement of the courts, use business disruption measures, which are court orders that require third parties to withdraw non-compliant services, or to restrict or block access to non-compliant services, in the UK.
The hon. Member for Newton Abbot also asked whether the Act will be reviewed to address the gaps in it. As I said at the start, our immediate focus is getting the Act implemented quickly and effectively. It was designed to tackle illegal content and protect children, and we want those protections in place as soon as possible. It is right that the Government continually assess the ability of the framework to keep us safe, especially given that technology develops so quickly. We will look, of course, at how effective these protections are and build on the Online Safety Act, based on evidence. However, our message to social media companies remains clear: there is no need to wait. As the Opposition spokesperson said, those companies can and should take immediate action to protect their users.
On the use of business disruption measures, the Act provides Ofcom with powers to apply to court for such measures, as I have said, including where there is continued failure and non-compliance. We expect Ofcom to use all available enforcement mechanisms.
The hon. Member for Huntingdon asked how Parliament can scrutinise the delivery of the legislation. Ongoing parliamentary scrutiny is absolutely crucial; indeed, the Online Safety Act requires Ofcom codes to be laid before Parliament for scrutiny. The Science, Innovation and Technology Committee and the Communications and Digital Committee of the House of Lords will play a vital role in scrutinising the regime. Ofcom’s codes of practice for illegal content duties were laid before Parliament in December. Subject to their passing without objection, we expect them to be in force by spring 2025, and the child safety codes are expected to be laid before Parliament in April, in order to be in effect by summer 2025. Under section 178 of the Act, the Secretary of State is required to review the effectiveness of its regulatory framework between two and five years after key provisions of the Act come into force. That will be published as a report and laid before Parliament.
Letters were sent in advance of laying these regulations to the House of Lords Communications and Digital Committee and the House of Commons Science, Innovation and Technology Committee. Hon. Members have asked about user numbers. Ofcom recommended thresholds of 34 million or 7 million users for category 1, and services must exceed those user number thresholds. The Government are not in a position to confirm who will be categorised. That will be the statutory role of Ofcom once the regulations have passed.
I am going to make some progress. On livestreaming, Ofcom considered that functionality, but concluded that the key functionalities that spread content easily, quickly and widely are content recommender systems and forwarding or resharing user-generated content.
Services accessed by children must still be safe by design, regardless of whether they are categorised. Small but risky services will also still be required to comply with illegal content duties. The hon. Member for Aberdeen North should be well aware of that as she raised concerns on that issue.
On child safety, there were questions about how online safety protects children from harmful content. The Act requires all services in scope to proactively remove and prevent users from being exposed to priority illegal content, such as illegal suicide content and child sexual exploitation and abuse material. That is already within the remit.
In addition, companies that are likely to be accessed by children will need to take steps to protect children from harmful content and behaviour on their services, including content that is legal but none the less presents a risk of harm to children. The Act designates content that promotes suicide or self-harm as in the category of primary priority content that is harmful to children. Parents and children will also be able to report pro-suicide or pro-self-harm content to the platform and the reporting mechanism will need to be easy to navigate for child users. On 8 May, Ofcom published its draft children’s safety codes of conduct, in which it proposed measures that companies should employ to protect children from suicide and self-harm content, as well as other content.
Finally, on why category 1 is not based on risk, such as the risk of hate speech, when the Act was introduced, category 1 thresholds were due to be assessed on the level of risk of harm to adults from priority content disseminated by means of that service. As I said earlier, that was removed during the Act’s passage by the then Government and replaced with consideration of the likely functionalities and how easily, quickly and widely user-generated content is disseminated, which is a significant change. Although the Government understand that that approach has its critics, who argue that the risk of harm is the most significant factor, that is the position under the Act.
The Minister is making the case that the Secretary of State’s hands are tied by the Act —that it requires stuff in relation to the number of users. Can she tell us in which part of the Act it says that, because it does not say that? If she can tell us where it is in the Act, I am quite willing to sit down and shut up about this point, but it is not in the Act.
The legislation allows the Secretary of State to deviate from Ofcom’s advice and to publish a statement explaining why. However, the core consideration for category 1 under schedule 11 is—I repeat for the third time—how easily, quickly and widely regulated user-generated content is disseminated by means of a service. As a result, for category 1, Ofcom concluded that the content is disseminated with increased breadth as the number of users increases.
The decision to proceed with the threshold combination recommended by Ofcom, rather than discounting user-number thresholds, reflects that any threshold condition created by the Government should consider the factors as set out in the Act, including easy, quick and wide dissemination for category 1, and the evidence base. That is what the Act says. As a result, the Government decided to not proceed with an approach that deviated from Ofcom’s recommendation, particularly considering the risk of unintended consequences.
I am more than happy to write to the hon. Member for Aberdeen North with the full details. I understand that she feels very passionately about this point, but the Act is the Act. Although I am grateful for her contribution, I have to follow what the Act says, based on the legal advice that I get.
I am extremely grateful to the Minister for giving way, and I have sympathy with her position, especially in relation to legal advice, having both received it and given it. I suggest that the Minister is talking about two different things, and they need to be separated. The first is the question of whether legal but harmful content was removed from the Bill, which it undoubtedly was. Measures in relation to content that is neither unlawful nor harmful to children were largely removed from the Bill—the Minister is right to say that.
What we are discussing, however, are the tools available to Ofcom to deal with those platforms that it is still concerned about in relation to the remaining content within the ambit of the Bill. The worry of those of us who have spoken in the debate is that the Government are about to remove one of the tools that Ofcom would have had to deal with smaller, high-harm platforms when the harm in question remains in ambit of the Bill—not that which was taken out during its passage. Would the Minister accept that?
I will again set out what the Secretary of State’s powers are. The Government have considered the suggestion of Baroness Morgan and others to categorise small but risky services based on a coroner or Ofcom linking a service to a death. The Government were grateful for that suggestion. However, there were issues with that approach, including with what the Act allows the Secretary of State to consider when setting the categories. The Secretary of State is not allowed to consider anything other than the factors set out in the Act, which says that it has to include easy, quick and wide dissemination for category 1, and has to be evidence based.
I hope that the hon. Member for Aberdeen North will accept that I will write to her in great detail, and include a letter from Government lawyers setting out what I am saying in relation to the powers of the Secretary of State in setting the categories. I hope that she will be satisfied with that. I want to make it clear that we are not taking anything out; the Secretary of State is proceeding with the powers that he has been given.
I am going to proceed. I think I have covered the main points raised by hon. Members. I hope that the Committee agrees with me on the importance of enacting these thresholds and implementing the Online Safety Act as swiftly as possible. I made it clear that Ofcom has set up a taskforce that will review the small but risky sites, in response to the Secretary of State’s letter to it in September.
It is an honour to serve under your chairmanship, Sir Christopher. My right hon. and learned Friend the Member for Kenilworth and Southam was Attorney General for four years. It is just possible that his interpretation of the Act is correct, and that of the Minister’s officials is incorrect. I do not have detailed knowledge of this legislation, but I wonder whether the Minister and her Whip want to take some further time and pause before putting these regulations to a vote—that would be perfectly acceptable to us. We will not oppose the regulations, but we are conscious that if the Minister wants more time, she is welcome to take it.
Although I thank the hon. Member for his contribution, I am sure that he will appreciate that this issue has been looked into and discussed in debates and with officials. With that, I commend these regulations to the Committee.
The debate can continue until seven minutes past 11 o’clock. For the benefit of new Members, if anybody wishes to speak again, it is possible to speak for a second time in a General Committee.
Thank you, Sir Christopher—I appreciate that prod. I did look at Standing Orders this morning, but could not find that bit, so that is incredibly helpful.
On what the Minister said about schedule 11 and the notes that she has been passed from her team on that point, I appreciate her commitment to share the Government’s legal advice. That will be incredibly helpful; it would have been helpful to have it in advance of this Committee.
In schedule 11, it says:
“In making regulations under sub-paragraph (1), the Secretary of State must take into account the likely impact of the number of users of the user-to-user part of the service, and its functionalities, on how easily, quickly and widely regulated user-generated content is disseminated by means of the service.”
Perhaps I cannot read English, or perhaps the Minister, her legal advisers and the team at DSIT read it in a different way from me, but the Secretary of State having to take something into account and the Secretary of State being bound by something are two different things—they are not the same. It does not say that the Secretary of State must regulate only on the specific number of users.
In fact, schedule 11 says earlier that the Secretary of State
“must make regulations specifying conditions…for the user-to-user part of regulated user-to-user services relating to each of the following”,
which are the
“number of users…functionalities of that part of the service, and…any other characteristics of that part of the service or factors”.
The Secretary of State must therefore make regulations in relation to any other characteristics of that part of the service or factors
“relating to that part of the service that the Secretary of State considers relevant.”
He must do that, whereas the number of users is only something he must take into account. The Government, however, have decided that what the Secretary of State must take into account matters more than what he must actually do. They have decided that despite Parliament being pretty clear in the language it has used.
I am not terribly happy with the Online Safety Act. It is a lot better than the situation we have currently, but it is far from perfect. As the Minister said, I argued in favour of keeping the stuff about legal but harmful content for adults. I argued against the then Government’s position on that, but the Act is the Act that we have.
The Minister’s point does not make sense. The Secretary of State has to take into account the number of users and how quickly things are disseminated, but he must make regulations about functionalities or factors that he considers relevant. Therefore, it seems that he does not consider suicide forums and livestreaming to be relevant; if he did, he would surely be bound by the “must” and would have to make regulations about them. It is frustrating that the Act does not do what it is supposed to do and does not protect young people from livestreaming. The Minister said that it protects people from seeing that illegal content, but it does not prevent them from creating it.
The Government could make regulations so that every platform that has a livestreaming functionality, or even every platform that has child users on it—there is a lot in the Act about the proportion of children who use a service—is automatically included in category 1, on the basis that the Government consider such platforms to be high risk.
It would not be right for either of us to ask the Minister to disclose legal advice—that clearly would not be appropriate—but I am grateful for the Minister’s offer to share a slightly more expansive description of why the Government have come to the conclusion that they have.
On the hon. Lady’s point about what the Act actually says, we have both quoted paragraph 1(5) of schedule 11; the question is whether the language in that paragraph, which has found its way into the ministerial statement, is the be-all and end-all of the Minister’s conclusions. We both think it is not. If it is the case, as I think the Minister is arguing, that the ability to disseminate content “easily, quickly and widely” is essentially a synonym for the scale of the service and the number of its users, what does the hon. Lady think of the amendment that Baroness Morgan made in the other place to paragraph 1(4), which says that when the regulations we are considering specify
“the way or ways in which the relevant conditions are met”,
for category 1 threshold conditions
“at least one specified condition about number of users or functionality must be met”?
The crucial word that was added is “or”. If the number of users were required to establish what the hon. Lady has described, the word “or” would be inappropriate.
I absolutely agree, and that is a helpful clarification.
If the Government have decided that it is too difficult to regulate high-risk platforms as category 1, and that they do not matter enough because they do not have enough of an impact, they should stand up and tell us that. Rather than saying that their hands have been tied by the Act—they manifestly have not—they need to take ownership of their actions. If they have decided that such platforms are not important enough or that they cannot be bothered having a fight with Ofcom about that, they should be honest and say, “This is the position we have decided to take.” Instead, they are standing up and saying, “Our hands have been tied,” but that is just not correct: their hands have not been tied by the Act.
I appreciate that the Minister will get in touch with me about the legal advice, but it will be too late. This statutory instrument will have been through the process by that time, and people will have been put at risk as a result of the Government’s failure. They have the power to take action in relation to functionalities and factors, and in relation to suicide forums, livestreaming and the creation of child sexual abuse material, and they are choosing not to.
If the Government have decided that it is too difficult to do that, that those platforms are not risky enough and that not enough people are being harmed by them, they need to hold their hands up and say, “We’ve decided that this is the position we are going to take.” They must not hide behind the legislation, which does not say what they are telling us it says. They should just be honest about the fact that they have decided that they cannot be bothered to take action. They cannot be bothered to have a fight with Ofcom because it is not important enough. Hiding behind the legislation is incredibly cowardly—it does not say that.
I do not have the benefit of having gone through the Act in its entirety, so I appreciate the input of hon. Members on this subject. It comes down to that one word: “or”. Amendment 245 entailed moving from a test of size “and” functionality to a test of size “or” functionality. From what I have heard the Minister say, that is not what is causing the problem; it should give the Government the opportunity to categorise on functionality alone. In setting these categorisations on size alone, they are ignoring that.
The Minister also mentioned that the Act did not give the Secretary of State the power to allow somebody else to put organisations or sites into these categories, but that is not what is being asked for. What is being asked for is a recommendation from Ofcom to the Secretary of State to bring those smaller types of site into the category. What this change does is remove the power to stop those small sites that promote misogyny, racist hatred and other very harmful content, of which we saw examples in the summer.
The Science, Innovation and Technology Committee is looking into those events at the moment, as I know because I sit on that Committee. However, those powers are being thrown away and an opportunity is being missed, because the powers for setting the thresholds have been misinterpreted. I beg the Minister to take a moment and look again, because the Government are getting this wrong.
The comments made by the hon. Member for Aberdeen North are absolutely outrageous, but I would not expect anything less from the SNP. I have made it very clear that I will share legal advice with Members. I also made it clear that the small but risky sites that Members have been talking about were raised by the Secretary of State in a letter to Ofcom in September, and Ofcom has set up a taskforce to look at those services.
The key thing for the Government is to get on with implementing the Online Safety Act. I know that the hon. Lady would like us to spend lots of time delaying, but we are interested in getting on with implementing the Act so that we can keep children safe online. With that, I commend the regulations to the House.
For the benefit of people watching, only Committee members can cast votes in a Division.
Question put.