Draft Online Safety Act 2023 (Category 1, Category 2A and Category 2B Threshold Conditions) Regulations 2025 Debate
Robin Swann (Ulster Unionist Party - South Antrim)
It is a pleasure to serve under your chairship, Sir Christopher. I will not repeat many of the points that have already been made, but I want to express my concern that these changes do not bring into scope small but potentially dangerous platforms, including those that enable specific, targeted abuse and harms, as well as those that disguise themselves as offering support for preventing self-harm, suicide and eating disorders but actually promote that ideology and cause further harm.
As the right hon. and learned Member for Kenilworth and Southam (Sir Jeremy Wright) just said, the Government have missed an opportunity to correct the very issue they continue to say this statutory instrument addresses. I also echo the comments made by the hon. Member for Aberdeen North (Kirsty Blackman) about what I see as a blunt tool: the setting of the thresholds at 7 million and 34 million. The regulations and the explanatory documents show that those figures are worked out using a six-month mean average, so there is absolutely nothing to prevent one of these platforms, should it want to flout the rules or get below the threshold, from simply delisting or deregistering a number of its users over that six-month rolling period, which would take it out of scope of the regulations.
I was not previously in this place, but having listened to other Members speak about previous proposals under the Online Safety Act that considered the level of risk rather than using user numbers as a blanket term, I encourage the Government, as previous speakers have done, to go back and look at what the legislation is meant to achieve: protecting our online users.
I think the hon. Member missed it when I said that, as things stand, the Secretary of State does not have the power to include them. It is not about removing them; it is about not having the power to include them as things currently stand.
I will conclude. In extreme cases, Ofcom, with the agreement of the courts, can use business disruption measures: court orders that require third parties to withdraw their services from, or to restrict or block access to, non-compliant services in the UK.
The hon. Member for Newton Abbot also asked whether the Act will be reviewed to address the gaps in it. As I said at the start, our immediate focus is getting the Act implemented quickly and effectively. It was designed to tackle illegal content and protect children, and we want those protections in place as soon as possible. It is right that the Government continually assess the ability of the framework to keep us safe, especially given that technology develops so quickly. We will look, of course, at how effective these protections are and build on the Online Safety Act, based on evidence. However, our message to social media companies remains clear: there is no need to wait. As the Opposition spokesperson said, those companies can and should take immediate action to protect their users.
On the use of business disruption measures, the Act provides Ofcom with powers to apply to court for such measures, as I have said, including where there is continued failure and non-compliance. We expect Ofcom to use all available enforcement mechanisms.
The hon. Member for Huntingdon asked how Parliament can scrutinise the delivery of the legislation. Ongoing parliamentary scrutiny is absolutely crucial; indeed, the Online Safety Act requires Ofcom codes to be laid before Parliament for scrutiny. The Science, Innovation and Technology Committee and the Communications and Digital Committee of the House of Lords will play a vital role in scrutinising the regime. Ofcom’s codes of practice for illegal content duties were laid before Parliament in December. Subject to their passing without objection, we expect them to be in force by spring 2025, and the child safety codes are expected to be laid before Parliament in April, in order to be in effect by summer 2025. Under section 178 of the Act, the Secretary of State is required to review the effectiveness of its regulatory framework between two and five years after key provisions of the Act come into force. That will be published as a report and laid before Parliament.
Letters were sent to the House of Lords Communications and Digital Committee and the House of Commons Science, Innovation and Technology Committee in advance of laying these regulations. Hon. Members have asked about user numbers. Ofcom recommended the thresholds of 34 million or 7 million users for category 1, and services must exceed those user number thresholds. The Government are not in a position to confirm which services will be categorised; that will be the statutory role of Ofcom once the regulations have passed.
I am going to make some progress. On livestreaming, Ofcom considered that functionality, but concluded that the key functionalities that spread content easily, quickly and widely are content recommender systems and forwarding or resharing user-generated content.
Services accessed by children must still be safe by design, regardless of whether they are categorised. Small but risky services will also still be required to comply with illegal content duties. The hon. Member for Aberdeen North should be well aware of that as she raised concerns on that issue.
On child safety, there were questions about how the online safety regime protects children from harmful content. The Act requires all services in scope to proactively remove priority illegal content, such as illegal suicide content and child sexual exploitation and abuse material, and to prevent users from being exposed to it. That is already within the Act's remit.
In addition, companies that are likely to be accessed by children will need to take steps to protect children from harmful content and behaviour on their services, including content that is legal but none the less presents a risk of harm to children. The Act designates content that promotes suicide or self-harm as in the category of primary priority content that is harmful to children. Parents and children will also be able to report pro-suicide or pro-self-harm content to the platform, and the reporting mechanism will need to be easy for child users to navigate. On 8 May, Ofcom published its draft children's safety codes of practice, in which it proposed measures that companies should employ to protect children from suicide and self-harm content, as well as other harmful content.
Finally, on why category 1 is not based on risk, such as the risk of hate speech: when the Act was introduced, category 1 thresholds were due to be assessed on the level of risk of harm to adults from priority content disseminated by means of that service. As I said earlier, that was removed during the Act's passage by the then Government and replaced with consideration of the likely functionalities and how easily, quickly and widely user-generated content is disseminated, which is a significant change. Although the Government understand that that approach has its critics, who argue that the risk of harm is the most significant factor, that is the position under the Act.