Online Safety Act: Implementation Debate
Westminster Hall
I beg to move,
That this House has considered the implementation of the Online Safety Act 2023.
It is a great pleasure to serve under your chairmanship, Mr Stringer, and I am grateful for the opportunity to open the debate. Let me start with some positives. The Online Safety Act 2023 is certainly not the last word on the subject, but it is, in my view, a big step forward in online safety, providing a variety of tools that allow the regulator to make the online world safer, particularly for children. I remain of the view that Ofcom is the right regulator for the task, not least because, as an existing regulator, it can start its work sooner, and because of the overlap with its existing remit—for example, on video-sharing platforms. I also have great regard for the diligence and expertise of many at Ofcom who are now charged with these new responsibilities. However, I am concerned that Ofcom appears unwilling to use all the tools that the Act gives it to make the online world a safer place, and I am concerned that the Government appear unwilling to press Ofcom to be more ambitious. I want to explain why I am concerned, why I think it matters and what can be done about it.
Let me start with what I am worried about. There was a great deal of consensus about the passing of the Online Safety Act, and all of us involved in its development recognised both the urgent need to act on online harms and the enormity of the task. That means that the eventual version of the Act does not cover everything that is bad online and, of necessity, sets up a framework within which the regulator is required to fill in the gaps and has considerable latitude in doing so.
The architecture of that framework is important. Because we recognised that emerging harms would be more clearly and quickly seen by online services themselves than by legislators or regulators, in broad terms the Act requires online services to properly assess the risk of harms arising on their service and then to mitigate those risks. My concern is that Ofcom has taken an unnecessarily restrictive view of the harms it is asking services to assess and act on and, indeed, a view that is inconsistent with the terms of the Act. Specifically, my conversations with Ofcom suggest to me that it believes the Act only gives it power to act on harms that arise from the viewing of individual pieces of bad content. I do not agree, and let me explain why.
With limited exceptions, if an online service has not identified a risk in its risk assessment, it does not have to take action to reduce or eliminate that risk, so which risks are identified in the risk assessment really matters. That is why the Act sets out how a service should go about its risk assessment and what it should look out for. For services that may be accessed by children, the relevant risk assessment duties are set out in section 11 of the Act. Section 11(6) lists the matters that should be taken into account in a children’s risk assessment. Some of those undoubtedly refer to content, but some do not. Section 11(6)(e), for example, refers to
“the extent to which the design of the service, in particular its functionalities”
affects the risk of adults searching for and contacting children online. That is not a risk related to individual bits of content.
It is worth looking at section 11(6)(f), which, if colleagues will indulge me, I want to quote in full. It says that a risk assessment should include
“the different ways in which the service is used, including functionalities or other features of the service that affect how much children use the service (for example a feature that enables content to play automatically), and the impact of such use on the level of risk of harm that might be suffered by children”.
I think that that paragraph is talking about harms well beyond individual pieces of bad content. It is talking about damaging behaviours deliberately instigated by the design and operation of the online service, and the way its algorithms are designed to make us interact with it. That is a problem not just with excessive screen time, on which Ofcom has been conspicuously reluctant to engage, but with the issue of children being led from innocent material to darker and darker corners of the internet. We know that that is what happened to several of the young people whose suicides have been connected to their online activity. Algorithms designed to keep the user on the service for longer make that risk greater, and Ofcom seems reluctant to act on them despite the Act giving it powers to do so. We can see that from the draft code of practice on harm to children, which Ofcom published at the end of last year.
This debate is timely because the final version of the code of practice is due in the next couple of months. If Ofcom is to change course and broaden its characterisation of the risks that online services must act on—as I believe it should—now is the time. Many of the children’s welfare organisations that we all worked with so closely to deliver the Act in the first place are saying the same.
If Ofcom’s view of the harms to children on which services should act falls short of what the Act covers, why does it matter? Again, the answer lies in the architecture of the Act. The codes of practice that Ofcom drafts set out actions that services could take to meet their online safety duties. If services do the things that the codes set out, they are taken to have met the relevant safety duty and are safe from regulatory penalty. If in the code of practice Ofcom asks services to act only on content harms, it is highly likely that that is all services will do, because it is compliance with the code that provides regulatory immunity. If it is not in the code, services probably will not do it. Codes that ignore some of the Act’s provisions to improve children’s safety mean that the online services children use will ignore those provisions, too. We should all be worried about that.
That brings me to the second area where I believe that Ofcom has misinterpreted the Act. Throughout the passage of the Act, Parliament accepted that the demands that we make of online services to improve the safety of their users would have to be reasonable, not least to balance the risks of online activity with its benefits. In later iterations of the legislation, that balance is represented by the concept of proportionality in the measures that the regulator could require services to take. Again, Ofcom has been given much latitude to interpret proportionality. I am afraid that I do not believe it has done so consistently with Parliament’s intention. Ofcom’s view appears to be that for a measure to be proportionate there must be a substantial amount of evidence to demonstrate its effectiveness. That is not my reading of it.
Section 12 of the Act sets out the obligation on services to take proportionate measures to mitigate and manage risks to children. Section 13(1) offers more on what proportionate means in that context. It states:
“In determining what is proportionate for the purposes of section 12, the following factors, in particular, are relevant—
(a) all the findings of the most recent children’s risk assessment (including as to levels of risk and as to nature, and severity, of potential harm to children), and
(b) the size and capacity of the provider of a service.”
In other words, a measure that would be ruinously expensive or disruptive, especially for a smaller service, and which would deliver only a marginal safety benefit, should not be mandated, but a measure that brings a considerable safety improvement in responding to an identified risk, even if expensive, might well be justified.
Similarly, when it comes to measures recommended in a code of practice, schedule 4(2)(b) states that those measures must be
“sufficiently clear, and at a sufficiently detailed level, that providers understand what those measures entail in practice”,
and schedule 4(2)(c) states that recommended measures must be “proportionate and technically feasible”, based on the size and capacity of the service. We should not ask anything of services that they cannot do, and it should be clear what they have to do to comply. That is what the Act says proportionality means. I cannot find in the Act any support for the idea that, for a measure to be proportionate and therefore recommended in a code of practice, we must already know that it will work before we try it. Why does that disagreement on interpretation matter? Because we should want online platforms and services to be innovative in how they fulfil their safety objectives, especially in the fast-moving landscape of online harms. I fear that Ofcom’s interpretation of proportionality, as requiring evidence of effectiveness, will achieve the opposite.
There will only be an evidence base on effectiveness for a measure that is already being taken somewhere, and that has been taken for long enough to generate that evidence. If we limit recommended actions to those with evidence of success, we effectively set the bar for safety measures at current best practice. Given the safe harbour offered by measures recommended in codes of practice, that could mean services being deterred from innovating, because they get the protection only by doing things that are already being done.
I thank the right hon. and learned Gentleman for securing this incredibly important debate. He has described in his very good speech how inconsistency can occur across different platforms and providers. As a parent of a 14-year-old daughter who uses multiple apps and platforms, I want confidence in how they are regulated and that the security measures that keep her safe are consistent across all the platforms she might access. My responsibility as a parent is to match that. The right hon. and learned Gentleman rightly highlights how Ofcom’s interpretation of the Act has led to inconsistencies and to potential grey areas for bad-faith actors to exploit, which will ultimately damage our children.
The hon. Gentleman makes an interesting point. We have to balance two things, though. We want consistency, as he suggests, but we also want platforms to respond to the circumstances of their own service, and to push the boundaries of what they can achieve by way of safety measures. As I said, they are in a better position to do so than legislators or regulators are to instruct them. The Act was always intended to put the onus on the platforms to take responsibility for their own safety measures. Given the variety of actors and different services in this space, we are probably not going to get a uniform approach, nor should we want one. The hon. Gentleman is right to say that the regulator needs to ensure that its expectations of everyone are high. There is a further risk: not just that we fix the bar at the status quo, but that, because platforms have had the opportunity to innovate, some might go backwards on new safety measures that they are already implementing, because those measures are not recommended or encouraged by Ofcom’s code of practice. That cannot be what we want to happen.
Those are two areas where I believe Ofcom’s interpretation of the Act is wrong and retreats in significant ways from Parliament’s intention to give the regulator power to act to enhance children’s online safety. I also believe it matters that it is wrong. The next question is what should be done about it. I accept that sometimes, as legislators, we have no choice but to pass framework legislation, with much of the detail on implementation to come later. That may be because the subject is incredibly complex, or because it is fast-moving. In the case of online safety, it is both.
Framework legislation raises serious questions about how Parliament ensures its intentions are followed through in all the subsequent work on implementation. What do we do if we have empowered regulators to act but their actions do not fulfil the expectations that we set out in legislation?