Public Bill Committees

Good morning, ladies and gentlemen. We are sitting in public and the proceedings are being broadcast. I have a few preliminary announcements. Hansard would be grateful if Members would provide speaking notes as and when they finish with them. Please also ensure that all mobile phones and other electronic devices are switched off.
I had better apologise at the start for the temperature in the room. For reasons that none of us can understand, two of the windows were left wide open all night. However, the Minister’s private office has managed to sort that out, because that is what it is there for. Hopefully the room will warm up. If any hon. Member has a problem with the temperature, they will have to tell the Chair. If necessary, I will suspend, but we are made of tough stuff, so we will try to bat on if we can.
We will first consider the programme motion, which can be debated for up to half an hour. I call the Minister to move the motion standing in his name, which was discussed yesterday by the Programming Sub-Committee.
Ordered,
That—
(1) the Committee shall (in addition to its first meeting at 9.25 am on Tuesday 13 December) meet—
(a) at 2.00 pm on Tuesday 13 December;
(b) at 11.30 am and 2.00 pm on Thursday 15 December;
(2) the proceedings shall be taken in the following order: Clauses 11 to 14, 18 to 21, 30, 46, 55, 56 and 65; Schedule 8; Clauses 79 and 82; Schedule 11; Clauses 87, 90, 115, 155, 169 and 183; Schedule 17; Clauses 203, 206 and 207; new Clauses; new Schedules; remaining proceedings on the Bill;
(3) the proceedings shall (so far as not previously concluded) be brought to a conclusion at 5.00 pm on Thursday 15 December. —(Paul Scully.)
Resolved,
That, subject to the discretion of the Chair, any written evidence received by the Committee shall be reported to the House for publication.—(Paul Scully.)
We now begin line-by-line consideration of the Bill. Owing to the unusual nature of today’s proceedings on recommittal, which is exceptional, I need to make a few points.
Only the recommitted clauses and schedules, and amendments and new clauses relating to them, are in scope for consideration. The selection list, which has been circulated to Members and is available in the room, outlines which clauses and schedules those are. Any clause or schedule not on the list is not in scope for discussion. Basically, that means that we cannot have another Second Reading debate. Moreover, we cannot have a further debate on any issues that have been debated already on Report on the Floor of the House. As I say, this is unusual; in fact, I think it is probably a precedent—“Erskine May” will no doubt wish to take note.
The selection list also shows the selected amendments and how they have been grouped. Colleagues will by now be aware that we group amendments by subject for debate. They are not necessarily voted on at the time of the completion of the debate on that group, but as we reach their position in the Bill. Do not panic; we have expert advice to ensure that we do not miss anything—at least, I hope we have.
Finally, only the lead amendment is decided on at the end of the debate. If a Member wishes to move any other amendment in the group, please let the Chair know. Dame Angela or I will not necessarily select it for a Division, but we need to know if Members wish to press it to one. Otherwise, there will be no Division on the non-lead amendments.
Clause 11
Safety duties protecting children
I beg to move amendment 98, in clause 11, page 10, line 17, at end insert
“, and—
(c) mitigate the harm to children caused by habit-forming features of the service by consideration and analysis of how processes (including algorithmic serving of content, the display of other users’ approval of posts and notifications) contribute to development of habit-forming behaviour.”
This amendment requires services to take or use proportionate measures to mitigate the harm to children caused by habit-forming features of a service.
Thank you, Sir Roger, for chairing this recommitted Bill Committee. I will not say that it is nice to be back discussing the Bill again; we had all hoped to have made more progress by now. If you will indulge me for a second, I would like to thank the Clerks, who have been massively helpful in ensuring that this quick turnaround could happen and that we could table the amendments in a sensible form.
Amendment 98 arose from comments and evidence from the Royal College of Psychiatrists highlighting that a number of platforms, and particularly social media platforms such as TikTok and Facebook, generally encourage habit-forming behaviour or have algorithms that encourage it. Such companies are there to make money—that is what companies do—so they want people to linger on their sites and to spend as much time there as possible.
I do not know how many hon. Members have spent time on TikTok, but if they do, and they enjoy some of the cat videos, for instance, the algorithm will know and will show them more videos of cats. They will sit there and think, “Gosh, where did the last half-hour go? I have been watching any number of 20-second videos about cats, because they constantly come up.” Social media sites work by encouraging people to linger on the site and to spend the time dawdling and looking at the advertisements, which make the company additional revenue.
That is good for capitalism and for the company’s ability to make money, but the issue, particularly in relation to clause 11, is how that affects children. Children may not have the necessary filters; they may not have the ability that we have to put our phones down—not that we always manage to do so. That ability and decision-making process may not be as refined in children as in adults. Children can be sucked into the platforms by watching videos of cats or of something far more harmful.
The hon. Member makes an excellent point about TikTok, but it also applies to YouTube. The platforms’ addictive nature has to do with the content. A platform does not just show a person a video of a cat, because that will not keep them hooked for half an hour. It has to show them a cat doing something extraordinary, and then a cat doing something even more extraordinary. That is why vulnerable people, especially children, get sucked into a dark hole. They click to see not just the same video but something more exciting, and then something even more exciting. That is the addictive nature of this.
That is absolutely the case. We are talking about cats because I chose them to illustrate the situation, but people may look at content about healthy eating, and that moves on to content that encourages them to be sick. The way the algorithms step it up is insidious; they get more and more extreme, so that the linger time is increased and people do not get bored. It is important that platforms look specifically at their habit-forming features.
A specific case on the platform TikTok relates to a misogynist who goes by the name of Andrew Tate, who has been banned from a number of social media platforms. However, because TikTok works by making clips shorter, which makes it more difficult for the company to identify some of this behaviour among users, young boys looking for videos of things that might interest them were very quickly shown misogynist content from Andrew Tate. Because they watched one video of him, they were then shown more and more. It is easy to see how the habit-forming behaviours built into platforms’ algorithms, which the hon. Lady identifies, can also be a means of quickly radicalising children into extreme ideologies.
Order. I think we have the message. I have to say to all hon. Members that interventions are interventions, not speeches. If Members wish to make speeches, there is plenty of time.
Thank you, Sir Roger. I absolutely agree with the hon. Member for Warrington North. The platform works by stitching things together, so a video could have a bit of somebody else’s video in it, and that content ends up being shared and disseminated more widely.
This is not an attack on every algorithm. I am delighted to see lots of videos of cats—it is wonderful, and it suits me down to the ground—but the amendment asks platforms to analyse how those processes contribute to the development of habit-forming behaviour and to mitigate the harm caused to children by habit-forming features in the service. It is not saying, “You can’t use algorithms” or “You can’t use anything that may encourage people to linger on your site.” The specific issue is addiction—the fact that people will get sucked in and stay on platforms for hours longer than is healthy.
There is a demographic divide here. There is a significant issue when we compare children whose parents are engaged in these issues and spend time—and have the time to spend—assisting them to use the internet. There is a divide between the experiences of those children online and the experiences of children who are generally not nearly as well off, whose parents may be working two or three jobs to try to keep their homes warm and keep food on the table, so the level of supervision those children have may be far lower. We have a parental education gap, where parents are not able to instruct or teach their children a sensible way to use these things. A lot of parents have not used things such as TikTok and do not know how it works, so they are unable to teach their children.
Does the hon. Lady agree that this feeds into the problem we have with the lack of a digital media literacy strategy in the Bill, which we have, sadly, had to accept? However, that makes it even more important that we protect children wherever we have the opportunity to do so, and this amendment is a good example of where we can do that.
The hon. Lady makes an excellent point. This is not about mandating that platforms stop doing these things; it is about ensuring that they take this issue into account and that they agree—or that we as legislators agree—with the Royal College of Psychiatrists that we have a responsibility to tackle it. We have a responsibility to ask Ofcom to tackle it with platforms.
This comes back to the fact that we do not have a user advocacy panel, and groups representing children are not able to bring emerging issues forward adequately and effectively. Because of the many other inadequacies in the Bill, that is even more important than it was. I assume the Minister will not accept my amendment—that generally does not happen in Bill Committees—but if he does not, it would be helpful if he could give Ofcom some sort of direction of travel so that it knows it should take this issue into consideration when it deals with platforms. Ofcom should be talking to platforms about habit-forming features and considering the addictive nature of these things; it should be doing what it can to protect children. This threat has emerged only in recent years, and things will not get any better unless we take action.
It is a privilege to see you back in the Chair for round 2 of the Bill Committee, Sir Roger. It feels slightly like déjà vu to return to line-by-line scrutiny of the Bill, which, as you said, Sir Roger, is quite unusual and unprecedented. Seeing this Bill through Committee is the Christmas gift that keeps on giving. Sequels are rarely better than the original, but we will give it a go. I have made no secret of my plans, and my thoughts on the Minister’s plans, to bring forward significant changes to the Bill, which has already been long delayed. I am grateful that, as we progress through Committee, I will have the opportunity to put on record once again some of Labour’s long-held concerns with the direction of the Bill.
I will touch briefly on clause 11 specifically before addressing the amendments to the clause. Clause 11 covers safety duties to protect children, and it is a key part of the Bill—indeed, it is the key reason many of us have taken a keen interest in online safety more widely. Many of us, on both sides of the House, have been united in our frustrations with the business models of platform providers and search engines, which have paid little regard to the safety of children over the years in which the internet has expanded rapidly.
That is why Labour has worked with the Government. We want to see the legislation get over the line, and we recognise—as I have said in Committee previously—that the world is watching, so we need to get this right. The previous Minister characterised the social media platforms and providers as entirely driven by finance, but safety must be the No. 1 priority. Labour believes that that must apply to both adults and children, but that is an issue for debate on a subsequent clause, so I will keep my comments on this clause brief.
The clause and Government amendments 1, 2 and 3 address the thorny issue of age assurance measures. Labour has been clear that we have concerns that the Government are relying heavily on the ability of social media companies to distinguish between adults and children, but age verification processes remain fairly complex, and that clearly needs addressing. Indeed, Ofcom’s own research found that a third of children have false social media accounts claiming that they are aged over 18. This is an area we certainly need to get right.
I am grateful to the many stakeholders, charities and groups working in this area. There are far too many to mention, but a special shout-out should go to Iain Corby from the Age Verification Providers Association, along with colleagues at the Centre to End All Sexual Exploitation and Barnardo’s, and the esteemed John Carr. They have all provided extremely useful briefings for my team and me as we have attempted to unpick this extremely complicated part of the Bill.
We accept that there are effective age checks out there, and many have substantial anti-evasion mechanisms, but it is the frustrating reality that this is the road the Government have decided to go down. As we have repeatedly placed on the record, the Government should have retained the “legal but harmful” provisions that were promised in the earlier iteration of the Bill. Despite that, we are where we are.
I will therefore put on the record some brief comments on the range of amendments on this clause. First, with your permission, Sir Roger, I will speak to amendments 98, 99—
Order. No, you cannot. I am sorry. I am perfectly willing to allow—the hon. Lady has already done this—a stand part debate at the start of a group of selections, rather than at the end, but she cannot have it both ways. I equally understand the desire of an Opposition Front Bencher to make some opening remarks, which is perfectly in order. With respect, however, you may not then go through all the other amendments. We are dealing now with amendment 98. If the hon. Lady can confine her remarks to that at this stage, that would be helpful.
Of course, Sir Roger. Without addressing the other amendments, I would like us to move away from the overly content-focused approach that the Government seem intent on taking in the Bill more widely. I will leave my comments there on the SNP amendment, but we support our SNP colleagues on it.
It is a pleasure to serve under your chairmanship, Sir Roger.
Being online can be a hugely positive experience for children and young people, but we recognise the challenge of habit-forming behaviour or designed addiction to some digital services. The Bill as drafted, however, would already deliver the intent of the amendment from the hon. Member for Aberdeen North. If service providers identify in their risk assessment that habit-forming or addictive-behaviour risks cause significant harm to an appreciable number of children on a service, the Bill will require them to put in place measures to mitigate and manage that risk under clause 11(2)(a).
To meet the child safety risk assessment duties under clause 10, services must assess the risk of harm to children from the different ways in which the service is used; the impact of such use; the level of risk of harm to children; how the design and operation of the service may increase the risks identified; and the functionalities that facilitate the presence or dissemination of content that is harmful to children. The definition of “functionality” at clause 200 already includes an expression of a view on content, such as applying a “like” or “dislike” button, as at subsection (2)(f)(i).
I thank the Minister for giving way so early on. He mentioned an “appreciable number”. Will he clarify what that is? Is it one, 10, 100 or 1,000?
I do not think that a single number can be put on that, because it depends on the platform and the type of viewing. It is not easy to put a single number on that. An “appreciable number” is basically as identified by Ofcom, which will be the arbiter of all this. It comes back to what the hon. Member for Aberdeen North said about the direction that we, as she rightly said, want to give Ofcom. Ofcom has a range of powers already to help it assess whether companies are fulfilling their duties, including the power to require information about the operation of their algorithms. I would set the direction that the hon. Lady is looking for, to ensure that Ofcom uses those powers to the fullest and can look at the algorithms. We should bear in mind that social media platforms face criminal liability if they do not supply the information required by Ofcom to look under the bonnet.
If platforms do not recognise that they have an issue with habit-forming features, even though we know they have, will Ofcom say to them, “Your risk assessment is insufficient. We know that the habit-forming features are really causing a problem for children”?
We do not want to wait for the Bill’s implementation to start those conversations with the platforms. We expect companies to be transparent about their design practices that encourage extended engagement and to engage with researchers to understand the impact of those practices on their users.
The child safety duties in clause 11 apply across all areas of a service, including the way it is operated and used by children and the content present on the service. Subsection (4)(b) specifically requires services to consider the
“design of functionalities, algorithms and other features”
when complying with the child safety duties. Given the direction I have suggested that Ofcom has, and the range of powers that it will already have under the Bill, I am unable to accept the hon. Member’s amendment, and I hope she will therefore withdraw it.
I would have preferred it had the Minister been slightly more explicit that habit-forming features are harmful. That would have been slightly more helpful.
I thank the Minister. Absolutely—they are not always harmful. With that clarification, I am happy to beg to ask leave to withdraw the amendment.
Amendment, by leave, withdrawn.
I beg to move amendment 1, in clause 11, page 10, line 22, leave out
“, or another means of age assurance”.
This amendment omits words which are no longer necessary in subsection (3)(a) of clause 11 because they are dealt with by the new subsection inserted by Amendment 3.
The Bill’s key objective, above everything else, is the safety of young people online. That is why the strongest protections in the Bill are for children. Providers of services that are likely to be accessed by children will need to provide safety measures to protect child users from harmful content, such as pornography, and from behaviour such as bullying. We expect companies to use age verification technologies to prevent children from accessing services that pose the highest risk of harm to them, and age assurance technologies and other measures to provide children with an age-appropriate experience.
The previous version of the Bill already focused on protecting children, but the Government are clear that the Bill must do more to achieve that and to ensure that the requirements on providers are as clear as possible. That is why we are strengthening the Bill and clarifying the responsibilities of providers to provide age-appropriate protections for children online. We are making it explicit that providers may need to use age assurance to identify the age of their users in order to meet the child safety duties for user-to-user services.
The Bill already set out that age assurance may be required to protect children from harmful content and activity, as part of meeting the duty in clause 11(3), but the Bill will now clarify that it may also be needed to meet the wider duty in subsection (2) to
“mitigate and manage the risks of harm to children”
and to manage
“the impact of harm to children”
on such services. That is important so that only children who are old enough are able to use functionalities on a service that poses a risk of harm to younger children. The changes will also ensure that children are signposted to support that is appropriate to their age if they have experienced harm. For those reasons, I recommend that the Committee accepts the amendments.
I have a few questions regarding amendments 1 to 3, which as I mentioned relate to the thorny issue of age verification and age assurance, and I hope the Minister can clarify some of them.
We are unclear about why, in subsection (3)(a), the Government have retained the phrase
“for example, by using age verification, or another means of age assurance”.
Can that difference in wording be taken as confirmation that the Government want harder forms of age verification for primary priority content? The Minister will be aware that many in the sector are unclear about what that harder form of age verification may look like, so some clarity would be useful for all of us in the room and for those watching.
In addition, we would like to clarify the Minister’s understanding of the distinction between age verification and age assurance. They are very different concepts in reality, so we would appreciate it if he could be clear, particularly when we consider the different types of harm that the Bill will address and protect people from, how that will be achieved and what technology will be required for different types of platform and content. I look forward to clarity from the Minister on that point.
That is a good point. In essence, age verification is the hard access to a service. Age assurance ensures that the person who uses the service is the same person whose age was verified. Someone could use their parent’s debit card or something like that, so it is not necessarily the same person using the service right the way through. If we are to protect children, in particular, we have to ensure that we know there is a child at the other end whom we can protect from the harm that they may see.
On the different technologies, we are clear that our approach to age assurance or verification is not technology-specific. Why? Because otherwise the Bill would be out of date within around six months. By the time the legislation was fully implemented it would clearly be out of date. That is why it is incumbent on the companies to be clear about the technology and processes they use. That information will be kept up to date, and Ofcom can then look at it.
The Minister leapt to his feet before I had the opportunity to call any other Members. I call Kirsty Blackman.
Thank you, Sir Roger. It was helpful to hear the Minister’s clarification of age assurance and age verification, and it was useful for him to put on the record the difference between the two.
I have a couple of points. In respect of Ofcom keeping up to date with the types of age verification and the processes, new ones will come through and excellent new methods will appear in coming years. I welcome the Minister’s suggestion that Ofcom will keep up to date with that, because it is incredibly important that we do not rely on, say, the one provider that there is currently, when really good methods could come out. We need the legislation to ensure that we get the best possible service and the best possible verification to keep children away from content that is inappropriate for them.
This is one of the most important parts of the Bill for ensuring that we can continue to have adult sections of the internet—places where there is content that would be disturbing for children, as well as for some adults—and that an age-verification system is in place to ensure that that content can continue to be there. Websites that require a subscription, such as OnlyFans, need to continue to have in place the age-verification systems that they currently have. By writing into legislation the requirement for them to continue to have such systems in place, we can ensure that children cannot access such services but adults can continue to do so. This is not about what is banned online or about trying to make sure that this content does not exist anywhere; it is specifically about gatekeeping to ensure that no child, as far as we can possibly manage, can access content that is inappropriate for kids.
There was a briefing recently on children’s access to pornography, and we heard horrendous stories. It is horrendous that a significant number of children have seen inappropriate content online, and the damage that that has caused to so many young people cannot be overstated. Blocking access to adult parts of the internet is so important for the next generation, not just so that children are not disturbed by the content they see, but so that they learn that it is not okay and normal and understand that the depictions of relationships in pornography are not the way reality works, not the way reality should work and not how women should be treated. Having a situation in which Ofcom or anybody else is better able to take action to ensure that adult content is specifically accessed only by adults is really important for the protection of children and for protecting the next generation and their attitudes, particularly towards sex and relationships.
I wish to add some brief words in support of the Government’s proposals and to build on the comments from Members of all parties.
We know that access to extreme and abusive pornography is a direct factor in violence against women and girls. We see that play out in the court system every day. People claim to have watched and become addicted to this type of pornography; they end up on trial because they have sought to play it out in their relationships, which has resulted in the deaths of women. The platforms already have technology that allows them to figure out the age of people on their platforms. The Bill seeks to ensure that they use that for a good end, so I thoroughly support it. I thank the Minister.
There are two very important and distinct issues here. One is age verification. The platforms ask adults who have identification to verify their age; if they cannot verify their age, they cannot access the service. Platforms have a choice within that. They can design their service so that it does not have adult content, in which case they may not need to build in verification systems—the platform polices itself. However, a platform such as Twitter, which allows adult content on an app that is open to children, has to build in those systems. As the hon. Member for Aberdeen North mentioned, people will also have to verify their identity to access a service such as OnlyFans, which is an adult-only service.
On that specific point, I searched on Twitter for the name—first name and surname—of a politician to see what people had been saying, because I knew that he was in the news. The pictures that I saw! That was purely by searching for the name of the politician; it is not as though people are necessarily seeking such stuff out.
On these platforms, the age verification requirements are clear: they must age-gate the adult content or get rid of it. They must do one or the other. Rightly, the Bill does not specify technologies. Technologies are available. The point is that a company must demonstrate that it is using an existing and available technology or that it has some other policy in place to remedy the issue. It has a choice, but it cannot do nothing. It cannot say that it does not have a policy on it.
Age assurance is always more difficult for children, because they do not have the same sort of ID that adults have. However, technologies exist: for instance, Yoti uses facial scanning. Companies do not have to do that either; they have to demonstrate that they do something beyond self-certification at the point of signing up. That is right. Companies may also demonstrate what they do to take robust action to close the accounts of children they have identified on their platforms.
If a company’s terms of service state that people must be 13 or over to use the platform, the company is inherently stating that the platform is not safe for someone under 13. What does it do to identify people who sign up? What does it do to identify people once they are on the platform, and what action does it then take? The Bill gives Ofcom the powers to understand those things and to force a change of behaviour and action. That is why—to the point made by the hon. Member for Pontypridd—age assurance is a slightly broader term, but companies can still extract a lot of information to determine the likely age of a child and take the appropriate action.
I think we are all in agreement, and I hope that the Committee will accept the amendments.
Amendment 1 agreed to.
Amendments made: 2, in clause 11, page 10, line 25, leave out
“(for example, by using age assurance)”.
This amendment omits words which are no longer necessary in subsection (3)(b) of clause 11 because they are dealt with by the new subsection inserted by Amendment 3.
Amendment 3, in clause 11, page 10, line 26, at end insert—
“(3A) Age assurance to identify who is a child user or which age group a child user is in is an example of a measure which may be taken or used (among others) for the purpose of compliance with a duty set out in subsection (2) or (3).”—(Paul Scully.)
This amendment makes it clear that age assurance measures may be used to comply with duties in clause 11(2) as well as (3) (safety duties protecting children).
I beg to move amendment 99, in clause 11, page 10, line 34, leave out paragraph (d) and insert—
“(d) policies on user access to the service, parts of the service, or to particular content present on the service, including blocking users from accessing the service, parts of the service, or particular content,”.
This amendment is intended to make clear that if it is proportionate to do so, services should have policies that include blocking access to parts of a service, rather than just the entire service or particular content on the service.
With this it will be convenient to discuss the following:
Amendment 96, in clause 11, page 10, line 41, at end insert—
“(i) reducing or removing a user’s access to private messaging features”.
This amendment is intended to explicitly include removing or reducing access to private messaging features in the list of areas where proportionate measures can be taken to protect children.
Amendment 97, in clause 11, page 10, line 41, at end insert—
“(i) reducing or removing a user’s access to livestreaming features”.
This amendment is intended to explicitly include removing or reducing access to livestreaming features in the list of areas where proportionate measures can be taken to protect children.
I am glad that the three amendments are grouped, because they link together nicely. I am concerned that clause 11(4)(d) does not do exactly what the Government intend it to. It refers to
“policies on user access to the service or to particular content present on the service, including blocking users from accessing the service or particular content”.
There is a difference between content and parts of the service. It would be possible to block users from accessing some of the things that we have been talking about—for example, eating disorder content—on the basis of clause 11(4)(d). A platform would be able to take that action, provided that it had the architecture in place. However, on my reading, I do not think it would be possible to block a user from accessing, for example, private messaging or livestreaming features. Clause 11(4)(d) would allow a platform to block certain content, or access to the service, but it would not explicitly allow it to block users from using part of the service.
Let us think about platforms such as Discord and Roblox. I have an awful lot of issues with Roblox, but it can be a pretty fun place for people to spend time. However, if a child, or an adult, is inappropriately using its private messaging features, or somebody on Instagram is using the livestreaming features, there are massive potential risks of harm. Massive harm is happening on such platforms. That is not to say that Instagram is necessarily inherently harmful, but if it could block a child’s access to livestreaming features, that could have a massive impact in protecting them.
Does the hon. Lady accept that the amendments would give people control over the bit of the service that they do not currently have control of? A user can choose what to search for and which members to engage with, and can block people. What they cannot do is stop the recommendation feeds recommending things to them. The shields intervene there, which gives user protection, enabling them to say, “I don’t want this sort of content recommended to me. On other things, I can either not search for them, or I can block and report offensive users.” Does she accept that that is what the amendment achieves?
I think that that is what the clause achieves, rather than the amendments that I have tabled. I recognise that the clause achieves that, and I have no concerns about it. It is good that the clause does that; my concern is that it does not take the second step of blocking access to certain features on the platform. For example, somebody could be having a great time on Instagram looking at various people’s pictures or whatever, but they may not want to be bombarded with private messages. They have no ability to turn off the private messaging section.
They can disengage with the user who is sending the messages. On a Meta platform, often those messages will be from someone they are following or engaging with. They can block them, and the platforms have the ability, in most in-app messaging services, to see whether somebody is sending priority illegal content material to other users. They can scan for that and mitigate that as well.
That is exactly why users should be able to block private messaging in general. Someone on Twitter can say, “I’m not going to receive a direct message from anybody I don’t follow.” Twitter users have the opportunity to do that, but there is not necessarily that opportunity on all platforms. We are asking for those things to be included, so that the provider can say, “You’re using private messaging inappropriately. Therefore, we are blocking all your access to private messaging,” or, “You are being harmed as a result of accessing private messaging. Therefore, we are blocking your access to any private messaging. You can still see pictures on Instagram, but you can no longer receive any private messages, because we are blocking your access to that part of the site.” That is very different from blocking a user’s access to certain kinds of content, for example. I agree that that should happen, but it is about the functionalities and stopping access to some of them.
We are not asking Ofcom to mandate that platforms take this measure; they could still take the slightly more nuclear option of banning somebody entirely from their service. However, if this option is included, we could say, “Your service is doing pretty well, but we know there is an issue with private messaging. Could you please take action to ensure that those people who are using private messaging to harm children no longer have access to private messaging and are no longer able to use the part of the service that enables them to do these things?” Somebody might be doing a great job of making games in Roblox, but they may be saying inappropriate things. It may be proportionate to block that person entirely, but it may be more proportionate to block their access to voice chat, so that they can no longer say those things, or direct message or contact anybody. It is about proportionality and recognising that the service is not necessarily inherently harmful but that specific parts of it could be.
The hon. Member is making fantastic, salient points. The damage with private messaging is around phishing, as well as seeing a really harmful message and not being able to unsee it. Would she agree that it is about protecting the victim, not putting the onus on the victim to disengage from such conversations?
I completely agree. The hon. Member put that much better than I could. I was trying to formulate that point in my head, but had not quite got there, so I appreciate her intervention. She is right: we should not put the onus on a victim to deal with a situation. Once they have seen a message from someone, they can absolutely block that person, but that person could create another account and send them messages again. People should be able to choose, and to say, “No, I don’t want anyone to be able to send me private messages,” or “I don’t want any private messages from anyone I don’t know.” We could put in those safeguards.
I am talking about adding another layer to the clause, so that companies would not necessarily have to demonstrate that it was proportionate to ban a person from using their service, as that may be too high a bar—a concern I will come to later. They could, however, demonstrate that it was proportionate to ban a person from using private messaging services, or from accessing livestreaming features. There has been a massive increase in self-generated child sexual abuse images, and a huge amount of that has come from livestreaming. There are massive risks with livestreaming features on services.
Livestreaming is not always bad. Someone could livestream themselves showing how to make pancakes. There is no issue with that—that is grand—but livestreaming is being used by bad actors to manipulate children into sharing videos of themselves, and once they are on the internet, they are there forever. It cannot be undone. If we were able to ban vulnerable users—my preferred option would be all children—from accessing livestreaming services, they would be much safer.
The hon. Lady is talking about extremely serious matters. My expectation is that Ofcom would look at all of a platform’s features when risk-assessing the platform and enforcing safety, and in-app messaging services would not be exempt. Platforms have to demonstrate what they would do to mitigate harmful and abusive behaviour, and that they would take action against the accounts responsible.
Absolutely, I agree, but the problem is with the way the Bill is written. It does not suggest that a platform could stop somebody accessing a certain part of a service. The Bill refers to content, and to the service as a whole, but it does not have that middle point that I am talking about.
A platform is required to demonstrate to Ofcom what it would do to mitigate activity that would breach the safety duties. It could do that through a feature that it builds in, or it may take a more draconian stance and say, “Rather than turning off certain features, we will just suspend the account altogether.” That could be discussed in the risk assessments, and agreed in the codes of practice.
What I am saying is that the clause does not actually allow that middle step. It does not explicitly say that somebody could be stopped from accessing private messaging. The only options are being banned from certain content, or being banned from the entire platform.
I absolutely recognise the hard work that Ofcom has done, and I recognise that it will work very hard to ensure that risks are mitigated, but the amendment ensures what the Minister intended with this legislation. I am not convinced that he intended there to be just the two options that I outlined. I think he intended something more in line with what I am suggesting in the amendment. It would be very helpful if the Minister explicitly said something in this Committee that makes it clear that Ofcom has the power to say to platforms, “Your risk assessment says that there is a real risk from private messaging”—or from livestreaming—“so why don’t you turn that off for all users under 18?” Ofcom should be able to do that.
Could the Minister be clear that that is the direction of travel he is hoping and intending that Ofcom will take? If he could be clear on that, and will recognise that the clause could have been slightly better written to ensure Ofcom had that power, I would be quite happy to not push the amendment to a vote. Will the Minister be clear about the direction he hopes will be taken?
I rise to support my SNP colleagues’ amendments 99, and 96 and 97, just as I supported amendment 98. The amendments are sensible and will ensure that service providers are empowered to take action to mitigate harms done through their services. In particular, we support amendment 99, which makes it clear that a service should be required to have the tools available to allow it to block access to parts of its service, if that is proportionate.
Amendments 96 and 97 would ensure that private messaging and livestreaming features were brought into scope, and that platforms and services could block access to them when that was proportionate, with the aim of protecting children, which is the ultimate aim of the Bill. Those are incredibly important points to raise.
In previous iterations of the Bill Committee, Labour too tabled a number of amendments to do with platforms’ responsibilities for livestreaming. I expressed concerns about how easy it is for platforms to host live content, and about how ready they were to screen that content for harm, illegal or not. I am therefore pleased to support our SNP colleagues. The amendments are sensible, will empower platforms and will keep children safe.
It is a pleasure to serve with you in the Chair, Sir Roger. I rise in support of amendments 99, and 96 and 97, as my hon. Friend the Member for Pontypridd did. I have an issue with the vagueness and ambiguity in the Bill. Ministerial direction is incredibly helpful, not only for Ofcom, but for the companies and providers that will use the Bill to make technologies available to do what we are asking them to do.
As the hon. Member for Aberdeen North said, if the Bill provided for that middle ground, that would be helpful for a number of purposes. Amendment 97 refers to livestreaming; in a number of cases around the world, people have livestreamed acts of terror, such as the shooting at the Christchurch mosque. Those offences were watched in real time, as they were perpetrated, by potentially hundreds of thousands of people. We have people on watch lists—people we are aware of. If we allowed them to use a social media platform but not the livestreaming parts, that could go some way to mitigating the risk of their livestreaming something like that. Their being on the site is perhaps less of a concern, as their general use of it could be monitored in real time. Under a risk analysis, we might be happy for people to be on a platform, but consider that the risk was too great to allow them to livestream. Having such a provision would be helpful.
My hon. Friend the Member for Luton North mentioned the onus always being on the victim. When we discuss online abuse, I really hate it when people say, “Well, just turn off your messages”, “Block them” or “Change your notification settings”, as though that were a panacea. Turning off the capacity to use direct messages is a much more effective way of addressing abuse by direct message than banning the person who sent it altogether—they might just make a new account—or than relying on the recipient of the message to take action when the platform has the capacity to take away the option of direct messaging. The adage is that sunlight is the best disinfectant. When people post in public and the post can be seen by anyone, they can be held accountable by anyone. That is less of a concern to me than what they send privately, which can be seen only by the recipient.
This group of amendments is reasonable and proportionate. They would not only give clear ministerial direction to Ofcom and the technology providers, and allow Ofcom to take the measures that we are discussing, but would pivot us away from placing the onus on the recipients of abusive behaviour, or people who might be exposed to it. Instead, the onus would be on platforms to make those risk assessments and take the middle ground, where that is a reasonable and proportionate step.
If someone on a PlayStation wants to play online games, they must sign up to PlayStation Plus—that is how the model works. Once they pay that subscription, they can access online games and play Fortnite or Rocket League or whatever they want online. They then also have access to a suite of communication features; they can private message people. It would be disproportionate to ban somebody from playing any PlayStation game online in order to stop them from being able to private message inappropriate things. That would be a disproportionate step. I do not want a situation in which PlayStation cannot act against somebody because banning them would be disproportionate, yet it is unable to switch off the direct messaging features because the clause does not allow it that flexibility. A person could continue to be in danger on the PlayStation platform as a result of private communications that they could receive. That is one example of how the provision would be key and important.
Again, the Government recognise the intent behind amendment 99, which, as the hon. Member for Aberdeen North said, would require providers to be able to block children’s access to parts of a service, rather than the entire service. I very much get that. We recognise the nature and scale of the harm that can be caused to children through livestreaming and private messaging, as has been outlined, but the Bill already delivers what is intended by these amendments. Clause 11(4) sets out examples of areas in which providers will need to take measures, if proportionate, to meet the child safety duties. It is not an exhaustive list of every measure that a provider might be required to take. It would not be feasible or practical to list every type of measure that a provider could take to protect children from harm, because such a list could become out of date quickly as new technologies emerge, as the hon. Lady outlined with her PlayStation example.
I have a concern. The Minister’s phrasing was “to block children’s access”. Surely some of the issues would be around blocking adults’ access, because they are the ones causing risk to the children. From my reading of the clause, it does not suggest that the action could be taken only against child users; it could be taken against any user in order to protect children.
I will come to that in a second. The hon. Member for Luton North talked about putting the onus on the victim. Any element of choice is there for adults; the children will be protected anyway, as I will outline in a second. We all agree that the primary purpose of the Bill is to be a children’s protection measure.
Ofcom will set out in codes of practice the specific steps that providers can take to protect children who are using their service, and the Government expect those to include steps relating to children’s access to high-risk features, such as livestreaming or private messaging. Clause 11(4)(d) sets out that providers may be required to take measures in the following areas:
“policies on user access to the service or to particular content present on the service, including blocking users from accessing the service or particular content”.
The other areas listed are intentionally broad categories that allow providers to take specific measures. For example, a measure in the area of blocking user access to particular content could include specific measures that restrict children’s access to parts of a service, if that is a proportionate way to stop users accessing that type of content. It can also apply to any of the features of a service that enable children to access particular content, and could therefore include children’s access to livestreaming and private messaging features. In addition, the child safety duties make it clear that providers need to use proportionate systems and processes that prevent children from encountering primary priority content that is harmful to them, and protect children in age groups at risk of harm from other content that is harmful to them.
While Ofcom will set out in codes of practice the steps that providers can take to meet these duties, we expect those steps, as we have heard, to include the use of age verification to prevent children accessing content that poses the greatest risk of harm to them. To meet that duty, providers may use measures that restrict children from accessing parts of the service. The Bill therefore allows Ofcom to require providers to take that step where it is proportionate. I hope that that satisfies the hon. Member for Aberdeen North, and gives her the direction that she asked for—that is, a direction to be more specific that Ofcom does indeed have the powers that she seeks.
The Bill states that we can expect little impact on child protection before 2027-28 because of the enforcement road map and when Ofcom is planning to set that out. Does the Minister not think that in the meantime, that sort of ministerial direction would be helpful? It could make Ofcom’s job easier, and would mean that children could be protected online before 2027-28.
The ministerial direction that the various platforms are receiving from the Dispatch Box, from our conversations with them and from the Bill’s progress as it goes through the House of Lords will be helpful to them. We do not expect providers to wait until the very last minute to implement the measures. They are starting to do so now, but we want them to go further, quicker.
Government amendment 4 will require providers who already have a minimum age requirement for access to their service, or parts of it, to give details of the measures that they use to restrict access in their terms of service and apply them consistently. Providers will also need to provide age-appropriate protections for children using their service. That includes protecting children from harmful content and activity on their service, as well as reviewing children’s use of higher-risk features, as I have said.
To meet the child safety risk assessment duties in clause 10, providers must assess: the risk of harm to children from functionalities that facilitate the presence or dissemination of harmful content; the level of risk from different kinds of harmful content, giving separate consideration to children in different age groups; the different ways in which the service is used, and the impact of such use on the level of risk of harm; and how the design and operation of the service may increase the risks identified.
The child safety duties in clause 11 apply across all areas of the service, including the way it is operated and used by children, as well as the content present on the service. For the reasons I have set out, I am not able to accept the amendments, but I hope that the hon. Member for Aberdeen North will take on board my assurances.
That was quite helpful. I am slightly concerned about the Minister’s focus on reducing children’s access to the service or to parts of it. I appreciate that is part of what the clause is intended to do, but I would also expect platforms to be able to reduce the ability of adults to access parts of the service or content in order to protect children. Rather than just blocking children, blocking adults from accessing some features—whether that is certain adults or adults as a group—would indeed protect children. My reading of clause 11(4) was that users could be prevented from accessing some of this stuff, rather than just child users. Although the Minister has given me more questions, I do not intend to push the amendment to a vote.
May I ask a question of you, Sir Roger? I have not spoken about clause stand part. Are we still planning to have a clause stand part debate?
Thank you, Sir Roger; I appreciate the clarification. When I talk about Government amendment 4, I will also talk about clause stand part. I withdraw the amendment.
I beg to move amendment 4, in clause 11, page 11, line 9, at end insert—
“(6A) If a provider takes or uses a measure designed to prevent access to the whole of the service or a part of the service by children under a certain age, a duty to—
(a) include provisions in the terms of service specifying details about the operation of the measure, and
(b) apply those provisions consistently.”
This amendment requires providers to give details in their terms of service about any measures they use which prevent access to a service (or part of it) by children under a certain age, and to apply those terms consistently.
With this it will be convenient to discuss the following:
Government amendment 5.
Amendment 100, in clause 11, page 11, line 15, after “accessible” insert “for child users.”
This amendment makes clear that the provisions of the terms of service have to be clear and accessible for child users.
Although the previous version of the Bill already focused on protecting children, as I have said, the Government are clear that it must do more to achieve that and to ensure that requirements for providers are as clear as possible. That is why we are making changes to strengthen the Bill. Amendments 4 and 5 will require providers who already have a minimum age requirement for access to their service, or parts of it, to give details in their terms of service of the measures that they use to ensure that children below the minimum age are prevented from accessing the service. Those terms must be applied consistently and be clear and accessible to users. The change will mean that providers can be held to account for what they say in their terms of service, and will no longer be able to do nothing to prevent underage access.
The Government recognise the intent behind amendment 100, which is to ensure that terms of service are clear and accessible for child users, but the Bill as drafted sets an appropriate standard for terms of service. The duty in clause 11(8) sets an objective standard for terms of service to be clear and accessible, rather than requiring them to be clear for particular users. Ofcom will produce codes of practice setting out how providers can meet that duty, which may include provisions about how to tailor the terms of service to the user base where appropriate.
The amendment would have the unintended consequence of limiting to children the current accessibility requirement for terms of service. As a result, any complicated and detailed information that would not be accessible for children—for example, how the provider uses proactive technology—would probably need to be left out of the terms of service, which would clearly conflict with the duty in clause 11(7) and other duties relating to the terms of service. It is more appropriate to have an objective standard of “clear and accessible” so that the terms of service can be tailored to provide the necessary level of information and be useful to other users such as parents and guardians, who are most likely to be able to engage with the more detailed information included in the terms of service and are involved in monitoring children’s online activities.
Ofcom will set out steps that providers can take to meet the duty and will have a tough suite of enforcement powers to take action against companies that do not meet their child safety duties, including if their terms of service are not clear and accessible. For the reasons I have set out, I am not able to accept the amendment tabled by the hon. Member for Aberdeen North and I hope she will withdraw it.
As I said, I will also talk about clause 11. I can understand why the Government are moving their amendments. They make sense, particularly with things like complying with the provisions. I have had concerns all the way along—concerns that are particularly acute now that we are back in Committee with a slightly different Bill from the one that we were first presented with—about the reliance on terms of service. There is a major issue with choosing to go down that route, given that providers of services can choose what to put in their terms of service. They can choose to have very good terms of service that mean they will take action on anything that is potentially an issue, and that are strong enough to allow them to apply proportionate measures and to ban users who break the terms of service. Providers will have the ability to write terms of service like that, but not all providers will choose to do that. Not all providers will choose to write the gold-standard terms of service that the Minister expects everybody will write.
We have to remember that these companies’ and organisations’ No. 1 aim is not to protect children. If their No. 1 aim was to protect children, we would not be here. We would not need an Online Safety Bill because they would be putting protection front and centre of every decision they make. Their No. 1 aim is to increase the number of users so that they can get more money. That is the aim. They are companies that have a duty to their shareholders. They are trying to make money. That is the intention. They will not therefore necessarily draw up the best possible terms of service.
I heard an argument on Report that market forces will mean that companies that do not have strong enough terms of service, companies that have inherent risks in their platforms, will just not be used by people. If that were true, we would not be in the current situation. Instead, the platforms that are damaging people and causing harm—4chan, KiwiFarms or any of those places that cause horrendous difficulties—would not be used by people because market forces would have intervened. That approach does not work; it does not happen that the market will regulate itself and people will stay away from places that cause them or others harm. That is not how it works. I am concerned about the reliance on terms of service and requiring companies to stick to their own terms of service. They might stick to their own terms of service, but those terms of service might be utterly rubbish and might not protect people. Companies might not have in place what we need to ensure that children and adults are protected online.
Does the hon. Lady agree that people out there in the real world have absolutely no idea what a platform’s terms of service are, so we are being expected to make a judgment on something about which we have absolutely no knowledge?
Absolutely. The amendment I tabled regarding the accessibility of terms of service was designed to ensure that if the Government rely on terms of service, children can access those terms of service and are able to see what risks they are putting themselves at. We know that in reality children will not read these things. Adults do not read these things. I do not know what Twitter’s terms of service say, but I do know that Twitter managed to change its terms of service overnight, very easily and quickly. Companies could just say, “I’m a bit fed up with Ofcom breathing down my neck on this. I’m just going to change my terms of service, so that Ofcom will not take action on some of the egregious harm that has been done. If we just change our terms of service, we don’t need to bother. If we say that we are not going to ban transphobia on our platform—if we take that out of the terms of service—we do not need to worry about transphobia on our platform. We can just let it happen, because it is not in our terms of service.”
Does the hon. Lady agree that the Government are not relying solely on terms of service, but are rightly saying, “If you say in your terms of service that this is what you will do, Ofcom will make sure that you do it”? Ofcom will take on that responsibility for people, making sure that these complex terms of service are understood and enforced, but the companies still have to meet all the priority illegal harms objectives that are set out in the legislation. Offences that exist in law are still enforced on platforms, and risk-assessed by Ofcom as well, so if a company does not have a policy on race hate, we have a law on race hate, and that will apply.
It is absolutely the case that those companies still have to do a risk assessment, and a child risk assessment if they meet the relevant criteria. The largest platforms, for example, will still have to do a significant amount of work on risk assessments. However, every time a Minister stands up and talks about what they are requiring platforms and companies to do, they say, "Companies must stick to their terms of service. They must ensure that they enforce things in line with their terms of service." If a company is finding it too difficult, it will just take the tough things out of its terms of service. It will take out transphobia; it will take out abuse. Twitter does not ban anyone for abuse anyway, it seems, but it will be easier for Twitter to say, "Ofcom is going to try to hold us to account for the fact that we are not getting rid of people for abusive but not illegal messages, even though we say in our terms of service, 'You must act with respect', or 'You must not abuse other users'. We will just take that out of our terms of service so that we are not held to account for the fact that we are not following our terms of service." Then, because the abuse is not illegal—because it does not meet that bar—those places will end up being even less safe than they are right now.
For example, occasionally Twitter does act in line with its terms of service, which is quite nice: it does ban people who are behaving inappropriately, but not necessarily illegally, on its platform. However, if it is required to implement that across the board for everybody, it will be far easier for Twitter to say, “We’ve sacked all our moderators—we do not have enough people to be able to do this job—so we will just take it all out of the terms of service. The terms of service will say, ‘We will ban people for sharing illegal content, full stop.’” We will end up in a worse situation than we are currently in, so the reliance on terms of service causes me a big, big problem.
Turning to amendment 100, dealing specifically with the accessibility of this feature for child users, I appreciate the ministerial clarification, and agree that my amendment could have been better worded and potentially causes some problems. However, can the Minister talk more about the level of accessibility? I would like children to be able to see a version of the terms of service that is age-appropriate, so that they understand what is expected of them and others on the platform, and understand when and how they can make a report and how that report will be acted on. The kids who are using Discord, TikTok or YouTube are over 13—well, some of them are—so they are able to read and understand, and they want to know how to make reports and for the reporting functions to be there. One of the biggest complaints we hear from kids is that they do not know how to report things they see that are disturbing.
A requirement for children to have an understanding of how reporting functions work, particularly on social media platforms where people are interacting with each other, and of the behaviour that is expected of them, does not mean that there cannot be a more in-depth and detailed version of the terms of service, laying out potential punishments using language that children may not be able to understand. The amendment would specifically ensure that children have an understanding of that.
We want children to have a great time on the internet. There are so many ace things out there and wonderful places they can access. Lego has been in touch, for example; its website is really pretty cool. We want kids to be able to access that stuff and communicate with their friends, but we also want them to have access to features that allow them to make reports that will keep them safe. If children are making reports, then platforms will say, "Actually, there is a real problem with this because we are getting loads of reports about it." They will then be able to take action. They will be able to have proper risk assessments in place because they will be able to understand what is disturbing people and what is causing the problems.
I am glad to hear the Minister’s words. If he were even more clear about the fact that he would expect children to be able to understand and access information about keeping themselves safe on the platforms, then that would be even more helpful.
On terms and conditions, it is clearly best practice to have a different level of explanation that ensures children can fully understand what they are getting into. The hon. Lady talked about the fact that children do not know how to report harm. Frankly, judging by a lot of conversations we have had in our debates, we do not know how to report harm because it is not transparent. On a number of platforms, how to do that is very opaque.
A wider aim of the Bill is to make sure that platforms have better reporting patterns. I encourage platforms to do exactly what the hon. Member for Aberdeen North says: to engage children, and to engage parents. Parents are well placed to engage with reporting, and it is important that we do not forget parenting in the equation of how Government and platforms are acting. I hope that is clear to the hon. Lady. We are mainly relying on terms and conditions for adults, but the Bill imposes a wider set of protections for children on the platforms.
Amendment 4 agreed to.
Amendment made: 5, in clause 11, page 11, line 15, after “(5)” insert “, (6A)”.—(Paul Scully.)
This amendment ensures that the duty in clause 11(8) to have clear and accessible terms of service applies to the terms of service mentioned in the new subsection inserted by Amendment 4.
Clause 11, as amended, ordered to stand part of the Bill.
Clause 12
Adults’ risk assessment duties
Question proposed, That the clause stand part of the Bill.
With this it will be convenient to discuss the following:
Clause 13 stand part.
Government amendments 18, 23 to 25, 32, 33 and 39.
Clause 55 stand part.
Government amendments 42 to 45, 61 to 66, 68 to 70, 74, 80, 85, 92, 51 and 52, 54, 94 and 60.
To protect free speech and remove any possibility that the Bill could cause tech companies to censor legal content, I seek to remove the so-called “legal but harmful” duties from the Bill. These duties are currently set out in clauses 12 and 13 and apply to the largest in-scope services. They require services to undertake risk assessments for defined categories of harmful but legal content, before setting and enforcing clear terms of service for each category of content.
I share the concerns raised by Members of this House and more broadly that these provisions could have a detrimental effect on freedom of expression. It is not right that the Government define what legal content they consider harmful to adults and then require platforms to risk assess for that content. Doing so may encourage companies to remove legal speech, undermining this Government’s commitment to freedom of expression. That is why these provisions must be removed.
At the same time, I recognise the undue influence that the largest platforms have over our public discourse. These companies get to decide what we do and do not see online. They can arbitrarily remove a user's content or ban them altogether without offering any real avenues of redress to users. On the flip side, even when companies have terms of service, these are often not enforced, as we have discussed. That was the case after the Euro 2020 final, when footballers were subjected to the most appalling abuse, despite most platforms clearly prohibiting it. That is why I am introducing duties to improve the transparency and accountability of platforms and to protect free speech through new clauses 3 and 4. Under these duties, category 1 platforms will only be allowed to remove or restrict access to content, or ban or suspend users, when that is in accordance with their terms of service or where they face another legal obligation. That protects against the arbitrary removal of content.
Companies must ensure that their terms of service are consistently enforced. If companies’ terms of service say that they will remove or restrict access to content, or will ban or suspend users in certain circumstances, they must put in place proper systems and processes to apply those terms. That will close the gap between what companies say they will do and what they do in practice. Services must ensure that their terms of service are easily understandable to users and that they operate effective reporting and redress mechanisms, enabling users to raise concerns about a company’s application of the terms of service. We will debate the substance of these changes later alongside clause 18.
Clause 55 currently defines
“content that is harmful to adults”,
including
“priority content that is harmful to adults”
for the purposes of this legislation. As this concept would be removed with the removal of the adult safety duties, this clause will also need to be removed.
My hon. Friend mentioned earlier that companies will not be able to remove content if it is not part of their safety duties or if it was not a breach of their terms of service. I want to be sure that I heard that correctly and to ask whether Ofcom will be able to risk assess that process to ensure that companies are not over-removing content.
Absolutely. I will come on to Ofcom in a second and respond directly to his question.
The removal of clauses 12, 13 and 55 from the Bill, if agreed by the Committee, will require a series of further amendments to remove references to the adult safety duties elsewhere in the Bill. These amendments are required to ensure that the legislation is consistent and, importantly, that platforms, Ofcom and the Secretary of State are not held to requirements relating to the adult safety duties that we intend to remove from the Bill. The amendments remove requirements on platforms and Ofcom relating to the adult safety duties. That includes references to the adult safety duties in the duties to provide content reporting and redress mechanisms and to keep records. They also remove references to content that is harmful to adults from the process for designating category 1, 2A and 2B companies. The amendments in this group relate mainly to the process for the category 2B companies.
I also seek to amend the process for designating category 1 services to ensure that they are identified based on their influence over public discourse, rather than with regard to the risk of harm posed by content that is harmful to adults. These changes will be discussed when we debate the relevant amendments alongside clause 82 and schedule 11. The amendments will remove powers that will no longer be required, such as the Secretary of State’s ability to designate priority content that is harmful to adults. As I have already indicated, we intend to remove the adult safety duties and introduce new duties on category 1 services relating to transparency, accountability and freedom of expression. While they will mostly be discussed alongside clause 18, amendments 61 to 66, 68 to 70 and 74 will add references to the transparency, accountability and freedom of expression duties to schedule 8. That will ensure that Ofcom can require providers of category 1 services to give details in their annual transparency reports about how they comply with the new duties. Those amendments define relevant content and consumer content for the purposes of the schedule.
We will discuss the proposed transparency and accountability duties that will replace the adult safety duties in more detail later in the Committee’s deliberations. For the reasons I have set out, I do not believe that the current adult safety duties with their risks to freedom of expression should be retained. I therefore urge the Committee that clauses 12, 13 and 55 do not stand part and instead recommend that the Government amendments in this group are accepted.
Before we proceed, I emphasise that we are debating clause 13 stand part as well as the litany of Government amendments that I read out.
Clause 12 is extremely important because it outlines the platforms’ duties in relation to keeping adults safe online. The Government’s attempts to remove the clause through an amendment that thankfully has not been selected are absolutely shocking. In addressing Government amendments 18, 23, 24, 25, 32, 33 and 39, I must ask the Minister: exactly how will this Bill do anything to keep adults safe online?
In the original clause 12, companies had to assess the risk of harm to adults and the original clause 13 outlined the means by which providers had to report these assessments back to Ofcom. This block of Government amendments will make it impossible for any of us—whether that is users of a platform or service, researchers or civil society experts—to understand the problems that arise on these platforms. Labour has repeatedly warned the Government that this Bill does not go far enough to consider the business models and product design of platforms and service providers that contribute to harm online. By tabling this group of amendments, the Government are once again making it incredibly difficult to fully understand the role of product design in perpetuating harm online.
We are not alone in our concerns. Colleagues from Carnegie UK Trust, who are a source of expertise to hon. Members across the House when it comes to internet regulation, have raised their concerns over this grouping of amendments too. They have raised specific concerns about the removal of the transparency obligation, which Labour has heavily pushed for in previous Bill Committees.
Previously, service providers had been required to inform customers of the harms their risk assessment had detected, but the removal of this risk assessment means that users and consumers will not have the information they need to assess the nature of the risk on the platform. The Minister may point to the Government's approach in relation to the new content duties in platforms' and providers' terms of service, but we know that there are risks arising from the fact that there is no minimum content specified for the terms of service for adults, although of course all providers will have to comply with the illegal content duties.
This approach, like the entire Bill, is already overly complex—that is widely recognised by colleagues across the House and is the view of many stakeholders too. In tabling this group of amendments, the Minister is showing his ignorance. Does he really think that all vulnerabilities to harm online simply disappear at the age of 18? By pushing these amendments, which seek to remove the protections against content that is legal but harmful to adults, the Minister is, in effect, suggesting that adults are not susceptible to harm and that risk assessments are therefore simply not required. That is an extremely narrow-minded view to take, so I must push the Minister further. Does he recognise that many young, and older, adults are still highly likely to be affected by suicide and self-harm messaging, eating disorder content, disinformation and abuse, which will all be untouched by these amendments?
Labour has been clear throughout the passage of the Bill that we need to see more, not less, transparency and protection from online harm for all of us—whether adults or children. These risk assessments are absolutely critical to the success of the Online Safety Bill and I cannot think of a good reason why the Minister would not support users in being able to make an assessment about their own safety online.
We have supported the passage of the Bill, as we know that keeping people safe online is a priority for us all and we know that the perfect cannot be the enemy of the good. The Government have made some progress towards keeping children safe, but they clearly do not consider it their responsibility to do the same for adults. Ultimately, platforms should be required to protect everyone: it does not matter whether they are a 17-year-old who falls short of being legally deemed an adult in this country, an 18-year-old or even an 80-year-old. Ultimately, we should all have the same protections and these risk assessments are critical to the online safety regime as a whole. That is why we cannot support these amendments. The Government have got this very wrong and we have genuine concerns that this wholesale approach will undermine how far the Bill will go to truly tackling harm online.
I will also make comments on clause 55 and the other associated amendments. I will keep my comments brief, as the Minister is already aware of my significant concerns over his Department’s intention to remove adult safety duties more widely. In the previous Bill Committee, Labour made it clear that it supports, and thinks it most important, that the Bill should clarify specific content that is deemed to be harmful to adults. We have repeatedly raised concerns about missing harms, including health misinformation and disinformation, but really this group of amendments, once again, will touch on widespread concerns that the Government’s new approach will see adults online worse off. The Government’s removal of the “legal but harmful” sections of the Online Safety Bill is a major weakening—not a strengthening—of the Bill. Does the Minister recognise that the only people celebrating these decisions will be the executives of big tech firms, and online abusers? Does he agree that this delay shows that the Government have bowed to vested interests over keeping users and consumers safe?
Labour is not alone in having these concerns. We are all pleased to see that child safety duties are still present in the Bill, but the NSPCC, among others, is concerned about the knock-on implications that may introduce new risks to children. Without adult safety duties in place, children will be at greater risk of harm if platforms do not identify and protect them as children. In effect, these plans will now place a significant greater burden on platforms to protect children than adults. As the Bill currently stands, there is a significant risk of splintering user protections that can expose children to adult-only spaces and harmful content, while forming grooming pathways for offenders, too.
The reality is that these proposals to deal with harms online for adults rely on the regulator ensuring that social media companies enforce their own terms and conditions. We already know and have heard that that can have an extremely damaging impact for online safety more widely, and we have only to consider the very obvious and well-reported case study involving Elon Musk’s takeover of Twitter to really get a sense of how damaging that approach is likely to be.
In late November, Twitter stopped taking action against tweets that were in violation of its coronavirus misinformation rules. The company had suspended at least 11,000 accounts under that policy, which was designed to remove accounts posting demonstrably false or misleading content relating to covid-19 that could lead to harm. The company operated a five-strike policy, and the impact on public health around the world of removing that policy is likely to be tangible. The situation also raises questions about the platform's other misinformation policies. As of December 2022, they remain active, but for how long remains unclear.
Does the Minister recognise that as soon as they are inconvenient, platforms will simply change their terms and conditions, and terms of service? We know that simply holding platforms to account for their terms and conditions will not constitute robust enough regulation to deal with the threat that these platforms present, and I must press the Minister further on this point.
My hon. Friend is making an excellent speech. I share her deep concerns about the removal of these clauses. The Government have taken this tricky issue of the concept of “legal but harmful”—it is a tricky issue; we all acknowledge that—and have removed it from the Bill altogether. I do not think that is the answer. My hon. Friend makes an excellent point about children becoming 18; the day after they become 18, they are suddenly open to lots more harmful and dangerous content. Does she also share my concern about the risks of people being drawn towards extremism, as well as disinformation and misinformation?
My hon. Friend makes a valid point. This is not just about misinformation and disinformation; it is about leading people to really extreme, vile content on the internet. As we all know, that is a rabbit warren. That situation does not change as soon as a 17-year-old turns 18 on their 18th birthday; they are not suddenly immune to this horrendous content. The rules need to be there to protect all of us.
As we have heard, terms and conditions can change overnight. Stakeholders have raised the concern that, if faced with a clearer focus on their terms of service, platforms and providers may choose to make their terms of service shorter, in an attempt to cut out references to harmful material for which, if it were left undealt with, they could be held liable.
In addition, the fact that there is no minimum requirement in the regime means that companies have complete freedom to set terms of service for adults, which may not reflect the risks to adults on that service. At present, service providers do not even have to include terms of service in relation to the list of harmful content proposed by the Government for the user empowerment duties—an area we will come on to in more detail shortly as we address clause 14. The Government’s approach and overreliance on terms of service, which as we know can be so susceptible to rapid change, is the wrong approach. For that reason, we cannot support these amendments.
I would just say, finally, that none of us was happy with the term “legal but harmful”. It was a phrase we all disliked, and it did not encapsulate exactly what the content is or includes. Throwing the baby out with the bathwater is not the way to tackle that situation. My hon. Friend the Member for Batley and Spen is right that this is a tricky area, and it is difficult to get it right. We need to protect free speech, which is sacrosanct, but we also need to recognise that there are so many users on the internet who do not have access to free speech as a result of being piled on or shouted down. Their free speech needs to be protected too. We believe that the clauses as they stand in the Bill go some way to making the Bill a meaningful piece of legislation. I urge the Minister not to strip them out, to do the right thing and to keep them in the Bill.
Throughout the consideration of the Bill, I have been clear that I do not want it to end up simply being the keep MPs safe on Twitter Bill. That is not what it should be about. I did not mean that we should therefore take out everything that protects adults; what I meant was that we need to have a big focus on protecting children in the Bill, which thankfully we still do. For all our concerns about the issues and inadequacies of the Bill, it will go some way to providing better protections for children online. But saying that it should not be the keep MPs safe on Twitter Bill does not mean that it should not keep MPs safe on Twitter.
I understand how we have got to this situation. What I cannot understand is the Minister’s being willing to stand up there and say, “We can’t have these clauses because they are a risk to freedom of speech.” Why are they in the Bill in the first place if they are such a big risk to freedom of speech? If the Government’s No. 1 priority is making sure that we do not have these clauses, why did they put them in it? Why did it go through pre-legislative scrutiny? Why were they in the draft Bill? Why were they in the Bill? Why did they agree with them in Committee? Why did they agree with them on Report? Why have we ended up in a situation where, suddenly, there is a massive epiphany that they are a threat to freedom of speech and therefore we cannot possibly have them?
What is it that people want to say that they will be banned from saying as a result of this Bill? What is it that freedom of speech campaigners are so desperate to be able to say online? Do they want to promote self-harm on platforms? Is that what people want to do? Is that what freedom of speech campaigners are out for? They are now allowed to do that as a result of the Bill.
I believe that the triple shield is being put in place of "legal but harmful". That will enable users to put in a layer of protection so that they can actually take control. But the illegal content still has to be taken down: anything that promotes self-harm is illegal content and would still have to be removed. The problem with the way it was before is that we had a Secretary of State telling us what could be said out there and what could not. What may offend the hon. Lady may not offend me, and vice versa. We have to be very careful of that. It is so important that we protect free speech. We are now giving control to each individual who uses the internet.
The promotion of self-harm is not illegal content; people are now able to do that online—congratulations, great! The promotion of incel culture is not illegal content, so this Bill will now allow people to do that online. It will allow terms of service that do not require people to be banned for promoting incel culture, self-harm, not wearing masks and not getting a covid vaccine. It will allow the platforms to allow people to say these things. That is what has been achieved by campaigners.
The Bill is making people less safe online. We will continue to have the same problems that we have with people being driven to suicide and radicalised online as a result of the changes being made in this Bill. I know the Government have been leaned on heavily by the free speech lobby. I still do not know what people want to say that they cannot say as a result of the Bill as it stands. I do not know. I cannot imagine that anybody is not offended by content online that drives people to hurt themselves. I cannot imagine anybody being okay and happy with that. Certainly, I imagine that nobody in this room is okay and happy with that.
These people have won this war over the supposed attack on free speech. They have won a situation in which they are able to promote misogynistic incel culture and health disinformation, and in which they are able to say that the covid vaccine is entirely about putting microchips in people. People are allowed to say that now—great! That is what has been achieved, and it is a societal issue. We have a generational issue whereby people online are being exposed to harmful content. That will now continue.
It is not just a generational, societal thing—it is not just an issue for society as a whole that these conspiracy theories are pervading. Some of the conspiracy theories around antisemitism are unbelievably horrific, but they do not step over into illegality, or David Icke would not be able to stand up and suggest that the world is run by lizard people—who happen to be Jewish. Under the previous version of the Bill, he would not have been allowed to say that, because it would have been considered harmful content. But now he is. That is fine. He is allowed to say it because this Bill refuses to take action on it.
Can the hon. Lady tell me where in the Bill, as it is currently drafted—so, unamended—it requires platforms to remove legal speech?
It allows the platforms to do that. It allows them to act, and it requires legal but harmful content to be taken into account. It requires the platforms to consider, through risk assessments, the harm done to adults by content that is legal but massively harmful.
The hon. Lady is right: the Bill does not require the removal of legal speech. Platforms must take the issue into account—it can be risk assessed—but it is ultimately their decision. I think the point has been massively overstated that, somehow, previously, Ofcom had the power to strike down legal but harmful speech that was not a breach of either terms of service or the law. It never had that power.
Why do the Government now think that there is a risk to free speech? If Ofcom never had that power, if it was never an issue, why are the Government bothered about that risk—it clearly was not a risk—to free speech? If that was never a consideration, it obviously was not a risk to free speech, so I am now even more confused as to why the Government have decided that they will have to strip this measure out of the Bill because of the risk to free speech, because clearly it was not a risk in this situation. This is some of the most important stuff in the Bill for the protection of adults, and the Government are keen to remove it.
The hon. Member is making an excellent and very passionate speech, and I commend her for that. Would she agree with one of my concerns, which is about the message that this sends to the public? It is almost as if the Government were acknowledging that there was a problem with legal but harmful content—we can all, hopefully, acknowledge that that is a problem, even though we know it is a tricky one to tackle—but, by removing these clauses from the Bill, are now sending the message, "We were trying to clean up the wild west of the internet but, actually, we are not that bothered anymore."
The hon. Lady is absolutely right. We have all heard from organisations and individuals who have had their lives destroyed as a result of “legal but harmful”—I don’t have a better phrase for it—content online and of being radicalised by being driven deeper and deeper into blacker and blacker Discord servers, for example, that are getting further and further right wing.
A number of the people who are radicalised—who are committing terror attacks, or being referred to the Prevent programme because they are at risk of committing terror attacks—are not so much at the far-right end of extremism any more, or driven by incredible levels of religious extremism, but have mixed-up or unclear ideological drivers. It is not the same situation as it was before, because people are being radicalised by the stuff that they find online. They are being radicalised into situations where they "must do something"—they "must take some action"—because of the culture change in society.
The hon. Member is making a powerful point. Just a few weeks ago, I asked the Secretary of State for Digital, Culture, Media and Sport, at the Dispatch Box, whether the horrendous and horrific content that led a man to shoot and kill five people in Keyham—in the constituency of my hon. Friend the Member for Plymouth, Sutton and Devonport (Luke Pollard)—would be allowed to remain and perpetuate online as a result of the removal of these clauses from the Bill. I did not get a substantial answer then, but we all know that the answer is yes.
That is the thing: this Bill is supposed to be the Online Safety Bill. It is supposed to be about protecting people from the harm that can be done to them by others. It is also supposed to be about protecting people from the radicalisation and harm that they can end up in. It is supposed to make a difference. It is supposed to be a game changer and a world leader.
Although, absolutely, I recognise the importance of the child-safety duties in the clauses and the change that that will have, when people turn 18 they do not suddenly become different humans. They do not wake up on their 18th birthday as a different person from the one that they were before. They should not have to go from that level of protection, prior to 18, to being immediately exposed to comments and content encouraging them to self-harm, and to all of the negative things that we know are present online.
I understand some of the arguments the hon. Lady is making, but that is a poor argument given that the day people turn 17 they can learn to drive or the day they turn 16 they can do something else. There are lots of these things, but we have to draw a line in the sand somewhere. Eighteen is when people become adults. If we do not like that, we can change the age, but there has to be a line in the sand. I agree with much of what the hon. Lady is saying, but that is a poor argument. I am sorry, but it is.
I do not disagree that overnight changes are involved, but the problem is that we are going from a certain level of protection to nothing; there will be a drastic, dramatic shift. We will end up with any vulnerable person who is over 18 being potentially subject to all this content online.
I still do not understand what people think they will have won as a result of having the provisions removed from the Bill. I do not understand how people can say, “This is now a substantially better Bill, and we are much freer and better off as a result of the changes.” That is not the case; removing the provisions will mean the internet continuing to be unsafe—much more unsafe than it would have been under the previous iteration of the Bill. It will ensure that more people are harmed as a result of online content. It will absolutely—
No, I will not give way again. The change will ensure that people can absolutely say what they like online, but the damage and harm that it will cause are not balanced by the freedoms that have been won.
As a Back-Bench Member of Parliament, I recommended that the "legal but harmful" provisions be removed from the Bill. When I chaired the Joint Committee of both Houses of Parliament that scrutinised the draft Bill, it was the unanimous recommendation of the Committee that the "legal but harmful" provisions be removed. As a Minister at the Dispatch Box, I said that I thought "legal but harmful" was a problematic term and we should not use it. The term "legal but harmful" does not exist in the Bill, and has never existed in the Bill, but it has provoked a debate that has caused huge confusion. There is a belief, which we have heard expressed in debate today, that somehow there are categories of content that Ofcom can deem suitable for removal, whether they are unlawful or not.
During the Bill’s journey from publication in draft to where we are today, it has become more specific. Rather than our relying on general duties of care, written into the Bill are areas of priority illegal activity that the companies must proactively look for, monitor and mitigate. In the original version of the Bill, that included only terrorist content and child sexual exploitation material, but on the recommendation of the Joint Committee, the Government moved in the direction of writing into the Bill at schedule 7 offences in law that will be the priority illegal offences.
The list of offences is quite wide, and it is more comprehensive than any other such list in the world in specifying exactly what offences are in scope. There is no ambiguity for the platforms as to what offences are in scope. Stalking, harassment and inciting violence, which are all serious offences, as well as the horrible abuse a person might receive as a consequence of their race or religious beliefs, are written into the Bill as priority illegal offences.
There has to be a risk assessment of whether such content exists on platforms and what action platforms should take. They are required to carry out such a risk assessment, although that was never part of the Bill before. The “legal but harmful” provisions in some ways predate that. Changes were made; the offences were written into the Bill, risk assessments were provided for, and Parliament was invited to create new offences and write them into the Bill, if there were categories of content that had not been captured. In some ways, that creates a democratic lock that says, “If we are going to start to regulate areas of speech, what is the legal reason for doing that? Where is the legal threshold? What are the grounds for us taking that decision, if it is something that is not already covered in platforms’ terms of service?”
We are moving in that direction. We have a schedule of offences that we are writing into the Bill, and those priority illegal offences cover most of the most serious behaviour and most of the concerns raised in today's debate. On top of that, there is a risk assessment of platforms' terms of service. When we look at the terms of service of the companies—the major platforms we have been discussing—we see that they set a higher bar again than the priority illegal harms. On the whole, platforms do not have policies that say, "We won't do anything about this illegal activity, race hate, incitement to violence, or promotion or glorification of terrorism." The problem is that, although they have terms of service, they do not enforce them. Therefore, we are not relying on terms of service. What we are saying, and what the Bill says, is that the minimum safety standards are based on the offences written into the Bill. In addition, we have risk assessment, and we have enforcement based on the terms of service.
There may be a situation in which there is a category of content that is not in breach of a platform’s terms of service and not included in the priority areas of illegal harm. It is very difficult to think of what that could be—something that is not already covered, and over which Ofcom would not have power. There is the inclusion of the new offences of promoting self-harm and suicide. That captures not just an individual piece of content, but the systematic effect of a teenager like Molly Russell—or an adult of any age—being targeted with such content. There are also new offences for cyber-flashing, and there is Zach’s law, which was discussed in the Chamber on Report. We are creating and writing into the Bill these new priority areas of illegal harm.
Freedom of speech groups' concern was that the Government could have a secret list of extra things that they also wanted risk-assessed, rather than enforcement being clearly based either on the law or on clear terms of service. It is difficult to think of categories of harm that are not already captured in terms of service or priority areas of illegal harm, and that would be on such a list. I think that is why the change was made. For freedom of speech campaigners, there was a concern about exactly what enforcement was based on: "Is it based on the law? Is it based on terms of service? Or is it based on something else?"
I personally believed that the “legal but harmful” provisions in the Bill, as far as they existed, were not an infringement on free speech, because there was never a requirement to remove legal speech. I do not think the removal of those clauses from the Bill suddenly creates a wild west in which no enforcement will take place at all. There will be very effective enforcement based on the terms of service, and on the schedule 7 offences, which deal with the worst kinds of illegal activity; there is a broad list. The changes make it much clearer to everybody—platforms and users alike, and Ofcom—exactly what the duties are, how they are enforced and what they are based on.
For future regulation, we have to use this framework, so that we can say that when we add new offences to the scope of the legislation, they are offences that have been approved by Parliament and have gone through a proper process, and are a necessary addition because terms of service do not cover them. That is a much clearer and better structure to follow, which is why I support the Government amendments.
I cannot help but see the Government’s planned removal of clauses 12 and 13 as essentially wrecking amendments to the Bill. Taking those provisions out of the Bill makes it a Bill not about online safety, but about child protection. We have not had five years or so of going backwards and forwards, and taken the Bill through Committee and then unprecedentedly recommitted it to Committee, in order to fundamentally change what the Bill set out to do. The fact that, at this late stage, the Government are trying to take out these aspects of the Bill melts my head, for want of a better way of putting it.
My hon. Friend the Member for Batley and Spen was absolutely right when she talked about what clauses 12 and 13 do. In effect, they are an acknowledgement that adults are also harmed online, and have different experiences online. I strongly agree with the hon. Member for Aberdeen North about this not being the protect MPs from being bullied on Twitter Bill, because obviously the provisions go much further than that, but it is worth noting, in the hope that it is illustrative to Committee members, the very different experience that the Minister and I have in using Twitter. I say that as a woman who is LGBT and Jewish—and although I would not suggest that it should be a protected characteristic, the fact that I am ginger probably plays a part as well. He and I could do the same things on Twitter on the same day and have two completely different experiences of that platform.
The risk-assessment duties set out in clause 12, particularly in subsection (5)(d) to (f), ask platforms to consider the different ways in which different adult users might experience them. Platforms have a duty to attempt to keep certain groups of people, and categories of user, safe. When we talk about free speech, the question is: freedom of speech for whom, and at what cost? Making it easier for people to perpetuate, for example, holocaust denial on the internet—a category of speech that is lawful but awful, as it is not against the law in this country to deny that the holocaust happened—makes it much less likely that I, or other Jewish people, will want to use the platform.
The hon. Member makes a powerful point about the different ways in which people experience things. That tips over into real-life abusive interactions, and goes as far as terrorist incidents in some cases. Does she agree that protecting people’s freedom of expression and safety online also protects people in their real, day-to-day life?
I could not agree more. I suppose that is why this aspect of the Bill is so important, not just to me but to all those categories of user. I mentioned paragraphs (d) to (f), which would require platforms to assess exactly that risk. This is not about being offended. Personally, I have the skin of a rhino. People can say most things to me and I am not particularly bothered by it. My concern is where things that are said online are transposed into real-life harms. I will use myself as an example. Online, we can see antisemitic and conspiratorial content, covid misinformation, and covid misinformation that meets with antisemitism and conspiracies. When people decide that I, as a Jewish Member of Parliament, am personally responsible for George Soros putting a 5G chip in their arm, or whatever other nonsense they have become persuaded by on the internet, that is exactly the kind of thing that has meant people coming to my office armed with a knife. The kind of content that they were radicalised by on the internet led to their perpetrating a real-life, in-person harm. Thank God—Baruch Hashem—neither I nor my staff were in the office that day, but that could have ended very differently, because of the sorts of content that the Bill is meant to protect online users from.
The hon. Lady is talking about an incredibly important issue, but the Bill covers such matters as credible threats to life, incitement to violence against an individual, and harassment and stalking—those patterns of behaviour. Those are public order offences, and they are in the Bill. I would absolutely expect companies to risk-assess for that sort of activity, and to be required by Ofcom to mitigate it. On her point about holocaust denial, first, the shield will mean that people can protect themselves from seeing stuff. The further question would be whether we create new offences in law, which can then be transposed across.
I accept the points that the hon. Member raised, but he is fundamentally missing the point. The categories of information and content that these people had seen and been radicalised by would not fall under the scope of public order offences or harassment. The person was not sending me harassing messages before they turned up at my office. Essentially, social media companies and other online platforms have to take measures to mitigate the risk of categories of offences that are illegal, whether or not they are in the Bill. I am talking about what clauses 12 and 13 covered, whether we call it the “legal but harmful” category or “lawful but awful”. Whatever we name those provisions, by taking out of the Bill clauses relating to the “legal but harmful” category, we are opening up an area of harm that already exists, that has a real-world impact, and that the Bill was meant to go some way towards addressing.
The provisions have taken out the risk assessments that need to be done. The Bill says,
“(e) the level of risk of functionalities of the service facilitating the presence or dissemination of priority content that is harmful to adults, identifying and assessing those functionalities that present higher levels of risk;
(f) the different ways in which the service is used, and the impact of such use on the level of risk of harm that might be suffered by adults;
(g) the nature, and severity, of the harm that might be suffered by adults”.
Again, the idea that we are talking about offence, and that the clauses need to be taken out to protect free speech, is fundamentally nonsense.
I have already mentioned holocaust denial, but it is also worth mentioning health-related disinformation. We have already seen real-world harms from some of the covid misinformation online. It led to people including Piers Corbyn turning up outside Parliament with a gallows, threatening to hang hon. Members for treason. Obviously, that was rightly dealt with by the police, but the kind of information and misinformation that he had been getting online and that led him to do that, which is legal but harmful, will now not be covered by the Bill.
I will also raise an issue I have heard about from a number of people dealing with cancer and conditions such as multiple sclerosis. People online try to discourage them from accessing the proper medical interventions for their illnesses, and instead encourage them to take more vitamin B or adopt a vegan diet. There are people with cancer who have died because they were encouraged online not to access cancer treatment—they were subject to lawful but awful categories of harm.
I wonder if the hon. Member saw the story online about the couple in New Zealand who refused to let their child have a life-saving operation because they could not guarantee that the blood used would not be from vaccinated people? Is the hon. Member similarly concerned that this has caused real-life harm?
I am aware of the case that the hon. Member mentioned. I appreciate that I am probably testing the patience of everybody in the Committee Room, but I want to be clear just how abhorrent I find it that these provisions are coming out of the Bill. I am trying to be restrained, measured and reasonably concise, but that is difficult when there are so many parts of the change that I find egregious.
My final point is on self-harm and suicide content. For men under the age of 45, suicide is the biggest killer. In the Bill, we are doing as much as we can to protect young people from that sort of content. My real concern is this: many young people are being protected by the Bill’s provisions relating to children. They are perhaps waiting for support from child and adolescent mental health services, which are massively oversubscribed. The minute they tick over into 18, fall off the CAMHS waiting list and go to the bottom of the adult mental health waiting list—they may have to wait years for treatment of various conditions—there is no requirement or duty on the social media companies and platforms to do risk assessments.