ONLINE SAFETY BILL (First sitting) Debate
Damian Collins (Conservative - Folkestone and Hythe)
Public Bill Committees

I wish to add some brief words in support of the Government’s proposals and to build on the comments from Members of all parties.
We know that access to extreme and abusive pornography is a direct factor in violence against women and girls. We see that play out in the court system every day: people on trial claim to have watched and become addicted to this type of pornography, and to have sought to act it out in their relationships, which has resulted in the deaths of women. The platforms already have technology that allows them to establish the age of their users. The Bill seeks to ensure that they use it for a good end, so I thoroughly support it. I thank the Minister.
There are two very important and distinct issues here. One is age verification. The platforms ask adults who have identification to verify their age; if they cannot verify their age, they cannot access the service. Platforms have a choice within that. They can design their service so that it does not have adult content, in which case they may not need to build in verification systems—the platform polices itself. However, a platform such as Twitter, which allows adult content on an app that is open to children, has to build in those systems. As the hon. Member for Aberdeen North mentioned, people will also have to verify their identity to access a service such as OnlyFans, which is an adult-only service.
On that specific point, I searched on Twitter for the name—first name and surname—of a politician to see what people had been saying, because I knew that he was in the news. The pictures that I saw! That was purely by searching for the name of the politician; it is not as though people are necessarily seeking such stuff out.
On these platforms, the age verification requirements are clear: they must age-gate the adult content or get rid of it. They must do one or the other. Rightly, the Bill does not specify technologies. Technologies are available. The point is that a company must demonstrate that it is using an existing and available technology or that it has some other policy in place to remedy the issue. It has a choice, but it cannot do nothing. It cannot say that it does not have a policy on it.
Age assurance is always more difficult for children, because they do not have the same sort of ID that adults have. However, technologies exist: for instance, Yoti uses facial scanning. Companies do not have to do that either; they have to demonstrate that they do something beyond self-certification at the point of signing up. That is right. Companies may also demonstrate what they do to take robust action to close the accounts of children they have identified on their platforms.
If a company’s terms of service state that people must be 13 or over to use the platform, the company is inherently stating that the platform is not safe for someone under 13. What does it do to identify people who sign up? What does it do to identify people once they are on the platform, and what action does it then take? The Bill gives Ofcom the powers to understand those things and to force a change of behaviour and action. That is why—to the point made by the hon. Member for Pontypridd—age assurance is a slightly broader term, but companies can still extract a lot of information to determine the likely age of a child and take the appropriate action.
I think we are all in agreement, and I hope that the Committee will accept the amendments.
Amendment 1 agreed to.
Amendments made: 2, in clause 11, page 10, line 25, leave out
“(for example, by using age assurance)”.
This amendment omits words which are no longer necessary in subsection (3)(b) of clause 11 because they are dealt with by the new subsection inserted by Amendment 3.
Amendment 3, in clause 11, page 10, line 26, at end insert—
“(3A) Age assurance to identify who is a child user or which age group a child user is in is an example of a measure which may be taken or used (among others) for the purpose of compliance with a duty set out in subsection (2) or (3).”—(Paul Scully.)
This amendment makes it clear that age assurance measures may be used to comply with duties in clause 11(2) as well as (3) (safety duties protecting children).
Does the hon. Lady accept that the amendments would give people control over the bit of the service that they do not currently have control of? A user can choose what to search for and which members to engage with, and can block people. What they cannot do is stop the recommendation feeds recommending things to them. The shields intervene there, which gives user protection, enabling them to say, “I don’t want this sort of content recommended to me. On other things, I can either not search for them, or I can block and report offensive users.” Does she accept that that is what the amendment achieves?
I think that that is what the clause achieves, rather than the amendments that I have tabled. I recognise that the clause achieves that, and I have no concerns about it. It is good that the clause does that; my concern is that it does not take the second step of blocking access to certain features on the platform. For example, somebody could be having a great time on Instagram looking at various people’s pictures or whatever, but they may not want to be bombarded with private messages. They have no ability to turn off the private messaging section.
They can disengage from the user who is sending the messages. On a Meta platform, those messages will often be from someone they are following or engaging with. They can block them, and the platforms have the ability, in most in-app messaging services, to see whether somebody is sending priority illegal content to other users. They can scan for that and mitigate it as well.
That is exactly why users should be able to block private messaging in general. Someone on Twitter can say, “I’m not going to receive a direct message from anybody I don’t follow.” Twitter users have the opportunity to do that, but there is not necessarily that opportunity on all platforms. We are asking for those things to be included, so that the provider can say, “You’re using private messaging inappropriately. Therefore, we are blocking all your access to private messaging,” or, “You are being harmed as a result of accessing private messaging. Therefore, we are blocking your access to any private messaging. You can still see pictures on Instagram, but you can no longer receive any private messages, because we are blocking your access to that part of the site.” That is very different from blocking a user’s access to certain kinds of content, for example. I agree that that should happen, but it is about the functionalities and stopping access to some of them.
We are not asking Ofcom to mandate that platforms take this measure; they could still take the slightly more nuclear option of banning somebody entirely from their service. However, if this option is included, we could say, “Your service is doing pretty well, but we know there is an issue with private messaging. Could you please take action to ensure that those people who are using private messaging to harm children no longer have access to private messaging and are no longer able to use the part of the service that enables them to do these things?” Somebody might be doing a great job of making games in Roblox, but they may be saying inappropriate things. It may be proportionate to block that person entirely, but it may be more proportionate to block their access to voice chat, so that they can no longer say those things, or direct message or contact anybody. It is about proportionality and recognising that the service is not necessarily inherently harmful but that specific parts of it could be.
I completely agree. The hon. Member put that much better than I could. I was trying to formulate that point in my head, but had not quite got there, so I appreciate her intervention. She is right: we should not put the onus on a victim to deal with a situation. Once they have seen a message from someone, they can absolutely block that person, but that person could create another account and send them messages again. People should be able to choose, and to say, “No, I don’t want anyone to be able to send me private messages,” or “I don’t want any private messages from anyone I don’t know.” We could put in those safeguards.
I am talking about adding another layer to the clause, so that companies would not necessarily have to demonstrate that it was proportionate to ban a person from using their service, as that may be too high a bar—a concern I will come to later. They could, however, demonstrate that it was proportionate to ban a person from using private messaging services, or from accessing livestreaming features. There has been a massive increase in self-generated child sexual abuse images, and a huge amount has come from livestreaming. There are massive risks with livestreaming features on services.
Livestreaming is not always bad. Someone could livestream themselves showing how to make pancakes. There is no issue with that—that is grand—but livestreaming is being used by bad actors to manipulate children into sharing videos of themselves, and once they are on the internet, they are there forever. It cannot be undone. If we were able to ban vulnerable users—my preferred option would be all children—from accessing livestreaming services, they would be much safer.
The hon. Lady is talking about extremely serious matters. My expectation is that Ofcom would look at all of a platform’s features when risk-assessing the platform and enforcing safety, and in-app messaging services would not be exempt. Platforms have to demonstrate what they would do to mitigate harmful and abusive behaviour, and that they would take action against the accounts responsible.
Absolutely, I agree, but the problem is with the way the Bill is written. It does not suggest that a platform could stop somebody accessing a certain part of a service. The Bill refers to content, and to the service as a whole, but it does not have that middle point that I am talking about.
A platform is required to demonstrate to Ofcom what it would do to mitigate activity that would breach the safety duties. It could do that through a feature that it builds in, or it may take a more draconian stance and say, “Rather than turning off certain features, we will just suspend the account altogether.” That could be discussed in the risk assessments, and agreed in the codes of practice.
What I am saying is that the clause does not actually allow that middle step. It does not explicitly say that somebody could be stopped from accessing private messaging. The only options are being banned from certain content, or being banned from the entire platform.
I absolutely recognise the hard work that Ofcom has done, and I recognise that it will work very hard to ensure that risks are mitigated, but the amendment ensures what the Minister intended with this legislation. I am not convinced that he intended there to be just the two options that I outlined. I think he intended something more in line with what I am suggesting in the amendment. It would be very helpful if the Minister explicitly said something in this Committee that makes it clear that Ofcom has the power to say to platforms, “Your risk assessment says that there is a real risk from private messaging”—or from livestreaming—“so why don’t you turn that off for all users under 18?” Ofcom should be able to do that.
Could the Minister be clear that that is the direction of travel he is hoping and intending that Ofcom will take? If he could be clear on that, and will recognise that the clause could have been slightly better written to ensure Ofcom had that power, I would be quite happy to not push the amendment to a vote. Will the Minister be clear about the direction he hopes will be taken?
Absolutely. The amendment I tabled regarding the accessibility of terms of service was designed to ensure that if the Government rely on terms of service, children can access those terms of service and see what risks they are exposing themselves to. We know that in reality children will not read these things. Adults do not read these things. I do not know what Twitter’s terms of service say, but I do know that Twitter managed to change its terms of service overnight, very easily and quickly. Companies could just say, “I’m a bit fed up with Ofcom breathing down my neck on this. I’m just going to change my terms of service, so that Ofcom will not take action on some of the egregious harm that has been done. If we just change our terms of service, we don’t need to bother. If we say that we are not going to ban transphobia on our platform—if we take that out of the terms of service—we do not need to worry about transphobia on our platform. We can just let it happen, because it is not in our terms of service.”
Does the hon. Lady agree that the Government are not relying solely on terms of service, but are rightly saying, “If you say in your terms of service that this is what you will do, Ofcom will make sure that you do it”? Ofcom will take on that responsibility for people, making sure that these complex terms of service are understood and enforced, but the companies still have to meet all the priority illegal harms objectives that are set out in the legislation. Offences that exist in law are still enforced on platforms, and risk-assessed by Ofcom as well, so if a company does not have a policy on race hate, we have a law on race hate, and that will apply.
It is absolutely the case that those companies still have to do a risk assessment, and a child risk assessment if they meet the relevant criteria. The largest platforms, for example, will still have to do a significant amount of work on risk assessments. However, every time a Minister stands up and talks about what they are requiring platforms and companies to do, they say, “Companies must stick to their terms of service. They must ensure that they enforce things in line with their terms of service.” If a company is finding it too difficult, it will just take the tough things out of its terms of service. It will take out transphobia; it will take out abuse. Twitter does not ban anyone for abuse anyway, it seems, but it will be easier for Twitter to say, “Ofcom is going to try to hold us to account for the fact that we are not getting rid of people for abusive but not illegal messages, even though we say in our terms of service, ‘You must act with respect’, or ‘You must not abuse other users’. We will just take that out of our terms of service so that we are not held to account for the fact that we are not following our terms of service.” Then, because the abuse is not illegal—because it does not meet that bar—those places will end up being even less safe than they are right now.
For example, occasionally Twitter does act in line with its terms of service, which is quite nice: it does ban people who are behaving inappropriately, but not necessarily illegally, on its platform. However, if it is required to implement that across the board for everybody, it will be far easier for Twitter to say, “We’ve sacked all our moderators—we do not have enough people to be able to do this job—so we will just take it all out of the terms of service. The terms of service will say, ‘We will ban people for sharing illegal content, full stop.’” We will end up in a worse situation than we are currently in, so the reliance on terms of service causes me a big, big problem.
Turning to amendment 100, dealing specifically with the accessibility of this feature for child users, I appreciate the ministerial clarification, and agree that my amendment could have been better worded and potentially causes some problems. However, can the Minister say more about the level of accessibility? I would like children to be able to see a version of the terms of service that is age-appropriate, so that they understand what is expected of them and others on the platform, and understand when and how they can make a report and how that report will be acted on. The kids who are using Discord, TikTok or YouTube are over 13—well, some of them are—so they are able to read and understand, and they want to know how to make reports and want the reporting functions to be there. One of the biggest complaints we hear from kids is that they do not know how to report things they see that are disturbing.
A requirement for children to have an understanding of how reporting functions work, particularly on social media platforms where people are interacting with each other, and of the behaviour that is expected of them, does not mean that there cannot be a more in-depth and detailed version of the terms of service, laying out potential punishments using language that children may not be able to understand. The amendment would specifically ensure that children have an understanding of that.
We want children to have a great time on the internet. There are so many ace things out there and wonderful places they can access. Lego has been in touch, for example; its website is really pretty cool. We want kids to be able to access that stuff and communicate with their friends, but we also want them to have access to features that allow them to make reports that will keep them safe. If children are making reports, then platforms will say, “Actually, there is a real problem with this, because we are getting loads of reports about it.” They will then be able to take action. They will be able to have proper risk assessments in place, because they will be able to understand what is disturbing people and what is causing the problems.
I am glad to hear the Minister’s words. If he were even more clear about the fact that he would expect children to be able to understand and access information about keeping themselves safe on the platforms, then that would be even more helpful.
To protect free speech and remove any possibility that the Bill could cause tech companies to censor legal content, I seek to remove the so-called “legal but harmful” duties from the Bill. These duties are currently set out in clauses 12 and 13 and apply to the largest in-scope services. They require services to undertake risk assessments for defined categories of harmful but legal content, before setting and enforcing clear terms of service for each category of content.
I share the concerns raised by Members of this House and more broadly that these provisions could have a detrimental effect on freedom of expression. It is not right that the Government define what legal content they consider harmful to adults and then require platforms to risk assess for that content. Doing so may encourage companies to remove legal speech, undermining this Government’s commitment to freedom of expression. That is why these provisions must be removed.
At the same time, I recognise the undue influence that the largest platforms have over our public discourse. These companies get to decide what we do and do not see online. They can arbitrarily remove a user’s content or ban them altogether without offering any real avenues of redress to users. On the flip side, even when companies have terms of service, these are often not enforced, as we have discussed. That was the case after the Euro 2020 final, when footballers were subject to the most appalling abuse, despite most platforms clearly prohibiting that. That is why I am introducing duties to improve the transparency and accountability of platforms and to protect free speech through new clauses 3 and 4. Under these duties, category 1 platforms will only be allowed to remove or restrict access to content, or ban or suspend users, when this is in accordance with their terms of service or where they face another legal obligation. That protects against the arbitrary removal of content.
Companies must ensure that their terms of service are consistently enforced. If companies’ terms of service say that they will remove or restrict access to content, or will ban or suspend users in certain circumstances, they must put in place proper systems and processes to apply those terms. That will close the gap between what companies say they will do and what they do in practice. Services must ensure that their terms of service are easily understandable to users and that they operate effective reporting and redress mechanisms, enabling users to raise concerns about a company’s application of the terms of service. We will debate the substance of these changes later alongside clause 18.
Clause 55 currently defines
“content that is harmful to adults”,
including
“priority content that is harmful to adults”
for the purposes of this legislation. As this concept would be removed with the removal of the adult safety duties, this clause will also need to be removed.
My hon. Friend mentioned earlier that companies will not be able to remove content if it is not part of their safety duties or a breach of their terms of service. I want to be sure that I heard that correctly, and to ask whether Ofcom will be able to risk assess that process to ensure that companies are not over-removing content.
Absolutely. I will come on to Ofcom in a second and respond directly to his question.
The removal of clauses 12, 13 and 55 from the Bill, if agreed by the Committee, will require a series of further amendments to remove references to the adult safety duties elsewhere in the Bill. These amendments are required to ensure that the legislation is consistent and, importantly, that platforms, Ofcom and the Secretary of State are not held to requirements relating to the adult safety duties that we intend to remove from the Bill. The amendments remove requirements on platforms and Ofcom relating to the adult safety duties. That includes references to the adult safety duties in the duties to provide content reporting and redress mechanisms and to keep records. They also remove references to content that is harmful to adults from the process for designating category 1, 2A and 2B companies. The amendments in this group relate mainly to the process for the category 2B companies.
I also seek to amend the process for designating category 1 services to ensure that they are identified based on their influence over public discourse, rather than with regard to the risk of harm posed by content that is harmful to adults. These changes will be discussed when we debate the relevant amendments alongside clause 82 and schedule 11. The amendments will remove powers that will no longer be required, such as the Secretary of State’s ability to designate priority content that is harmful to adults. As I have already indicated, we intend to remove the adult safety duties and introduce new duties on category 1 services relating to transparency, accountability and freedom of expression. While they will mostly be discussed alongside clause 18, amendments 61 to 66, 68 to 70 and 74 will add references to the transparency, accountability and freedom of expression duties to schedule 8. That will ensure that Ofcom can require providers of category 1 services to give details in their annual transparency reports about how they comply with the new duties. Those amendments define relevant content and consumer content for the purposes of the schedule.
We will discuss the proposed transparency and accountability duties that will replace the adult safety duties in more detail later in the Committee’s deliberations. For the reasons I have set out, I do not believe that the current adult safety duties, with their risks to freedom of expression, should be retained. I therefore urge that clauses 12, 13 and 55 do not stand part of the Bill, and recommend that the Government amendments in this group be accepted.
Can the hon. Lady tell me where in the Bill, as it is currently drafted—so, unamended—it requires platforms to remove legal speech?
It allows the platforms to do that, and it requires legal but harmful content to be taken into account. It requires the platforms to act—to consider, through risk assessments, the harm done to adults by content that is legal but massively harmful.
The hon. Lady is right: the Bill does not require the removal of legal speech. Platforms must take the issue into account—it can be risk assessed—but it is ultimately their decision. I think the point has been massively overstated that, somehow, previously, Ofcom had the power to strike down legal but harmful speech that was not a breach of either terms of service or the law. It never had that power.
Why do the Government now think that there is a risk to free speech? If Ofcom never had that power, if it was never an issue, why are the Government bothered about that risk—it clearly was not a risk—to free speech? If that was never a consideration, it obviously was not a risk to free speech, so I am now even more confused as to why the Government have decided that they will have to strip this measure out of the Bill because of the risk to free speech, because clearly it was not a risk in this situation. This is some of the most important stuff in the Bill for the protection of adults, and the Government are keen to remove it.
No, I will not give way again. The change will ensure that people can absolutely say what they like online, but the damage and harm that it will cause are not balanced by the freedoms that have been won.
As a Back-Bench Member of Parliament, I recommended that the “legal but harmful” provisions be removed from the Bill. When I chaired the Joint Committee of both Houses of Parliament that scrutinised the draft Bill, it was the unanimous recommendation of the Committee that the “legal but harmful” provisions be removed. As a Minister at the Dispatch Box, I said that I thought “legal but harmful” was a problematic term and we should not use it. The term “legal but harmful” does not exist in the Bill, and has never existed in the Bill, but it has provoked a debate that has caused huge confusion. There is a belief, which we have heard expressed in debate today, that somehow there are categories of content that Ofcom can designate for removal whether they are unlawful or not.
During the Bill’s journey from publication in draft to where we are today, it has become more specific. Rather than our relying on general duties of care, written into the Bill are areas of priority illegal activity that the companies must proactively look for, monitor and mitigate. In the original version of the Bill, that included only terrorist content and child sexual exploitation material, but on the recommendation of the Joint Committee, the Government moved to write into the Bill, at schedule 7, the offences in law that will be the priority illegal offences.
The list of offences is quite wide, and it is more comprehensive than any other such list in the world in specifying exactly what offences are in scope. There is no ambiguity for the platforms as to what offences are in scope. Stalking, harassment and inciting violence, which are all serious offences, as well as the horrible abuse a person might receive as a consequence of their race or religious beliefs, are written into the Bill as priority illegal offences.
There has to be a risk assessment of whether such content exists on platforms and what action platforms should take. They are required to carry out such a risk assessment, although that was never part of the Bill before. The “legal but harmful” provisions in some ways predate that. Changes were made; the offences were written into the Bill, risk assessments were provided for, and Parliament was invited to create new offences and write them into the Bill, if there were categories of content that had not been captured. In some ways, that creates a democratic lock that says, “If we are going to start to regulate areas of speech, what is the legal reason for doing that? Where is the legal threshold? What are the grounds for us taking that decision, if it is something that is not already covered in platforms’ terms of service?”
We are moving in that direction. We have a schedule of offences that we are writing into the Bill, and those priority illegal offences cover most of the most serious behaviour and most of the concerns raised in today’s debate. On top of that, there is a risk assessment of platforms’ terms of service. When we look at the terms of service of the companies—the major platforms we have been discussing—we see that they set a higher bar again than the priority illegal harms. On the whole, platforms do not have policies that say, “We won’t do anything about this illegal activity, race hate, incitement to violence, or promotion or glorification of terrorism.” The problem is that although platforms have terms of service, they do not enforce them. Therefore, we are not relying on terms of service alone. What we are saying, and what the Bill says, is that the minimum safety standards are based on the offences written into the Bill. In addition, we have risk assessment, and we have enforcement based on the terms of service.
There may be a situation in which there is a category of content that is not in breach of a platform’s terms of service and not included in the priority areas of illegal harm. It is very difficult to think of what that could be—something that is not already covered, and over which Ofcom would not have power. There is the inclusion of the new offences of promoting self-harm and suicide. That captures not just an individual piece of content, but the systematic effect of a teenager like Molly Russell—or an adult of any age—being targeted with such content. There are also new offences for cyber-flashing, and there is Zach’s law, which was discussed in the Chamber on Report. We are creating and writing into the Bill these new priority areas of illegal harm.
Freedom of speech groups’ concern was that the Government could have a secret list of extra things that they also wanted risk-assessed, rather than enforcement being clearly based either on the law or on clear terms of service. It is difficult to think of categories of harm that are not already captured in terms of service or priority areas of illegal harm, and that would be on such a list. I think that is why the change was made. For freedom of speech campaigners, there was a concern about exactly what enforcement was based on: “Is it based on the law? Is it based on terms of service? Or is it based on something else?”
I personally believed that the “legal but harmful” provisions in the Bill, as far as they existed, were not an infringement on free speech, because there was never a requirement to remove legal speech. I do not think the removal of those clauses from the Bill suddenly creates a wild west in which no enforcement will take place at all. There will be very effective enforcement based on the terms of service, and on the schedule 7 offences, which deal with the worst kinds of illegal activity; there is a broad list. The changes make it much clearer to everybody—platforms and users alike, and Ofcom—exactly what the duties are, how they are enforced and what they are based on.
For future regulation, we have to use this framework, so that we can say that when we add new offences to the scope of the legislation, they are offences that have been approved by Parliament and have gone through a proper process, and are a necessary addition because terms of service do not cover them. That is a much clearer and better structure to follow, which is why I support the Government amendments.
I could not agree more. I suppose that is why this aspect of the Bill is so important, not just to me but to all those categories of user. I mentioned paragraphs (d) to (f), which would require platforms to assess exactly that risk. This is not about being offended. Personally, I have the skin of a rhino. People can say most things to me and I am not particularly bothered by it. My concern is where things that are said online are transposed into real-life harms. I will use myself as an example. Online, we can see antisemitic and conspiratorial content, covid misinformation, and covid misinformation that meets with antisemitism and conspiracies. When people decide that I, as a Jewish Member of Parliament, am personally responsible for George Soros putting a 5G chip in their arm, or whatever other nonsense they have become persuaded by on the internet, that is exactly the kind of thing that has meant people coming to my office armed with a knife. The kind of content that they were radicalised by on the internet led to their perpetrating a real-life, in-person harm. Thank God—Baruch Hashem—neither I nor my staff were in the office that day, but that could have ended very differently, because of the sorts of content that the Bill is meant to protect online users from.
The hon. Lady is talking about an incredibly important issue, but the Bill covers such matters as credible threats to life, incitement to violence against an individual, and harassment and stalking—those patterns of behaviour. Those are public order offences, and they are in the Bill. I would absolutely expect companies to risk-assess for that sort of activity, and to be required by Ofcom to mitigate it. On her point about Holocaust denial, first, the shield will mean that people can protect themselves from seeing such content. The further question would be whether we create new offences in law, which can then be transposed across.
I accept the points that the hon. Member raised, but he is fundamentally missing the point. The categories of information and content that these people had seen and been radicalised by would not fall under the scope of public order offences or harassment. The person was not sending me harassing messages before they turned up at my office. Essentially, social media companies and other online platforms have to take measures to mitigate the risk of categories of offences that are illegal, whether or not they are in the Bill. I am talking about what clauses 12 and 13 covered, whether we call it the “legal but harmful” category or “lawful but awful”. Whatever we name those provisions, by taking out of the Bill clauses relating to the “legal but harmful” category, we are opening up an area of harm that already exists, that has a real-world impact, and that the Bill was meant to go some way towards addressing.
The provisions have taken out the risk assessments that need to be done. The Bill says,
“(e) the level of risk of functionalities of the service facilitating the presence or dissemination of priority content that is harmful to adults, identifying and assessing those functionalities that present higher levels of risk;
(f) the different ways in which the service is used, and the impact of such use on the level of risk of harm that might be suffered by adults;
(g) the nature, and severity, of the harm that might be suffered by adults”.
Again, the idea that we are talking about offence, and that the clauses need to be taken out to protect free speech, is fundamentally nonsense.
I have already mentioned Holocaust denial, but it is also worth mentioning health-related disinformation. We have already seen real-world harms from some of the covid misinformation online. It led to people, including Piers Corbyn, turning up outside Parliament with a gallows, threatening to hang hon. Members for treason. Obviously, that was rightly dealt with by the police, but the kind of information and misinformation that he had been getting online and that led him to do that, which is legal but harmful, will now not be covered by the Bill.
I will also raise an issue I have heard about from a number of people dealing with cancer and conditions such as multiple sclerosis. People online try to discourage them from accessing the proper medical interventions for their illnesses, and instead encourage them to take more vitamin B or adopt a vegan diet. There are people with cancer who have died because they were encouraged online not to access cancer treatment—because they were subject to lawful but awful categories of harm.