Debate on Amendment 33B resumed.
Lord Moylan (Con)

My Lords, I will speak to Amendment 155 in my name, and I am grateful for the support of the noble Baroness, Lady Fox of Buckley, and my noble friend Lord Strathcarron. Some of my remarks in Committee last week did not go down terribly well with Members and, in retrospect, I realise that that was because I was the only Member of the Committee that day who did not take the opportunity to congratulate the noble Baroness, Lady Kidron, on her birthday. So at this very late stage—a week later—I make good that deficiency and hope that, in doing so, I will get a more jocular and welcoming hearing than I did last week. I will speak in a similar vein, though on a different topic and part of the Bill.

This amendment relates to Clause 65, which has 12 subsections. I regard the first subsection as relatively uncontroversial; it imposes a duty on all service providers. The effect of this amendment would be to remove all the remaining subsections, which fall particularly on category 1 providers. What Clause 65 does, in brief, is to make it a statutory obligation for category 1 providers to live up to their terms of service. Although it does not seek to specify what the terms of service must be, it does, in some ways, specify how they should be operated once they have been written—I regard that as very odd, and will come back to the reason why.

I say at the outset that I understand the motivation behind this section of the Bill. It addresses the understandable feeling that if a service provider of any sort says in its terms of service that complaints will be dealt with in a certain way and to a certain timetable, that you will get a response by a certain time, or that certain material will be removed, then it should do what those terms say. I understand what the clause is trying to do—to oblige service providers to live up to their terms of service—but this is a very dangerous approach.

First of all, while terms of service are a civil contract between the provider and the user, they are not an equal contract, as we all know. They are written for the commercial benefit and advantage of the companies that write them—not just in the internet world; this is generally true—and they are written on a take-it-or-leave-it basis. Of course, they cannot be egregiously disadvantageous to the customer or else the customer would not sign up to them; none the less, they are drafted to the commercial and legal advantage of the companies in question. Terms of service can be extreme. Noble Lords may be aware that, if you have a bank account, the terms of service that your bank has, in effect, imposed on you almost certainly include a right for the bank to close your account at any time it wishes and to give no reason for doing so. I regard that as an extreme terms of service provision, but it is common. They are not written as equal contracts between consumers and service providers.

Why, therefore, would we want to set terms of service in statute? That is what this clause does: it makes them enforceable by a regulator under statute. Moreover, why would we want to do it when the providers we are discussing will, in practice, almost certainly have drafted their terms of service under the provisions of a foreign legal system, and we are then asking our regulator to ensure that those terms are enforced? My objection is not to finding a way of requiring providers to live up to the terms of service they publish—indeed, the normal route for doing so would be a civil claim; rather, I object to the method set out in this section of the Bill.

We do not use this method in other areas covered by terms of service. For example, we do not have a regulator who enforces terms of service on data protection; we have a law that says what companies must do to protect data, and we then expect them to draft terms of service, and to conduct themselves in other ways, that are compatible with that law. We do not make the terms of service themselves enforceable through statute and regulation, yet that is what this Bill does.

When we look at the terms of service of the big providers on the internet—the sorts of people we have in mind for the scope of the Bill—we find that they give themselves, in their terms of service, vast powers to remove a wide range of material. Much of that would fall—I say this without wanting to be controversial—into the category of “legal but harmful”, which in some ways this clause is reviving through the back door.

Of course, what could be “harmful” is extremely wide, because it will have no statutory bounds: it will be whatever Twitter or Google say they will remove in their terms of service. We have no control over what they say in their terms of service; we do not purport to seek such control in the Bill or in this clause. Twitter policy, for example, is to take down material that offends protected characteristics such as “gender” and “gender identity”. Now, those are not protected characteristics in the UK; the relevant protected characteristics in the Equality Act are “sex” and “gender reassignment”. So this is not enforcing our law; our regulator will be enforcing a foreign law, even though it is not the law we have chosen to adopt here.

--- Later in debate ---
Lord Moylan (Con)

My Lords, my noble friend has explained clearly how terms of service would normally work, which is that, as I said myself, a business might write its own terms of service to its own advantage, but it cannot do so too egregiously or it will lose customers, and businesses may aim themselves at different customers. All this is part of normal commercial life, and that is understood. What my noble friend has not really addressed is the question of why, uniquely and specifically in this case, and especially given the egregious history of censorship by Silicon Valley, he has chosen to put that into statute rather than leave it as a commercial arrangement, and to make it enforceable by Ofcom. For example, when my right honourable friend David Davis was removed from YouTube for his remarks about Covid passes, it would have been Ofcom’s obligation not to vindicate his right to free speech but to cheer on YouTube and say how well it had done in enforcing its terms of service.

Lord Parkinson of Whitley Bay (Con)

Our right honourable friend’s content was reuploaded. This makes the point that the problem at the moment is the opacity of these terms and conditions; what platforms say they do does not always match what they actually do. The Bill makes sure that users can hold them to account for the terms of service that they publish, so that people can know what to expect on platforms and have some form of redress when their experience does not match their expectations.

I was coming on to say a bit more about that after making some points about foreign jurisdictions and my noble friend’s Amendment 155. As I say, parts or versions of the service that are used in foreign jurisdictions but not in the UK are not covered by the duties in Clause 65. As such, the Bill does not require a provider to have systems and processes designed to enforce any terms of service not applicable in the UK.

In addition, the duties do not give powers to Ofcom to enforce a provider’s terms of service directly. Ofcom’s role will be focused on ensuring that platforms have systems and processes in place to enforce their own terms of service consistently rather than assessing individual pieces of content.

Requiring providers to set terms of service for specific types of content suggests that the Government view that type of content as harmful or risky. That would encourage providers to prohibit such content, which of course would have a negative impact on freedom of expression, which I am sure is not what my noble friend wants to see. Freedom of expression is essential to a democratic society. Throughout the passage of the Bill, the Government have always committed to ensuring that people can speak freely online. We are not in the business of indirectly telling companies what legal content they can and cannot allow online. Instead, the approach that we have taken will ensure that platforms are transparent and accountable to their users about what they will and will not allow on their services.

Clause 65 recognises that companies, as private entities, have the right to remove content that is legal from their services if they choose to do so. To prevent them doing so, by requiring them to balance this against other priorities, would have perverse consequences for their freedom of action and expression. It is right that people should know what to expect on platforms and that they are able to hold platforms to account when that does not happen. On that basis, I invite the noble Lords who have amendments in this group not to press them.

--- Later in debate ---
Moved by
38: Clause 12, page 12, line 24, leave out subsection (6)
Member’s explanatory statement
This amendment, along with the other amendment to Clause 12 in the name of Lord Moylan, removes requirements on sites to display, on demand, only the parts of a conversation (or in the case of collaboratively-edited content, only the parts of a paragraph, sentence or article) that were written by “verified” users, and to prevent other users from amending (e.g. improving), or otherwise interacting with, such contributions.
Lord Moylan (Con)

My Lords, I am going to endeavour to be relatively brief. I rise to move Amendment 38 and to speak to Amendments 39, 139 and 140 in this group, which are in my name. All are supported by my noble friend Lord Vaizey of Didcot, to whom I am grateful.

Amendments 38 and 39 relate to Clause 12. They remove subsections (6) and (7) from the Bill; that is, the duty to filter out non-verified users. Noble Lords will understand that this is different from the debate we have just had, which was about content. This is about users and verification of the users, rather than the harm or otherwise of the content. I am sure I did not need to say that, but perhaps it helps to clarify my own thinking to do so. Amendments 139 and 140 are essentially consequential but make it clear that my amendments do not prohibit category 1 services from offering this facility. They make it a choice, not a duty.

I want to make one point only in relation to these amendments. It has been well said elsewhere that this is a Twitter-shaped Bill, but it is trying to apply itself to a much broader part of the internet than Twitter, or things like it. In particular, community-led services such as Wikipedia, to which I have made reference before, operate on a totally different basis. The Bill seeks to create a facility whereby members of the public like you and me can, first, say that we want the provider to offer a facility for verifying those who might use its service and, secondly, say that we want to see material only from those verified accounts. However, the contributors to Wikipedia are not verified, because Wikipedia has no system to verify them, and it would therefore be impossible for Wikipedia, as a category 1 service, to comply with this condition on its current model, which, as noble Lords know from previous comments, is a non-commercial, non-profit one. It would not be able to operate this clause. Either it would have to require every contributing editor to Wikipedia to be verified first, which would be extremely onerous, or it would have to make verification optional, which would be difficult and would lead to the bizarre conclusion that you could open an article on Wikipedia and find that some of its words or sentences were blocked, and you could not read them because those amendments to the article had been made by someone who had not been verified. Of course, putting a system in place to allow that absurd outcome would itself be an impossible burden on Wikipedia.

My complaint—as always, in a sense—about the Bill is that it misfires. Every time you touch it, it misfires in some way because it has not been properly thought through. It is perhaps trying to do too much across too broad a front, when it is clear that the concern of the Committee is much narrower than trying to bowdlerize Wikipedia articles. That is not the objective of anybody here, but it is what the Bill is tending to do.

I will conclude by saying—I invite my noble friend to comment on this if he wishes; I think he will have to comment on it at some stage—that in reply to an earlier Committee debate, I heard him say somewhat tentatively that he did not think that Wikipedia would qualify as a category 1 service. I am not an advocate for Wikipedia; I am just a user. But we need to know what the Government’s view is on the question of Wikipedia and services like it. Wikipedia is the only community-led service, I think, of such a scale that it would potentially qualify as category 1 because of its size and reach.

If the Minister’s view is that Wikipedia would not qualify as a category 1 service—in which case, my amendments are irrelevant because it would not be caught by this clause—then he needs to say so. More than that, he needs to say on what basis it would not qualify as a category 1 service. Would it be on the face of the Bill? If not, would it be in the directions given by the Secretary of State to the regulator? Would it be a question of the regulator deciding whether it was a category 1 service? Obviously, if you are trying to run an operation such as Wikipedia with a future, you need to know which of those things it is. Do you have legal security against being determined as a category 1 provider or is it merely at the whim—that is not the right word; the decision—of the regulator in circumstances that may legitimately change? The regulator may have a good or bad reason for changing that determination later. You cannot run a business not knowing these things.

I put it to noble Lords that this clause needs very careful thinking through. If it is to apply to community-led services such as Wikipedia, it is an absurdity. If it is not to apply to them because what I think I heard my noble friend say pertains and they are not, in his view, a category 1 service, why are they not a category 1 service? What security do they have in knowing either way? I beg to move.

Baroness Buscombe (Con)

My Lords, I will speak to Amendment 106 in my name and the names of my noble and learned friend Lord Garnier and the noble Lord, Lord Moore of Etchingham. This is one of five amendments focused on the need to address the issue of activist-motivated online bullying and harassment and thereby better safeguard the mental health and general well-being of potential victims.

Schedule 4, which defines Ofcom’s objectives in setting out codes of practice for regulated user-to-user services, should be extended to require the regulator to consider the protection of individuals from communications offences committed by anonymous users. The Government clearly recognise that there is a threat of abuse from anonymous accounts and have taken steps in the Bill to address that, but we are concerned that their approach is insufficient and may be counterproductive.

I will explain. The Government’s approach is to require large social media platforms to make provision for users to have their identity verified, and to have the option of turning off the ability to see content shared by accounts whose owners have not done this. However, all this would mean is that people could not see abuse being levelled at them. It would not stop the abuse happening. Crucially, it would not stop other people seeing it, or the damage to his or her reputation or business that the victim may suffer as a result. If I am a victim of online bullying and harassment, I do not want to see it, but I do not want it to be happening at all. The only means I have of stopping it is to report it to the platform and then hope that it takes the right action. Worse still, if I have turned off the ability to see content posted by unverified—that is, anonymous—accounts, I will not be able to complain to the platform as I will not have seen it. It is only when my business goes bust or I am shunned in the street that I realise that something is wrong.

The approach of the Bill seems to be that, for the innocent victim—who may, for example, breed livestock for consumption—it is up to that breeder to be proactive in correcting harm already done by someone who does not approve of eating meat. This is making a nonsense of the law. This is not how we make laws in this country—until now, it seems. Practically speaking, the worst that is likely to happen to the abuser is that the platform might ban their account. However, if their victims have had no opportunity to read the abuse or to report it, even that fairly low-impact sanction could not be levelled against them. In short, the Bill’s current approach, I am sorry to say, would increase the sense of impunity, not lessen it.

One could argue that, if a potential abuser believes that their victim will not read their abuse, they will not bother issuing it. Unfortunately, this misunderstands the psyche of the online troll. Many of them are content to howl into the void, satisfied that other people who have not turned on the option to filter out content from unverified accounts will still be able to read it. The troll’s objective of harming the victim may be partially fulfilled as a result.

There is also the question of how much uptake there will be of the option to verify one’s identity, and numerous questions about the factors that this will depend on. Will it be attractive? Will there be a cost? How quick and efficient will the process be? Will platforms have the capacity to implement it at scale? Will it have to be done separately for every platform?

If uptake of verification is low, most people simply will not use the option to filter out content from unverified accounts, even if it means that they remain more susceptible to abuse, since they would be cutting themselves off from most other users. Clearly, that is not an option for anyone using social media for any promotional purpose. Even those who use it for purely social reasons will find that they have friends who do not want to be verified. Fundamentally, people use social media because other people use it. Carving oneself off from most of them defeats the purpose of the exercise.

It is not clear what specific measures the Bill could take to address the issue. Conceivably, it could simply ban online platforms from maintaining user accounts whose owners have not had their identities verified. However, this would be truly draconian and most likely lead to major platforms exiting the UK market, as the noble Baroness, Lady Fox, has rightly argued in respect of other possible measures. It would also be unenforceable, since users could simply turn on a VPN, pretend to be from some other country where the rules do not apply and register an account as though they were in that country.

There are numerous underlying issues that the Bill recognises as problems but does not attempt to prescribe solutions for. Its general approach is to delegate responsibility to Ofcom to frame its codes of practice for operators to follow in order to effectively tackle these problems. Specifically, it sets out a list of objectives that Ofcom, in drawing up its codes of practice, will be expected to meet. The protection of users from abuse, specifically by unverified or anonymous users, would seem to be an ideal candidate for inclusion in that list of objectives. If required to do so, Ofcom could study the issue closely and develop more effective solutions over time.

I was pleased to see, in last week’s Telegraph, an article that gave an all too common example: a chef running a pub in Cornwall has suffered what amounts to vicious abuse online from a vegan who obviously does not approve of the menu, and who is damaging the business’s reputation and putting the chef’s livelihood at risk. This is just one tiny example, if I can put it that way, of the many thousands that are happening all the time. Some 584 readers left comments, and just about everyone wrote in support of the need to do something to support that chef and tackle this vicious abuse.

I return to a point I made in a previous debate: livelihoods, which we are deeply concerned about, are at stake here. I am talking not about big business but about individuals and small and family businesses that are suffering—beyond the abuse itself—loss of livelihood, financial harm and/or reputational damage to their businesses, and the knock-on effects of that.