Online Safety Bill Debate
Baroness Buscombe (Conservative, Life peer)
Lords Chamber
My Lords, the noble Baroness, Lady Kidron, said words to the effect that perhaps we should begin by having particular regard for certain vulnerabilities, but we are dealing with primary legislation and this really concerns me. Lists such as that in Clause 12 are really dangerous. It is not a great way to write law. We could be living with this law for a long time.
I took the Communications Act 2003 through for Her Majesty’s Opposition, and we were doing our absolute best to future-proof the legislation. There was no mention of the internet in that piece of legislation. With great respect to the noble Lord, Lord McNally, with whom I sparred in those days, it was not that Act that introduced Ofcom but a separate Act. The internet was not even mentioned until the late Earl of Northesk introduced an amendment containing the word “internet”, in connection with investigatory powers legislation.
The reality is that we already had Facebook, and tremendous damage was being done through it to people such as my daughter. Noble Lords will remember that in the early days it was Oxford, Cambridge, Yale and Harvard; that is how it all began. It was an amazing thing, and we could not foresee what would happen, but there was a real attempt to future-proof. If you start having lists such as that in Clause 12, you cannot just add on or change them. Cultural mores change. This list, which looks great in 2023, might look really odd in about 2027. Different groups will have emerged and will say, “Well, what about me, what about me?”.
I entirely agree with the noble Baroness, Lady Fox. Who will be the decider of what is right, what is rude or what is abusive? I have real concerns about this. The Government have had several years to get this right. I say that with great respect to my noble friend the Minister, but we will have to think about these issues a little further. The design of the technology around all this is what we should be imposing on the tech companies. I was on the Communications and Digital Committee in 2020 when that was a key plank of our report, following the inquiry that we carried out and prior to the Joint Committee, which then looked at this issue of “legal but harmful”, et cetera. I am glad that was dropped because—I know that I should not say this—when I asked a civil servant what was meant by “harmful”, he said, “Well, it might upset people”.
It is a very subjective thing. This is difficult for the Government. We must do all we can to support the Government in trying to find the right solutions, but I am sorry to say that I am a lawyer—a barrister—and I worry. We are trying to make things right but, remember, once it is there in an Act, it is there. People will use that as a tool. In 2002, at New Scotland Yard, I was introduced to an incredible website about 65 ways to become a good paedophile. Where does that fit in Clause 12? I have not quite worked that out. Is it sex? What is it? We have to be really careful. I would prefer having no list and making it more general, relying on the system to allow us to opt in.
I support my noble friend Lady Morgan’s amendment on this, which would make it easier for people to say, “Well, that’s fine”, but would not exclude people. What happens if you do not fit within Clause 12? Do you then just have to suck it up? That is not a very House of Lords expression, but I am sure that noble Lords will relate to it.
We have to proceed with care. I will say a little more on the next group of amendments, on anonymity. It is really hard, but what the Government are proposing is not quite there yet.
That seemed to be provoked by me saying that we must look after the vulnerable, but I am suggesting that we use UK law and the rights that are already established. Is that not better than having a small list of individual items?
My Lords, I support the noble Baroness, Lady Buscombe, on the built-in obsolescence of any list. It would very soon be out of date.
I support the amendments tabled by the noble Lord, Lord Clement-Jones, and by the noble Baroness, Lady Morgan of Cotes. They effectively seek a similar aim. Like the noble Baroness, Lady Fraser, I tend towards those tabled by the noble Lord, Lord Clement-Jones, because they seem clearer and more inclusive, but I understand that they are trying for the same thing. I also register the support for this aim of my noble friend Lady Campbell of Surbiton, who cannot be here but who, I suspect, is listening in. She was very keen that her support for this aim be recorded.
The issue of “on by default” inevitably came up at Second Reading. Then and in subsequent discussions, the Minister reiterated that a “default on” approach to user empowerment tools would negatively impact people’s use of these services. Speaking at your Lordships’ Communications and Digital Committee, on which I sat at the time, Minister Scully went further, saying that the strongest option, of having the settings on in the first instance,
“would be an automatic shield against people’s ability to explore what they want to explore on the internet”.
According to the Government’s own list, this was to argue for the ability to explore content that abuses, targets or incites hatred against people with protected characteristics, including race and disability. I struggle to understand why protecting this right takes precedence over ensuring that groups of people with protected characteristics are, well, protected. That is our responsibility. It is a question of precedence, because switching controls one way is not exactly the same as switching them the other way. It is easy to think so, but the noble Baroness, Lady Parminter, explained very clearly that it is not the same. It is undoubtedly easier for someone in good health and without mental or physical disabilities to switch controls off than it is for those with disabilities or vulnerabilities to switch them on. That is self-evident.
It cannot be right that those most at risk of being targeted online, including some disabled people—not all, as we have heard—and those with other protected characteristics, will have the onus on them to switch on the tools to prevent them seeing and experiencing harm. There is a real risk that those who are meant to benefit from user empowerment tools, those groups at higher risk of online harm, including people with a learning disability, will not be able to access the tools because the duties allow category 1 services to design their own user empowerment tools. This means that we are likely to see as many versions of user empowerment tools as there are category 1 services to which this duty applies.
Given what we know about the nature of addiction and self-harm, which has already been very eloquently explained, it surely cannot be the intention of the Bill that those people who are in crisis and vulnerable to eating disorders or self-harm, for example, will be required to seek and activate a set of tools to turn off the very material that feeds their addiction or encourages their appetite for self-harm.
The approach in the Bill does little to prevent people spiralling down this rabbit hole towards ever more harmful content. Indeed, instead it requires people to know that they are approaching a crisis point, and to have sufficient levels of resilience and rationality to locate the switch and turn on the tools that will protect them. That is not how the irrational or distressed mind works.
So all the evidence that we have about the existence of harm arising from mental states, which has been so eloquently set out in introducing these amendments—I refer again to my noble friend Lady Parminter, because hers is such powerful evidence—tips the balance, I believe, in favour of setting the tools to be on by default. I very much hope the Minister will listen to and heed the arguments we have heard set out by noble Lords across the Committee, and come back with some amendments of his own on Report.
My Lords, I am going to endeavour to be relatively brief. I rise to move Amendment 38 and to speak to Amendments 39, 139 and 140 in this group, which are in my name. All are supported by my noble friend Lord Vaizey of Didcot, to whom I am grateful.
Amendments 38 and 39 relate to Clause 12. They remove subsections (6) and (7) from the Bill; that is, the duty to filter out non-verified users. Noble Lords will understand that this is different from the debate we have just had, which was about content. This is about users and verification of the users, rather than the harm or otherwise of the content. I am sure I did not need to say that, but perhaps it helps to clarify my own thinking to do so. Amendments 139 and 140 are essentially consequential but make it clear that my amendments do not prohibit category 1 services from offering this facility. They make it a choice, not a duty.
I want to make one point only in relation to these amendments. It has been well said elsewhere that this is a Twitter-shaped Bill, but it is trying to apply itself to a much broader part of the internet than Twitter, or things like it. In particular, community-led services such as Wikipedia, to which I have made reference before, operate on a totally different basis. The Bill seeks to create a facility whereby members of the public like you and me can, first, say that we want the provider to offer a facility for verifying those who might use its service and, secondly, say that we want to see material only from those verified accounts. However, the contributors to Wikipedia are not verified, because Wikipedia has no system to verify them, and it would therefore be impossible for Wikipedia, as a category 1 service, to comply with this condition on its current model, which, as noble Lords know from previous comments, is non-commercial and non-profit. It would not be able to operate this clause. It would have to require every contributing editor to Wikipedia to be verified first, which would be extremely onerous, or it would have to make verification optional, which would be difficult and would lead to the bizarre conclusion that you could open an article on Wikipedia and find that some of its words or sentences were blocked and could not be read, because those amendments to the article had been made by someone who had not been verified. Of course, putting in place a system to allow that absurd outcome would itself be an impossible burden on Wikipedia.
My complaint—as always, in a sense—about the Bill is that it misfires. Every time you touch it, it misfires in some way because it has not been properly thought through. It is perhaps trying to do too much across too broad a front, when it is clear that the concern of the Committee is much narrower than trying to bowdlerize Wikipedia articles. That is not the objective of anybody here, but it is what the Bill is tending to do.
I will conclude by saying—I invite my noble friend to comment on this if he wishes; I think he will have to comment on it at some stage—that in reply to an earlier Committee debate, I heard him say somewhat tentatively that he did not think that Wikipedia would qualify as a category 1 service. I am not an advocate for Wikipedia; I am just a user. But we need to know what the Government’s view is on the question of Wikipedia and services like it. Wikipedia is the only community-led service, I think, of such a scale that it would potentially qualify as category 1 because of its size and reach.
If the Minister’s view is that Wikipedia would not qualify as a category 1 service—in which case, my amendments are irrelevant because it would not be caught by this clause—then he needs to say so. More than that, he needs to say on what basis it would not qualify as a category 1 service. Would it be on the face of the Bill? If not, would it be in the directions given by the Secretary of State to the regulator? Would it be a question of the regulator deciding whether it was a category 1 service? Obviously, if you are trying to run an operation such as Wikipedia with a future, you need to know which of those things it is. Do you have legal security against being determined as a category 1 provider or is it merely at the whim—that is not the right word; the decision—of the regulator in circumstances that may legitimately change? The regulator may have a good or bad reason for changing that determination later. You cannot run a business not knowing these things.
I put it to noble Lords that this clause needs very careful thinking through. If it is to apply to community-led services such as Wikipedia, it is an absurdity. If it is not to apply to them because, as I think I heard my noble friend say, they are not in his view a category 1 service, why are they not a category 1 service? What security do they have in knowing either way? I beg to move.
My Lords, I will speak to Amendment 106 in my name and the names of my noble and learned friend Lord Garnier and the noble Lord, Lord Moore of Etchingham. This is one of five amendments focused on the need to address the issue of activist-motivated online bullying and harassment and thereby better safeguard the mental health and general well-being of potential victims.
Schedule 4, which defines Ofcom’s objectives in setting out codes of practice for regulated user-to-user services, should be extended to require the regulator to consider the protection of individuals from communications offences committed by anonymous users. The Government clearly recognise that there is a threat of abuse from anonymous accounts and have taken steps in the Bill to address that, but we are concerned that their approach is insufficient and may be counterproductive.
I will explain. The Government’s approach is to require large social media platforms to make provision for users to have their identity verified, and to have the option of turning off the ability to see content shared by accounts whose owners have not done this. However, all that this would mean is that people could not see abuse being levelled at them. It would not stop the abuse happening. Crucially, it would not stop other people seeing it, or prevent the damage that the victim may suffer to his or her reputation or business as a result. If I am a victim of online bullying and harassment, I do not want to see it, but nor do I want it to be happening at all. The only means I have of stopping it is to report it to the platform and then hope that it takes the right action. Worse still, if I have turned off the ability to see content posted by unverified—that is, anonymous—accounts, I will not be able to complain to the platform, as I will not have seen it. It is only when my business goes bust or I am shunned in the street that I realise that something is wrong.
The approach of the Bill seems to be that, for the innocent victim—who may, for example, breed livestock for consumption—it is up to that breeder to be proactive in correcting harm already done by someone who does not approve of eating meat. This makes a nonsense of the law. This is not how we make laws in this country—until now, it seems. Practically speaking, the worst that is likely to happen to the abuser is that the platform might ban their account. However, if their victims have had no opportunity to read the abuse or report it, even that fairly low-impact sanction could not be levelled against them. In short, I am sorry to say that the Bill’s current approach would increase the sense of impunity, not lessen it.
One could argue that, if a potential abuser believes that their victim will not read their abuse, they will not bother issuing it. Unfortunately, this misunderstands the psyche of the online troll. Many of them are content to howl into the void, satisfied that other people who have not turned on the option to filter out content from unverified accounts will still be able to read it. The troll’s objective of harming the victim may be partially fulfilled as a result.
There is also the question of how much uptake there will be of the option to verify one’s identity, and numerous questions about the factors that this will depend on. Will it be attractive? Will there be a cost? How quick and efficient will the process be? Will platforms have the capacity to implement it at scale? Will it have to be done separately for every platform?
If uptake of verification is low, most people simply will not use the option to filter out content from unverified accounts, even if it means that they remain more susceptible to abuse, since doing so would cut them off from most other users. Clearly, that is not an option for anyone using social media for any promotional purpose. Even those who use it for purely social reasons will find that they have friends who do not want to be verified. Fundamentally, people use social media because other people use it. Carving oneself off from most of them defeats the purpose of the exercise.
It is not clear what specific measures the Bill could take to address the issue. Conceivably, it could simply ban online platforms from maintaining user accounts whose owners have not had their identities verified. However, this would be truly draconian and most likely lead to major platforms exiting the UK market, as the noble Baroness, Lady Fox, has rightly argued in respect of other possible measures. It would also be unenforceable, since users could simply turn on a VPN, pretend to be from some other country where the rules do not apply and register an account as though they were in that country.
There are numerous underlying issues that the Bill recognises as problems but does not attempt to prescribe solutions for. Its general approach is to delegate responsibility to Ofcom to frame codes of practice for operators to follow in order to tackle these problems effectively. Specifically, it sets out a list of objectives that Ofcom, in drawing up its codes of practice, will be expected to meet. The protection of users from abuse, specifically by unverified or anonymous users, would seem to be an ideal candidate for inclusion in that list of objectives; that is what these amendments seek. If required to do so, Ofcom could study the issue closely and develop more effective solutions over time.
I was pleased to see, in last week’s Telegraph, an article giving an all too common example: a chef running a pub in Cornwall has suffered what amounts to vicious online abuse from a vegan who obviously does not approve of the menu, and who is damaging the business’s reputation and putting the chef’s livelihood at risk. This is just one tiny example, if I can put it that way, of the many thousands that are happening all the time. Some 584 readers left comments, and just about everyone wrote in support of the need to do something to support that chef and tackle this vicious abuse.
I return to a point I made in a previous debate: livelihoods, which we are deeply concerned about, are at stake here. I am talking not about big business but about individuals and small and family businesses that are suffering—beyond the abuse itself—loss of livelihood, financial harm and/or reputational damage to their businesses, and the knock-on effects of that.