My Lords, it is a pleasure to be collaborating with the noble Baroness, Lady Morgan. We seem to have been briefed by the same people, been to the same meetings and drawn the same conclusions. However, there are some things that are worth saying twice and, although I will try to avoid a carbon copy of what the noble Baroness said, I hope the central points will make themselves.
The internet simply must be made to work for its users above all else—that is the thrust of the two amendments that stand in our names. Through education and communication, the internet can be a powerful means of improving our lives, but it must always be a safe platform on which to enjoy a basic right. It cannot be said often enough that to protect users online is to protect them offline. To create a strict division between the virtual and the public realms is to risk ignoring how actions online can have life-and-death repercussions, and that is at the heart of what these amendments seek to bring to our attention.
I was first made aware of these amendments at a briefing from the Samaritans, where we got to know each other. There I heard the tragic accounts of those whose loved ones had taken their own lives due to exposure to harmful content online. I will not repeat their accounts—this is not the place to do that—but understanding only a modicum of their grief made it obvious to me that the principle of “safest option by default” must underpin all our decision-making on this.
I applaud the work already done by Members of this House to ensure the safety of young people online. Yet it is vital, as the noble Baroness has said, that we do not create a drop-off point for future users—one in which turning 18 means sudden exposure to the most harmful content lurking online, which is always there. Those most at risk of suicide due to exposure to harmful content are aged between their late teens and early 20s. In fact, a 2017 inquiry into the suicides of young people found harmful content accessed online in 26% of the deaths of under-20s and 13% of the deaths of 20 to 24-year-olds. It is vital for us to empower users from their earliest years.
In the Select Committee—I see fellow members sitting here today—we have been looking at digital exclusion and the need for education at all levels for those using the internet. Establishing good habits in the earliest years is the right way to start, but it goes on after that, because the world that young people go on to inhabit in adulthood is one in which they are already in control of the internet, provided they have had that education earlier. Adulthood comes with the freedom to choose how one expresses oneself online—of course it does—but this must not come at the cost of continuing freedom from the most insidious content that puts mental health at risk. Much mention has been made of the triple shield and I need not go there again; its origins and perhaps deficiencies have been mentioned already.
The Center for Countering Digital Hate recently conducted an experiment, creating new social media accounts that showed interest in body image and mental health. This study found that TikTok served suicide-related content to new accounts within 2.6 minutes, with eating disorder content being recommended within 8 minutes. At the very least, these disturbing statistics tell us that users should have the option to opt in to such content, and not have to suffer this harm before later opting out. While the option to filter out certain categories of content is essential, it must be toggled on by default if safety is to be our primary concern.
The principle of safest by default creates not only a less harmful environment, but one in which users are in a position to define their own online experience. The space in which we carry out our public life is increasingly located on a small number of social media platforms—those category 1 platforms already mentioned several times—which everyone, from children to pensioners, uses to communicate and share their experiences.
We must then ensure that the protections we benefit from offline continue online: namely, protection from the harm and hate that pose a threat to our physical and mental well-being. When a child steps into school or a parent into their place of work, they must be confident that those with the power to do so have created the safest possible environment for them to carry out their interactions. This basic confidence must be maintained when we log in to Twitter, Instagram, TikTok or any other social media giant.
My Lords, my Amendment 43 tackles Clause 12(1), which expressly says that the duties in Clause 12 are to “empower” users. My concern is to ensure that, first, users are empowered and, secondly, that legitimate criticism around the characteristics listed in Clause 12(11) and (12), for example, is not automatically treated as abusive or inciting hatred, as I fear it could be. My Amendment 283ZA specifies that, in judging content to be filtered out after a user has chosen to switch on various filters, providers must act reasonably and pause to consider whether they have “reasonable grounds” to believe that the content is of the kind in question—namely, abusive or problematic.
Anything under the title “empower adult users” sounds appealing—how can I oppose that? After all, I am a fan of the “taking back control” form of politics, and here is surely a way for users to be in control. On paper, replacing the “legal but harmful” clause with giving adults the opportunity to engage with controversial content if they wish, through enhanced empowerment tools, sounds positive. In an earlier discussion of the Bill, the noble Baroness, Lady Featherstone, said that we should treat adults as adults, allowing them to confront ideas with the
“better ethics, reason and evidence”—[Official Report, 1/2/23; col. 735.]
that have been the most effective way to deal with ideas from Socrates onwards. I say, “Hear, hear” to that. However, I worry that, rather than users being in control, there is a danger that the filter system might infantilise adult users and disempower them by hard-wiring into the Bill a duty and tendency to hide content from users.
There is a general weakness in the Bill here. I have noted that some platforms are based on users moderating their own sites, which I am quite keen on, but they will be detrimentally affected by the Bill. It would leave those users, who are in charge of their own moderation, with no powers to decide what is in, for example, Wikipedia or other Wikimedia projects, which are added to, organised and edited by a decentralised community of users. So I will certainly not take the phrase “user empowerment” at face value.
I am slightly concerned about linguistic double-speak, or at least confusion. The whole Bill is being brought forward in a climate in which language is weaponised in a toxic minefield—a climate of “You can’t say that”. More nerve-rackingly, words and ideas are seen as dangerous and interchangeable with violent acts, in a way that needs to be unpicked before we pass this legislation. Speakers can be cancelled for words deemed to threaten listeners’ safety—but not physical safety; it is the opinions that are said to be unsafe. Opinions are treated as though they cause damage or harm as viscerally as physical aggression. So lawmakers have to recognise the cultural context and realise that the law will be understood and applied in it, not in the abstract.
I am afraid that the language in Clause 12(1) and (2) shows no awareness of this wider backdrop—it is worryingly woolly and vague. The noble Baroness, Lady Morgan, talked about dangerous content, and all the time we have to ask, “Who will interpret what is dangerous? What do we mean by ‘dangerous’ or ‘harmful’?” Surely a term such as “abusive”, which is used in the legislation, is open to wide interpretation. Dictionary definitions of “abusive” include words such as “rude”, “insulting” and “offensive”, and it is certainly subjective. We have to query what we mean by these terms when some commentators complain that they have been victims of online abuse but, when you check their timelines, you notice that they have in fact been subjected only to angry, and sometimes justified, criticism.
I recently saw a whole thread arguing that the Labour Party’s recent attack ads against the Prime Minister were an example of abusive hate speech. I am not making a point about this; I am asking who gets to decide. If this is the threshold for filtering content, there is a danger of institutionalising safe-space echo chambers. “Abusive” can also be a confusing word for users: if someone applies a user empowerment tool to protect themselves from abuse, the threshold at which the filter operates could be much lower than they intend or envisage. By definition, the user would not know what had been filtered out in their name, and they have no control over the filtering because they never see the filtered content.