Baroness Fox of Buckley (Non-Afl)

My Lords, this group of amendments looks at the treatment of legal content accessed by adults. The very fact that Parliament feels that legislation has a place in policing access to legal material is itself worrying. This door was opened by the Government in the initial draft Bill, but, as we have already heard, after a widespread civil liberties backlash against the legal but harmful clauses, we are left with Clause 65. As has been mentioned, I am worried that this clause, and some of the amendments, might well bring back legal but harmful for adults by the back door. One of the weasel words here is “harmful”. As I have indicated before, it is difficult to work out from the groupings when to raise which point, so I will keep that for your Lordships until later and simply note for now that I am rather nervous about that weasel word.

Like many of us, I cheered at the removal of the legal but harmful provisions, but I have serious reservations about their replacement with further duties via terms of service, which imposes a duty on category 1 services to have systems and processes in place to take down or restrict access to content, and to ban or suspend users in accordance with terms of service, as the noble Lord, Lord Moylan, explained. It is one of the reasons I support his amendment. It seems to me to be the state outsourcing the grubby job of censorship to private multinational companies with little regard for UK law.

I put my name to Amendment 155 in the name of the noble Lord, Lord Moylan, because I wanted to probe the Government’s attitude to companies’ terms of service. Platforms have no obligation to align their terms of service with freedom of expression under UK law. It is up to them. I am not trying to impose on them what they do with their service users. If a particular platform wishes to say, “We don’t want these types of views on our platform”, fine, that is its choice. But when major platforms’ terms of service, which are extensive, become the basis on which UK law enforces speech, I get nervous. State regulators are to be given the role of ensuring that all types of lawful speech are suppressed online, because the duty applies to all terms of service, whatever they are, regarding the platforms’ policies on speech suppression, censorship, user suspension, bans and so on. This duty is not restricted to so-called harmful content; it is whatever content the platform wishes to censor.

What is more, Clause 65 asks Ofcom to ensure that individuals who express lawful speech are suspended or banned from platforms if they are in breach of the platforms’ Ts & Cs, and that means preventing those individuals from expressing themselves more widely, beyond the specific speech in question. That is a huge green light to interfere in UK citizens’ freedom of expression, in my opinion.

I stress that I am not interested in interfering in the terms and conditions of private companies, although your Lordships will see later that I have an amendment demanding that they introduce free-speech clauses. That is because of the way we seem to be enacting the law via the terms of service of private companies. They should of course be free to dictate their own terms of service, and it is reasonable that members of the public should know what they are and expect them to be upheld. But that does not justify the transformation of these private agreements into statutory duties—that is my concern.

So, why are we allowing this Bill to ask companies to enforce censorship policies in the virtual public square that do not exist in UK law? When companies’ terms of service permit the suppression of speech, that is up to them, but when they suppress speech far beyond the limitations of speech in UK law and are forced to do so by a government regulator such as Ofcom, are we not in trouble? It means that corporate terms of service, which are designed to protect platforms’ business interests, are trumping case law on free speech that has evolved over many years.

Those terms of service are also frequently in flux, according to fashion or ownership; one only has to look at the endless arguments, which I have yet to understand, about Twitter’s changing terms of service after the Elon Musk takeover. Is it Ofcom’s job to follow Elon Musk’s ever-changing terms of service and enforce them on the British public as if they were law?

The terms and conditions are therefore no longer simply a contract between a company and the user; their being brought under statute means that big tech will be exercising public law functions, with Ofcom as the enforcer, ensuring that lawful speech is suppressed constantly, in line with private companies’ terms of service. This is an utter mess and not in any way adequate to protect free speech. It is a fudge by the Government: they were unpopular on “lawful but harmful”, so they have outsourced it to someone else to do the dirty work.

Lord Clement-Jones (LD)

My Lords, it has been interesting to hear so many noble Lords singing from the same hymn sheet—especially after this weekend. My noble friend Lord McNally opened this group by giving us his wise perspective on the regulation of new technology. Back in 2003, as he mentioned, the internet was not even mentioned in the Communications Act. He explained how regulation struggles to keep up and how quantum leaps come with a potential social cost; all that describes the importance of risk assessment of these novel technologies.

As we have heard from many noble Lords today, on Report in the Commons the Government decided to remove the adult safety duties—the so-called “legal but harmful” aspect of the Bill. I agree with the many noble Lords who have said that this has significantly weakened the protection for adults under the Bill, and I share the scepticism many expressed about the triple shield.

Right across the board, this group of amendments, with one or two exceptions, rightly aims to strengthen the terms of service and user empowerment duties in the Bill in order to provide a greater baseline of protection for adults, without impinging on others’ freedom of speech, and to reintroduce some risk-assessment requirement on companies. The new duties will clearly make the largest and riskiest companies expend more effort on enforcing their terms of service for UK users. However, the Government have not yet presented any modelling on what effect this will have on companies’ terms of service. I have some sympathy with what the noble Lord, Lord Moylan, said: the new duties could mean that terms of service become much longer and lawyered. This might have an adverse effect on freedom of expression, leading to the use of excessive takedown measures rather than looking at other more systemic interventions to control content such as service design. We heard much the same argument from the noble Baroness, Lady Fox. They both made a very good case for some of the amendments I will be speaking to this afternoon.

On the other hand, companies that choose to do nothing will have an easier life under this regime. Faced with stringent application of the duties, companies might make their terms of service shorter, cutting out harms that are hard to deal with because of the risk of being hit with enforcement measures if they do not. Therefore, far from strengthening protections via this component of the triple shield, the Bill risks weakening them, with particular risks for vulnerable adults. As a result, I strongly support Amendments 33B and 43ZA, which my noble friend Lord McNally spoke to last week at the beginning of the debate on this group.

Like the noble Baroness, Lady Kidron, I strongly support Amendments 154, 218 and 160, tabled by the noble Lord, Lord Stevenson, which would require regulated services to maintain “adequate and appropriate” terms of service, including provisions covering the matters listed in Clause 12. Amendment 44, tabled by the right reverend Prelate the Bishop of Oxford and me, inserts a requirement that services to which the user empowerment duties apply

“must make a suitable and sufficient assessment of the extent to which they have carried out the duties in this section including in each assessment material changes from the previous assessment such as new or removed user empowerment features”.

The noble Viscount, Lord Colville, spoke very well to that amendment, as did the noble Baronesses, Lady Fraser and Lady Kidron.

Amendment 158, also tabled by me and the right reverend Prelate, inserts a requirement that services

“must carry out a suitable and sufficient assessment of the extent to which they have carried out the duties under sections 64 and 65 ensuring that assessment reflects any material changes to terms of service”.

That is a very good way of meeting some of the objections that we have heard to Clause 65 today.

These two amendments focus on risk assessment because the new duties do not have an assessment regime to work out whether they work, unlike the illegal content and children’s duties, as we have heard. Risk assessments are vital to understanding the environment in which the services are operating. A risk assessment can reduce bureaucracy by allowing companies to rule out risks which are not relevant to them, and it can increase user safety by revealing new risks and future-proofing a regime.

The Government have not yet provided, in the Commons or in meetings with Ministers, any proper explanation of why risk assessment duties have been removed along with the previous adult safety duties, and they have not explained in detail why undertaking a risk assessment is in any way a threat to free speech. They are currently expecting adults to manage their own risks, without giving them the information they need to do so. Depriving users of basic information about the nature of harms on a service prevents them taking informed decisions as to whether they want to be on it at all.

Without these amendments, the Bill cannot be said to be a complete risk management regime. There will be no requirement to explain to Ofcom or to users of a company’s service the true nature of the harms that occur on its service, nor the rationale behind the decisions made in these two fundamental parts of the service. This is a real weakness in the Bill, and I very much hope that the Minister will listen to the arguments being made this afternoon.

--- Later in debate ---
Lord Parkinson of Whitley Bay (Con)

I will come on to say a bit more about how Ofcom goes about that work.

The Bill will ensure that providers have the information they need to understand whether they are in compliance with their duties under the Bill. Ofcom will set out how providers can comply in codes of practice and guidance that it publishes. That information will help providers to comply, although they can take alternative action if they wish to do so.

The right reverend Prelate’s amendments also seek to provide greater transparency to Ofcom. The Bill’s existing duties already account for this. Indeed, the transparency reporting duties set out in Schedule 8 already enable Ofcom to require category 1, 2A and 2B services to publish annual transparency reports with relevant information, including about the effectiveness of the user empowerment tools, as well as detailed information about any content that platforms prohibit or restrict, and the application of their terms of service.

Amendments 159, 160 and 218, tabled by the noble Lord, Lord Stevenson, seek to require user-to-user services to create and abide by minimum terms of service recommended by Ofcom. The Bill already sets detailed and binding requirements on companies to achieve certain outcomes. Ofcom will set out more detail in codes of practice about the steps providers can take to comply with their safety duties. Platforms’ terms of service will need to provide information to users about how they are protecting users from illegal content, and children from harmful content.

These duties, and Ofcom’s codes of practice, ensure that providers take action to protect users from illegal content and content that is harmful to children. As such, an additional duty to have adequate and appropriate terms of service, as envisaged in the amendments, is not necessary and may undermine the illegal and child safety duties.

I have previously set out why we do not agree with requiring platforms to set terms of service for legal content. In addition, it would be inappropriate to delegate this much power to Ofcom, which would in effect be able to decide what legal content adult users can and cannot see.

Amendment 155, tabled by my noble friend Lord Moylan, seeks to clarify whether and how the Bill makes the terms of service of foreign-run platforms enforceable by Ofcom. Platforms’ duties under Clause 65 apply only to the design, operation and use of the service in the United Kingdom and to UK users, as set out in Clause 65(11). Parts or versions of the service which are used in foreign jurisdictions—

Baroness Fox of Buckley (Non-Afl)

On that, in an earlier reply the Minister explained that platforms already remove harmful content because it is harmful and because advertisers and users do not like it, but could he tell me what definition of “harmful” he thinks he is using? Different companies will presumably have a different interpretation of “harmful”. How will that work? It would mean that UK law will require the removal of legal speech based on a definition of harmful speech designed by who—will it be Silicon Valley executives? This is the problem: UK law is being used to implement the removal of content based on decisions that are not part of UK law but with implications for UK citizens who are doing nothing unlawful.

Lord Parkinson of Whitley Bay (Con)

The noble Baroness’s point gets to the heart of the debate that we have had. I talked earlier about the commercial incentive that there is for companies to take action against harmful content that is legal which users do not want to see or advertisers do not want their products to be advertised alongside, but there is also a commercial incentive to ensure that they are upholding free speech and that there are platforms on which people can interact in a less popular manner, where advertisers that want to advertise products legally alongside that are able to do so. As with anything that involves the market, the majority has a louder voice, but there is room for innovation for companies to provide products that cater to minority tastes within the law.

--- Later in debate ---
Lord Griffiths of Burry Port (Lab)

My Lords, it is a pleasure to be collaborating with the noble Baroness, Lady Morgan. We seem to have been briefed by the same people, been to the same meetings and drawn the same conclusions. However, there are some things that are worth saying twice and, although I will try to avoid a carbon copy of what the noble Baroness said, I hope the central points will make themselves.

The internet simply must be made to work for its users above all else—that is the thrust of the two amendments that stand in our names. Through education and communication, the internet can be a powerful means of improving our lives, but it must always be a safe platform on which to enjoy a basic right. It cannot be said often enough that to protect users online is to protect them offline. To create a strict division between the virtual and the public realms is to risk ignoring how actions online can have life and death repercussions, and that is at the heart of what these amendments seek to bring to our attention.

I was first made aware of these amendments at a briefing from the Samaritans, where we got to know each other. There I heard the tragic accounts of those whose loved ones had taken their own lives due to exposure to harmful content online. I will not repeat their accounts—this is not the place to do that—but understanding only a modicum of their grief made it obvious to me that the principle of “safest option by default” must underline all our decision-making on this.

I applaud the work already done by Members of this House to ensure the safety of young people online. Yet it is vital, as the noble Baroness has said, that we do not create a drop-off point for future users—one in which turning 18 means sudden exposure to the most harmful content lurking online, as it is always there. Those most at risk of suicide due to exposure to harmful content are aged between their late teens and early 20s. In fact, a 2017 inquiry into the suicides of young people found harmful content accessed online in 26% of the deaths of under 20s and 13% of the deaths of 20 to 24 year-olds. It is vital for us to empower users from their earliest years.

In the Select Committee—I see fellow members sitting here today—we have been looking at digital exclusion and the need for education at all levels for those using the internet. Establishing good habits in the earliest years is the right way to start, but it goes on after that, because the world that young people go on to inhabit in adulthood is one where they can already be in control of the internet—provided they had that education earlier. Adulthood comes with the freedom to choose how one expresses oneself online—of course it does—but this must not come at the cost of their continuing freedom from the most insidious content that puts their mental health at risk. Much mention has been made of the triple shield and I need not go there again. Its origins and perhaps deficiencies have been mentioned already.

The Center for Countering Digital Hate recently conducted an experiment, creating new social media accounts that showed interest in body image and mental health. This study found that TikTok served suicide-related content to new accounts within 2.6 minutes, with eating disorder content being recommended within 8 minutes. At the very least, these disturbing statistics tell us that users should have the option to opt in to such content, and not have to suffer this harm before later opting out. While the option to filter out certain categories of content is essential, it must be toggled on by default if safety is to be our primary concern.

The principle of safest by default creates not only a less harmful environment, but one in which users are in a position to define their own online experience. The space in which we carry out our public life is increasingly located on a small number of social media platforms—those category 1 platforms already mentioned several times—which everyone, from children to pensioners, uses to communicate and share their experiences.

We must then ensure that the protections we benefit from offline continue online: namely, protection from the harm and hate that pose a threat to our physical and mental well-being. When a child steps into school or a parent into their place of work, they must be confident that those with the power to do so have created the safest possible environment for them to carry out their interactions. This basic confidence must be maintained when we log in to Twitter, Instagram, TikTok or any other social media giant.

Baroness Fox of Buckley (Non-Afl)

My Lords, my Amendment 43 tackles Clause 12(1), which expressly says that the duties in Clause 12 are to “empower” users. My concern is to ensure that, first, users are empowered and, secondly, legitimate criticism around the characteristics listed in Clause 12(11) and (12), for example, is not automatically treated as abusive or inciting hatred, as I fear it could be. My Amendment 283ZA specifies that, in judging content that is to be filtered out after a user has chosen to switch on various filters, the providers act reasonably and pause to consider whether they have “reasonable grounds” to believe that the content is of the kind in question—namely, abusive or problematic.

Anything under the title “empower adult users” sounds appealing—how can I oppose that? After all, I am a fan of the “taking back control” form of politics, and here is surely a way for users to be in control. On paper, replacing the “legal but harmful” clause with giving adults the opportunity to engage with controversial content if they wish, through enhanced empowerment tools, sounds positive. In an earlier discussion of the Bill, the noble Baroness, Lady Featherstone, said that we should treat adults as adults, allowing them to confront ideas with the

“better ethics, reason and evidence”—[Official Report, 1/2/23; col. 735.]

that has been the most effective way to deal with ideas from Socrates onwards. I say, “Hear, hear” to that. However, I worry that, rather than users being in control, there is a danger that the filter system might infantilise adult users and disempower them by hard-wiring into the Bill a duty and tendency to hide content from users.

There is a general weakness in the Bill. I have noted that some platforms are based on users moderating their own sites, which I am quite keen on, but this will be detrimentally affected by the Bill. Users would no longer be left in charge of their own moderation and would have no powers to decide what is in, for example, Wikipedia or other Wikimedia projects, which are added to, organised and edited by a decentralised community of users. So I will certainly not take the phrase “user empowerment” at face value.

I am slightly concerned about linguistic double-speak, or at least confusion. The whole Bill is being brought forward in a climate in which language is weaponised in a toxic minefield—a climate of, “You can’t say that”. More nerve-rackingly, words and ideas are seen as dangerous and interchangeable with violent acts, in a way that needs to be unpicked before we pass this legislation. Speakers can be cancelled for words deemed to threaten listeners’ safety—but not physical safety; the opinions are said to be unsafe. Opinions are treated as though they cause damage or harm as viscerally as physical aggression. So lawmakers have to recognise the cultural context and realise that the law will be understood and applied in it, not in the abstract.

I am afraid that the language in Clause 12(1) and (2) shows no awareness of this wider backdrop—it is worryingly woolly and vague. The noble Baroness, Lady Morgan, talked about dangerous content, and all the time we have to ask, “Who will interpret what is dangerous? What do we mean by ‘dangerous’ or ‘harmful’?”. Surely a term such as “abusive”, which is used in the legislation, is open to wide interpretation. Dictionary definitions of “abusive” include words such as “rude”, “insulting” and “offensive”, and it is certainly subjective. We have to query what we mean by the terms when some commentators complain that they have been victims of online abuse, but when you check their timelines you notice that, actually, they have been subject just to angry, and sometimes justified, criticism.

I recently saw a whole thread arguing that the Labour Party’s recent attack ads against the Prime Minister were an example of abusive hate speech. I am not making a point about this; I am asking who gets to decide. If this is the threshold for filtering content, there is a danger of institutionalising safe space echo chambers. It can also be a confusing word for users, because if someone applies a user empowerment tool to protect themselves from abuse, the threshold at which the filter operates could be much lower than they intend or envisage but, by definition, the user would not know what had been filtered out in their name, and they have no control over the filtering because they never see the filtered content.

--- Later in debate ---
Baroness Bull (CB)

My Lords, I support the noble Baroness, Lady Buscombe, on the built-in obsolescence of any list. It would very soon be out of date.

I support the amendments tabled by the noble Lord, Lord Clement-Jones, and by the noble Baroness, Lady Morgan of Cotes. They effectively seek a similar aim. Like the noble Baroness, Lady Fraser, I tend towards those tabled by the noble Lord, Lord Clement-Jones, because they seem clearer and more inclusive, but I understand that they are trying for the same thing. I also register the support for this aim of my noble friend Lady Campbell of Surbiton, who cannot be here but who I suspect is listening in. She was very keen that her support for this aim was recorded.

The issue of “on by default” inevitably came up at Second Reading. Then and in subsequent discussions, the Minister reiterated that a “default on” approach to user empowerment tools would negatively impact people’s use of these services. Speaking at your Lordships’ Communications and Digital Committee, on which I sat at the time, Minister Scully went further, saying that the strongest option, of having the settings on in the first instance,

“would be an automatic shield against people’s ability to explore what they want to explore on the internet”.

According to the Government’s own list, this was arguing for the ability to explore content that abuses, targets or incites hatred against people with protected characteristics, including race and disability. I struggle to understand why protecting this right takes precedence over ensuring that groups of people with protected characteristics are, well, protected. That is our responsibility. It is a question of precedence, because switching controls one way is not exactly the same as switching them the other way. It is easy to think so, but the noble Baroness, Lady Parminter, explained very clearly that it is not the same. It is undoubtedly easier for someone in good health and without mental or physical disabilities to switch controls off than it is for those with disabilities or vulnerabilities to switch them on. That is self-evident.

It cannot be right that those most at risk of being targeted online, including some disabled people—not all, as we have heard—and those with other protected characteristics, will have the onus on them to switch on the tools to prevent them seeing and experiencing harm. There is a real risk that those who are meant to benefit from user empowerment tools, those groups at higher risk of online harm, including people with a learning disability, will not be able to access the tools because the duties allow category 1 services to design their own user empowerment tools. This means that we are likely to see as many versions of user empowerment tools as there are category 1 services to which this duty applies.

Given what we know about the nature of addiction and self-harm, which has already been very eloquently explained, it surely cannot be the intention of the Bill that those people who are in crisis and vulnerable to eating disorders or self-harm, for example, will be required to seek and activate a set of tools to turn off the very material that feeds their addiction or encourages their appetite for self-harm.

The approach in the Bill does little to prevent people spiralling down this rabbit hole towards ever more harmful content. Indeed, instead it requires people to know that they are approaching a crisis point, and to have sufficient levels of resilience and rationality to locate the switch and turn on the tools that will protect them. That is not how the irrational or distressed mind works.

So, all the evidence that we have about the existence of harm which arises from mental states, which has been so eloquently set out in introducing the amendments—I refer again to my noble friend Lady Parminter, because that is such powerful evidence—tips the balance in favour, I believe, of setting the tools to be on by default. I very much hope the Minister will listen and heed the arguments we have heard set out by noble Lords across the Committee, and come back with some of his own amendments on Report.

Baroness Fox of Buckley (Non-Afl)

Before the noble Baroness sits down, I wanted to ask for clarification, because I am genuinely confused. When it comes to political rights for adults in terms of their agency, they are rights which we assume are able to be implemented by everyone. But we recognise that in the adult community —this is offline now; I mean in terms of how we understand political rights—there may well be people who lack capacity or are vulnerable, and we take that into account. But we do not generally organise political rights and access to, for example, voting or free speech around the most vulnerable in society. That is not because we are insensitive or inhumane, or do not understand. The moving testimonies we have heard about people with eating disorders and so on are absolutely spot-on accurate. But are we suggesting that the world online should be organised around vulnerable adults, rather than adults and their political rights?

Baroness Bull (CB)

I do not have all the answers, but I do think we heard a very powerful point from the right reverend Prelate. In doing the same for everybody, we do not ensure equality. We need to have varying approaches, in order that everybody has equality of access. As the Bill stands, it says nothing about vulnerable adults. It simply assumes that all adults have full capacity, and I think what these amendments seek to do is find a way to recognise that simply thinking about children, and then that everybody aged 18 is absolutely able to take care of themselves and, if I may say, “suck it up”, is not the world we live in. We can surely do better than that.

--- Later in debate ---
Baroness Fox of Buckley (Non-Afl)

Just to clarify, in a way we have reduced this debate to whether the default position should be on or off, although in fact that is only one aspect of this. My concern, and what I maybe spent too long talking about, is what happens if we turn the toggles to “on”. The assumption we keep making is that once they are on, we are safe. The difficulty is that the categories of what is filtered out after turning them on are not necessarily what the user thinks they are. I am simply asking how you get around that; otherwise, we think it is too easy—turn it on or off; press the button. Is it not problematic for us all if, in thinking you are going to stop seeing hate, what is filtered out as hate turns out actually to be legitimate and interesting political ideas?

Lord Knight of Weymouth (Lab)

As ever, the noble Baroness is an important voice in bursting our bubble in the Chamber. I continue to respect her for that. It will not be perfect; there is no perfect answer to all this. I am siding with safety and caution rather than a bit of a free-for-all. Sometimes there might be overcaution and aspects of debate where the platforms, the regulator, the media, and discussion and debate in this Chamber would say, “The toggles have got it wrong”, but we just have to make a judgment about which side we are on. That is what I am looking forward to hearing from the Minister.

These amendments are supported on all sides and by a long list of organisations, as listed by the noble Baroness, Lady Morgan, and the noble Lord, Lord Clement-Jones. The Minister has not conceded very much at all so far to this Committee. We have heard compelling speeches, such as those from the noble Baroness, Lady Parminter, that have reinforced my sense that he needs to give in on this when we come to Report.

I will also speak to my Amendment 38A. I pay tribute to John Penrose MP, who was mentioned by the noble Baroness, Lady Harding, and his work in raising concerns about misinformation and in stimulating discussion outside the Chambers among parliamentarians and others. Following discussions with him and others in the other place, I propose that users of social media should have the option to filter out content the provenance of which cannot be authenticated.

As we know, social media platforms are often awash with content that is unverified, misleading or downright false. This can be particularly problematic when it comes to sensitive or controversial topics such as elections, health or public safety. In these instances, it can be difficult for users to know whether the information presented to them is accurate. Many noble Lords will be familiar with the deep-fake photograph of the Pope in a white puffa jacket that recently went viral, or the use of imagery for propaganda purposes following the Russian invasion of Ukraine.

The Content Authenticity Initiative has created an open industry standard for content authenticity and provenance. Right now, tools such as Adobe Photoshop allow users to turn on content credentials to securely attach provenance data to images and any edits then made to those images. That technology has now been adopted by camera manufacturers such as Leica and Nikon, so the technology is there to do some of this to help give us some reassurance.

Amendment 38A would allow users to filter out unverified content and is designed to flag posts or articles that do not come from a reliable source or have not been independently verified by a reputable third party. Users could then choose to ignore or filter out such content, ensuring that they are exposed only to information that has been vetted and verified. This would not only help users to make more informed decisions but help to combat the spread of false information on social media platforms. By giving users the power to filter out unverified content, we can help to ensure that social media platforms are not used to spread harmful disinformation or misinformation.

Amendments 42 and 45, in the name of my noble friend Lord Stevenson, are good and straightforward improvements to the Bill, suggesting that Clause 12(11) should include all the protected characteristics in the Equality Act 2010. I listened closely to the argument of the noble Baroness, Lady Buscombe, about listing the various characteristics, but I think it makes much more sense to do what my noble friend is suggesting and just reference the Equality Act. If we chose separately to amend the Equality Act to change protected characteristics, that change would then flow through into this Bill. That seems to be a much more sensible and pragmatic way to proceed. Similarly, in Amendment 45 my noble friend is suggesting that consideration should be given by the Secretary of State as to whether any of these tools should also be available to child users, and we have heard good arguments in favour of that when it comes to dealing with the cliff-edge effect, as described by the noble Baroness, Lady Kidron.

Lastly—and, perhaps the Committee will forgive me, slightly bizarrely—Amendment 283ZA, in the name of the noble Baroness, Lady Fox, to Clause 170(7) in Part 11 invites us to consider, among other things, whether a bot can act reasonably. Given that self-coding bots are now technically possible, I would be fascinated to hear the Minister’s view on whether an autonomous bot can have reason. I asked ChatGPT this question. For the help of the Committee, it replied:

“As an artificial intelligence language model, I do not have consciousness or emotions, and therefore, I do not have a capacity for ‘reason’ in the way that humans understand it. However, I am programmed to use advanced algorithms that allow me to process and analyze large amounts of data, recognize patterns, and provide reasoned responses based on the information available to me. This allows me to simulate a form of reasoning, but it is important to note that it is not the same as human reasoning, as I do not have subjective experiences or personal biases. Ultimately, my abilities are limited to the algorithms and data that have been programmed into my system, and I cannot generate my own subjective experiences or judgments.”


That is the view of the algorithm as to whether or not bots can have reason. I look forward to the Minister’s response.

--- Later in debate ---
Lord Parkinson of Whitley Bay (Con)

The content which we have added to Clause 12 is a targeted approach. It reflects input from a wide range of interested parties, with whom we have discussed this, on the areas of content that users are most concerned about. The other protected characteristics that do not appear are, for instance, somebody’s marriage or civil partnership status or whether they are pregnant. We have focused on the areas where there is the greatest need for users to be offered the choice about reducing their exposure to types of content because of the abuse they may get from it. This recognises the importance of clear, enforceable and technically feasible duties. As I said a moment ago in relation to the point made by my noble friend Lady Buscombe, we will keep it under review but it is right that these provisions be debated at length—greater length than I think the Equality Bill was, but that was long before my time in your Lordships’ House, so I defer to the noble Lord’s experience and I am grateful that we are debating them thoroughly today.

I will move now, if I may, to discuss Amendments 43 and 283ZA, tabled by the noble Baroness, Lady Fox of Buckley. Amendment 43 aims to ensure that the user empowerment content features do not capture legitimate debate and discussion, specifically relating to the characteristics set out in subsections (11) and (12). Similarly, her Amendment 283ZA aims to ensure that category 1 services apply the features to content only when they have reasonable grounds to infer that it is user empowerment content.

With regard to both amendments, I can reassure the noble Baroness that upholding users’ rights to free expression is an integral principle of the Bill and it has been accounted for in drafting these duties. We have taken steps to ensure that legitimate online discussion or criticism will not be affected, and that companies make an appropriate judgment on the nature of the content in question. We have done this by setting high thresholds for inclusion in the content categories and through further clarification in the Bill’s Explanatory Notes, which I know she has consulted as well. However, the definition here deliberately sets a high threshold. By targeting only abuse and incitement to hatred, it will avoid capturing content which is merely challenging or robust discussion on controversial topics. Further clarity on definitions will be provided by Ofcom through regulatory guidance, on which it will be required to consult. That will sit alongside Ofcom’s code of practice, which will set out the steps companies can take to fulfil their duties.

Baroness Fox of Buckley (Non-Afl)

I appreciate the Minister’s comments but, as I have tried to indicate, “incitement to hatred” and “abuse”, despite people thinking they know what those words mean, are causing huge difficulty legally and in institutions throughout the land. Ofcom will have its work cut out, but it was entirely for that reason that I tabled this amendment. There needs to be an even higher threshold, and this needs to be carefully thought through.

Lord Parkinson of Whitley Bay (Con)

But as I think the noble Baroness understands from that reference, this is a definition already in statute, and with which Parliament and the courts are already engaged.

The Bill’s overarching freedom of expression duties also apply to Clause 12. Subsections (4) to (7) of Clause 18 stipulate that category 1 service providers are required to assess the impact on free expression of their safety policies, including the user empowerment features. This is in addition to the duties in Clause 18(2), which requires all user-to-user services to have particular regard to the importance of protecting freedom of expression when complying with their duties. The noble Baroness’s Amendment 283ZA would require category 1 providers to make judgments on user empowerment content to a similar standard to that required for illegal content. That would be disproportionate. Clause 170 already specifies how providers must make judgments about whether content is of a particular kind, and therefore in scope of the user empowerment duties. This includes making their judgment based on “all relevant information”. As such, the Bill already ensures that the user empowerment content features will be applied in a proportionate way that will not undermine free speech or hinder legitimate debate online.

Amendment 45, tabled by the noble Lord, Lord Stevenson of Balmacara, would require the Secretary of State to lay a Statement before Parliament outlining whether any of the user empowerment duties should be applied to children. I recognise the significant interest that noble Lords have in applying the Clause 12 duties to children. The Bill already places comprehensive requirements on Part 3 services which children are likely to access. This includes undertaking regular risk assessments of such services, protecting children from harmful content and activity, and putting in place age-appropriate protections. If there is a risk that children will encounter harm, such as self-harm content or through unknown or unverified users contacting them, service providers will need to put in place age-appropriate safety measures. Applying the user empowerment duties to child users runs counter to the Bill’s child safety objectives and may weaken the protections for children—for instance, by giving children an option to see content which is harmful to them or to engage with unknown, unverified users. While we recognise the concerns in this area, for the reasons I have set out, the Government do not agree with the need for this amendment.

I will resist the challenge of the noble Lord, Lord Knight, to talk about bots because I look forward to returning to that in discussing the amendments on future-proofing. With that, I invite noble Lords—