Online Safety Bill Debate
Baroness Fox of Buckley (Non-affiliated - Life peer)
Lords Chamber

My Lords, Amendments 233 and 234 from the noble Lord, Lord Knight of Weymouth, were well motivated, so I will be brief. I just have a couple of queries.
First, we need to consider the criteria for who is considered worthy of the privileged status of receiving Ofcom approval as a researcher. We are discussing researchers as though they are totally reliable and trustworthy. We might even think that, if they are academic researchers, they are bound to be. However, there was an interesting example earlier this week of confirmation bias leading to mistakes, when King’s College had to issue a correction to survey data used in the BBC’s “Marianna in Conspiracyland”. King’s College admitted that it had wildly overestimated the numbers of those reading the conspiracy newspaper The Light, and the numbers of those attending what it dubbed conspiracy demonstrations. By the way, BBC Verify has so far failed to verify the mistake it repeated. I give this example not as a glib point but because we cannot simply say that, because researchers are accredited elsewhere, they should be allowed in. I also think that the requirement to give the researchers
“all such assistance as they may reasonably require to carry out their research”
sounds like a potentially very time-consuming and expensive effort.
The noble Lord, Lord Allan of Hallam, raised points around “can’t” or “won’t”, whether this means researchers “must” or “should”, and who decides whether it is ethical that they “should” in all instances. Ethical questions have been raised here, and questions of privacy are not trivial. Studying individuals as specimens of “badthink” or “wrongthink” might appear in this Committee to be in the public interest, but without people’s consent it can be quite damaging. We have to decide which questions fulfil the public interest sufficiently that consent could be overridden in that way.
I do not think this is a slam-dunk, though it looks like a sensible point. I do not doubt that all of us want more research, and good research, and data we can use in arguments, whatever side we are on, but it does not mean we should just nod something through without at least pausing.
My Lords, I declare an interest as a trustee of the International Centre for the Study of Radicalisation at the War Studies department of King’s College London. That is somewhere that conducts research using data of the kind addressed in this group, so I have a particular interest in it.
We know from the kind of debates to which the noble Lord, Lord Knight, referred that it is widely accepted that independent researchers benefit hugely from access to relevant information from service providers when researching online safety matters. That is why my Amendment 234, supported by the noble Lords, Lord Clement-Jones and Lord Knight, aims to introduce a mandatory duty for regulated platforms to give approved researchers access to that data.
As the noble Lord, Lord Knight, said, there are three ways in which this would be done. First, the timeframe for Ofcom’s report would be accelerated; secondly, proposed new Clause 147 would allow Ofcom to appoint the researchers; and, thirdly, proposed new Clause 148 would require Ofcom to write a code of practice on data access, setting up the fundamental principles for data access—a code which, by the way, should answer some of the concerns quite reasonably voiced by the noble Baroness, Lady Fox.
The internet is absolutely the most influential environment in our society today, but it is a complete black box, and we have practically no idea what is going on in some of the most important parts of it. That has a terrible impact on our ability to devise sensible policies and mitigate harm. Instead, we have a situation where the internet companies decide who accesses data, how much of it and for what purposes.
In answer to his point, I can tell the noble Lord, Lord Allan, who they give the data to—they give it to advertisers. I do not know if anyone has bought advertising on the internet, but it is quite a chilling experience. You can find out a hell of a lot about quite small groups of people if you are prepared to pay for the privilege of trying to reach them with one of your adverts: you can find out what they are doing in their bedrooms, what their mode of transport is to get to work, how old they are, how many children they have and so on. There is almost no limit to what you can find out about people if you are an advertiser and you are prepared to pay.
In fact, only the companies themselves can see the full picture of what goes on on the internet. That puts society and government at a massive disadvantage and makes policy-making virtually impossible. Noble Lords should be in no doubt that these companies deliberately withhold valuable information to protect their commercial interests. They obfuscate and confuse policymakers, and they protect their reputations from criticism about the harms they cause by withholding data. One notable outcome of that strategy is that it has taken years for us to be here today debating the Online Safety Bill, precisely because policy-making around the internet has been so difficult and challenging.
A few years ago, we were making some progress on this issue. I used to work with the Institute for Strategic Dialogue using CrowdTangle, a Facebook product. It made a big impact. We were working on a project on extremism, and having access to CrowdTangle revolutionised our understanding of how the networks of extremists that were emerging in British politics were coming together. However, since then, platforms have gone backwards a long way and narrowed their data-sharing. The noble Lord, Lord Knight, mentioned that CrowdTangle has essentially been closed down, and Twitter has basically stopped providing its free API for researchers—it charges for some access but even that is quite heavily restricted. These retrograde steps have severely hampered our ability to gather the most basic data from otherwise respectable and generally law-abiding companies. It has left us totally blind to what is happening on the rest of the internet—the bit beyond the nice bit; the Wild West bit.
Civil society plays a critical role in identifying harmful content and bad behaviour. Organisations such as the NSPCC, the CCDH, the ISD—which I mentioned—the Antisemitism Policy Trust and King’s College London, with which I have a connection, show that this work can make a really big difference.
It is not as though other parts of our economy or society take the same approach. In fact, in most parts of our world there is a mixture of public, regulator and expert access to what is going on. Retailers, for instance, publish what is sold in our shops. Mobile phone operators, hospitals, banks, financial markets, the broadcast media—they all give access, both to the public and to their regulators, to a huge amount of data about what is going on. Once again, internet companies are claiming exceptional treatment—that has been a theme of debates on the Online Safety Bill—as if what happens online should, for some reason, be different from what happens in the rest of the world. That attitude is damaging the interests of our country, and it needs to be reversed. Does anyone think that the FSA, the Bank of England or the MHRA would accept this state of affairs in their regulated markets? They absolutely would not.
Greater access to and availability of data and information about systems and processes would hugely improve our understanding of the online environment and thereby protect the innovation, progress and prosperity of the sector. We should not have to wait for Ofcom to identify new issues and then appoint experts to look at them closely; there should be a broader effort to stay in touch with what is going on on the internet. It is in the nature of regulation that Ofcom will rely heavily on researchers and civil society to help enforce the Online Safety Bill, but this can be achieved only if researchers have sufficient access to data.
As the noble Lord, Lord Allan, pointed out, legislators elsewhere are making progress. The EU’s Digital Services Act gives a broad range of researchers access to data, including civil society and non-profit organisations dedicated to public interest research. The DSA sets out a framework for vetting and access procedures in detail, as the noble Baroness, Lady Fox, rightly pointed out, creating an explicit role for new independent supervisory authorities and digital services co-ordinators to manage that process.
Under Clause 146, Ofcom must produce a report exploring such access within two years of that section of the Bill coming into effect. That is too long. There is no obligation on the part of the regulator or service providers to take this further. No arguments have been put forward for this extended timeframe or relative uncertainty. In contrast, the arguments to speed up the process are extremely persuasive, and I invite my noble friend the Minister to address those.
My Lords, I will address my remarks to government Amendment 268AZA and its consequential amendments. I rather hope that we will get some reassurance from the Minister on these amendments, about which I wrote to him just before the debate. I hope that that was helpful; it was meant to be constructive. I also had a helpful discussion with the noble Lord, Lord Allan.
As has already been said, the real question relates to the threshold and the point at which this measure will kick in. I am glad that the Government have recognised the dangers of encouraging or assisting serious self-harm. I am also grateful for the way in which they have defined it in the amendment, relating it to grievous bodily harm and severe injury. The amendment says that this also
“includes successive acts of self-harm which cumulatively reach that threshold”.
That is important; it means a series of acts rather than just one.
However, I have a question about subsection (10), which states that:
“A provider of an internet service by means of which a communication is sent, transmitted or published is not to be regarded as a person who sends, transmits or publishes it”.
We know from bereaved parents that algorithms have been set up which relay this ghastly, horrible material that incites, encourages and instructs. That is completely different from those organisations that are trying to provide support.
I am grateful to Samaritans for all its help with my Private Member’s Bill, and for the briefing that it provided in relation to this amendment. As it points out, over 5,500 people in England and Wales took their own lives in 2021 and self-harm is
“a strong risk factor for future suicide”.
Interestingly, two-thirds of those taking part in a Samaritans research project said that
“online forums and advice were helpful to them”.
It is important that there is clarity between providing support on the one hand and, on the other, encouraging and goading people into activity which makes their self-harm worse and drags them down towards eventually ending their own lives. Three-quarters of people who took part in that Samaritans research said that they had
“harmed themselves more severely after viewing self-harm content online”.
It is difficult to know exactly where this offence sits and whether it is sufficiently narrowly drawn.
I am grateful to the Minister for arranging for me to meet the Bill team to discuss this amendment. When I asked how it was going to work, I was somewhat concerned because, as far as I understand it, the mechanism is based on the Suicide Act, as amended, which sets out the offence of encouraging or assisting suicide. The problem as I see it is that, as far as I am aware, there has not been a string of prosecutions following the suicides of many young people. We have met their families, and they have been absolutely clear about how their dead child or sibling—whether a child or a young adult—was goaded, pushed and prompted. I recently had outside experience of a similar situation, which fortunately did not result in a death.
The noble Lord, Lord Allan, has already addressed some of the issues around this, and I would not want the amendment to be dropped, because we must address this problem. However, if we are to have an offence here, with a threshold that the Government have tried to define, we must understand why, if assisting and encouraging suicide on the internet is already a criminal offence, nothing has happened and there have been no prosecutions.
Why is subsection (10) in there? It seems to sidestep the whole problem of harmful content being forwarded on through dangerous algorithms. We know that a lot of the people mounting this material are not in the UK and will therefore be difficult to catch. It is the onward forwarding through algorithms that increases the volume of messaging to the vulnerable person and drives them further into the downward spiral they find themselves in—which is perhaps why they went to the internet in the first place.
I look forward to hearing the Government’s response, and to hearing how this will work.
My Lords, this group relates to communications offences. I will speak in support of Amendment 265, tabled by the noble Lord, Lord Moylan, and in support of his opposition to Clause 160 standing part of the Bill. I also have concerns about Amendments 267AA and 267AB, in the name of the noble Baroness, Lady Kennedy. Having heard her explanation, I hope she can come back and clarify some of my concerns.
On Clause 160 and the false communications offence, unlike the noble Lord, Lord Moylan, I want to focus on psychological harm and the challenge it poses for freedom of expression. I know we have debated this before but, in the context of the criminal law, it matters in a different way, and it is worth dwelling on at least some aspects of it.
The offence refers to what is described as causing
“non-trivial psychological or physical harm to a likely audience”.
As I understand it—and perhaps I need some clarity here—it is not necessary for the person sending the message to have intended to cause harm, yet there is a maximum sentence of 51 weeks in prison, a fine, or both. When we consider the nature of the harm we are talking about, we need to see it in the context of a huge cultural shift.
J.S. Mill’s harm principle has now been expanded, as previously discussed, to include traumatic harm caused by words. Speakers are regularly no-platformed for ideas that we are told cause psychological harm, at universities and more broadly as part of the whole cancel culture discussion. Over the last decade, harm and safety have come to refer no longer just to physical safety; the physical and the psychological have been conflated. Historically, we understood physical threats and violence as distinct from speech, however aggressive or incendiary that speech was; we did not say that speech was the same as, or interchangeable with, bullets or knives or violence—and now we do. I want us to at least pause here.
What counts as psychological harm is not a settled question. The worry is that we are unable to ascertain objectively what psychological harm has occurred. This will inevitably lead to endless controversies of interpretation and to subjective claims-making, at least some of it in bad faith. There is no median with respect to how humans view or experience controversial content; there are wildly divergent sensibilities about what is psychologically harmful. The social media lawyer Graham Smith made a really good point when he said that speech is not a physical risk,
“a tripping hazard … a projecting nail … that will foreseeably cause injury … Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people.”
That is true.
We have seen an example of the potential disputes over what creates psychological harm in a case in the public realm over the past week. The former Culture Secretary, Nadine Dorries, who indeed oversaw much of this Bill in the other place, had her bullying claims against the SNP’s John Nicolson MP overturned by the standards watchdog. Her complaints had previously been upheld by the standards commissioner. John Nicolson tweeted, liked and retweeted offensive and disparaging material about Ms Dorries 168 times over 24 hours—which, as they say, is a bit OTT. He “liked” tweets describing Ms Dorries as grotesque, a “vacuous goon” and much worse. It was no doubt very unpleasant for her, and certainly a personalised pile-on—the kind of thing the noble Baroness, Lady Kennedy, just talked about—and Ms Dorries would say it was psychologically harmful. But her complaint was overturned on new evidence. What was this evidence? That Ms Dorries herself was a frequent and aggressive tweeter. So somebody receives something they say causes them psychological harm, and it is now said not to matter because they are the kind of person who causes psychological harm to others. My concern about turning this into a criminal offence is that the courts will be full of those kinds of arguments, which I do not think we want.