Online Safety Bill Debate
Baroness Anderson of Stoke-on-Trent (Labour - Life peer)
Lords Chamber

My Lords, it is a privilege to follow the noble Baroness, Lady Morgan—and slightly intimidating. I draw the House’s attention to my entries in the register of interests: I am a director of the Antisemitism Policy Trust and a director of HOPE not hate, and I remain the chief executive of Index on Censorship. I have also had appalling experiences online. In all these capacities I have been intimately involved with the passage of this legislation over the last two years. Like every one of your Lordships, I desperately want to see a better and safer internet for all users, especially children and the most vulnerable, but I worry about the unintended consequences of certain clauses, particularly for our collective and legal right of freedom of expression.
There are certain core premises that should guide our approach to online regulation. What is legal offline should be legal online. We need secure and safe communication channels to protect all of us, but especially dissidents and journalists, so end-to-end encryption needs to be safeguarded. Our ability to protect our identities online can be life-saving, for domestic violence victims as much as for political dissidents, so we need to ensure that the principle of online anonymity is protected. Each of these principles is undermined by the current detail of the Bill, and I hope to work with many of your Lordships in the weeks ahead to add additional safeguards.
However, some of my greatest concerns about the current proposals relate to illegal content: the definition of what is illegal, the arbiters of illegality and, in turn, what happens to the content. The current proposals require the platforms to determine what is illegal content and then delete it. In theory this seems completely reasonable, but the reality will be more complicated.
I fear what a combination of algorithms and the threat of corporate prosecution may mean for freedom of expression online. The risk appetite of the platforms is likely to be severely reduced by this legislation. Therefore, I believe that they are likely to err on the side of caution when considering where the illegality threshold falls, leading to over-deletion. This will be compounded by the use of algorithms rather than people to identify illegal content, because algorithms struggle to detect nuance.
I will give your Lordships an example of an unintended consequence that this has already produced. A video of anti-government protests in Lebanon was deleted from some current platforms because an algorithm picked up only one word of the Arabic chants: Hezbollah, an organisation rightly proscribed in the UK. But the video actually featured anti-Hezbollah chants. It was an anti-extremism demonstration and, I would speculate, contained anti-extremist messaging that many of us would like to see go viral rather than be deleted.
Content in Urdu or Arabic is already twice as likely to be deleted from a platform by an algorithm as content in English. This will become even more common unless we tighten the definition of illegality and provide platforms with a digital evidence locker where content can be stored before a final decision on deletion is made, thus protecting our speech online.
The issue of deletion is deeply personal for me. Many of your Lordships may be aware that, as a female Jewish Labour Member of the other place, I was subjected to regular and vicious anti-Semitic and misogynist online abuse—abuse that too often became threats of violence and death. Unfortunately, these threats continue and have a direct effect on my personal security. I know when I am most vulnerable because I see a spike in the comments directed at me online. These comments are monitored—thankfully not by me—and, when necessary, are referred to the police, with the relevant evidence chain, so that people can be prosecuted.
Can the Minister explain how these people will be prosecuted for harassment, or worse, if the content is automatically deleted? How will I know if someone is threatening to kill me if the threat has already gone? I genuinely believe that the Government wish to make people safer online, as do we all, but I fear that this Bill will not only curtail free speech online but make me and others much less safe offline. There is significant work to do to make sure that is not the case.