Online Harms White Paper

Lord Anderson of Ipswich Excerpts
Tuesday 30th April 2019

Lords Chamber
Lord Anderson of Ipswich (CB)

My Lords, it is a pleasure to follow the noble Lord, Lord McNally. We once appeared on “Question Time” together, although it was the Reading University version, rather than the BBC one.

John Perry Barlow, the libertarian and Grateful Dead lyricist who died last year, wrote in 1996 that the internet was,

“creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity”.

To national Governments, those,

“weary giants of flesh and steel”,

he directed a famous warning in his Declaration of the Independence of Cyberspace:

“You are not welcome among us. You have no sovereignty where we gather”.


Those words still have the capacity to inspire, particularly in the start-up culture of Silicon Valley, where First Amendment freedoms are sacred and trust in government is low. Having been lucky enough, as we all have, to live through the early stages of the communications revolution that is transforming our world, and having benefited incalculably from the connections it has given me to people and sources of knowledge that I would never otherwise have encountered, I would go so far as to say, as Wordsworth controversially said of another revolution:

“Bliss was it in that dawn to be alive”.


But as this White Paper repeatedly demonstrates, the scale and intensity with which communication is now possible have brought in their wake the potential for new and serious harms, harms for which counter-speech and alternative narratives are a necessary but insufficient answer. Even the imperative of free speech, central though it is, cuts both ways. Bullies, stalkers and foul-mouthed abusers inhibit the online freedoms of others, in much the same way as anti-social behaviour in the real world drives the most vulnerable from the public square. The risk that free speech will be chilled by overregulation is real and acute. However, underregulation too can inhibit freedom of speech, in particular, the freedom of the women and minority groups who, in Parliament—and, I suspect, elsewhere—attract a disproportionate amount of online abuse. It was saddening, even shocking, to read in the White Paper that 67% of women in the UK experience a feeling of apprehension when thinking about using the internet or social media.

Regulation, to my mind at least, should be a last resort. How sure are we that it is needed? After all, we have laws against the dissemination of terrorist materials, malicious communication, defamation, the incitement of racial and religious hatred and the intentional causing of harassment, alarm and distress. No doubt other such laws could and will be imagined, although never, I hope, the overbroad restrictions on so-called “extremist activity” that were contemplated in the Queen’s Speeches of 2015 and 2016. However, laws of this kind were developed for a world of physical interactions and legal borders. They require perpetrators to be identified and brought to justice in our own jurisdictions. Those who are abroad, or who can effectively ensure their anonymity, cannot be reached. The delicate framework of our analogue laws is not on its own sufficient to contain the turbocharged power of internet communication, let alone to discourage online behaviour that is anti-social rather than unlawful.

What, then, of self-regulation by the internet intermediaries? It already exists, of course, and will continue to be central to any regulatory scheme, but its inadequacy is illustrated by the regular evidence sessions—most recently last week—in which Facebook, Twitter and YouTube are questioned by the Home Affairs Select Committee. They speak of their high standards, terms of service and internal guidelines. They claim credit for recruiting human moderators, for their use of AI and for suspending and deleting accounts. They in turn are criticised for their lack of transparency, for the patchy and inconsistent application of their standards and for their unwillingness to volunteer information that could be of assistance to law enforcement. This is not a satisfactory state of affairs. Partly, that is a function of the sheer size of the task, with hours of video uploaded to YouTube every second, limited numbers of human monitors and algorithms that are good at spotting nipples and rather less good at spotting irony. These problems will continue whoever sets the standards.

But the status quo also reveals a democratic deficit. We in Parliament, not unaccountable executives in California, should be approving the ground rules for those who do business in our country, and an independent regulator, accountable to Parliament, should be encouraging compliance and enforcing where necessary. That is what democratic governance of the internet needs to look like, as some of the tech companies now seem to acknowledge. The concept of the statutory duty of care, proposed by Professor Lorna Woods and given effect in this White Paper, is one that I support, as is the principle that companies cannot realistically be required to check every piece of content for lawfulness before upload, a principle the e-commerce directive in any event requires, whether or not we remain subject to it.

Much of the criticism of the White Paper has centred on the use of nebulous terms such as “trolling”, “extremism”, “harm” and “offence”. These are far too broad to be treated as blanket prohibitions with coercive consequences, and some of them could usefully be lost altogether. But in broadcasting, at least, the concept of harm has proved tolerable in the context of a detailed and context-specific code of practice. My experience of Ofcom, which I should say has extended to representing it in the recent Gaunt case in the English courts and in the European Court of Human Rights, is that with really good guidance even quite broad concepts are capable of being applied with ample regard for human rights. As it explained its approach to me at the time, every case is a freedom of expression case, and it starts with a presumption of freedom. It remains to be seen whether the internet platforms are as susceptible to regulation as the broadcasters with which they so often nowadays share a screen. The sheer volume of material, and the fact that it is not generated by the platforms themselves, mean that the regulator will inevitably have to prioritise.

Some platforms might react by overcensoring the content that they allow to be carried—a risk that surely must be mitigated by some mechanism for independent review. Others might display the “refusal to act responsibly” ascribed to Facebook last week by the Privacy Commissioner of Canada in his statement on his Cambridge Analytica investigation—I hope that Sir Nick Clegg was listening. There will be practical difficulties, some of them unexpected, because, as the Minister said when introducing the White Paper, no one has done it before.

Liam Byrne MP has pointed out that, for all the benefits brought by the previous Industrial Revolution, it took numerous Factories Acts over the course of more than a century before its worst excesses could be curbed. This White Paper leaves many important issues for another day—market concentration, unattributable personalised advertising, lack of algorithmic transparency—but I like it more than I expected to, and I hope to participate in its journey into law.