Debates between Viscount Younger of Leckie and Baroness Kidron during the 2017-2019 Parliament

Social Media: News

Debate between Viscount Younger of Leckie and Baroness Kidron
Thursday 11th January 2018

Lords Chamber
Baroness Kidron (CB)

My Lords, it is a great privilege to open a debate with such a broad range of informed speakers to follow. The question in front of us produces a number of interrelated and extremely important issues. I shall not attempt to cover them all but, instead, simply to set the scene for the detailed contributions that are to follow.

The interface between humans and information, be it visual, graphic, moving images, sound or text, is as long as our history. Our understanding of what to expect from those interactions is seen through the prism of technological innovations, cultural understanding and legal frameworks. It is encapsulated by the concepts of broadcast and publishing.

In this long history, the online service providers are an anomaly. The military and academic labs where the web originated were home to groups of skilled and active participants in an open web who saw the potential of decentralised networked computers as liberating and democratising. This was a physical network; these were academics and computer scientists bound by cables, not commerce. They did not consider themselves publishers, nor responsible for the content of others.

This view was almost immediately contested and overturned by early court judgments, but the founders of the nascent platforms successfully fought back. Citing the First Amendment, they insisted that their network of small networks had no controlling force and that the occasional misuse or obscenity was a small price to pay for a world with no gatekeepers.

The US “safe harbor” provisions in Section 230 of the Communications Decency Act 1996 allowed online service providers to host, hold and transfer information with no liability for content. This principle was mirrored around the world, including in the e-commerce directive of 2000 that codified online service providers as “mere conduits”. This was Web 1.0.

Much of the internet’s utopian promise came true. But what nobody anticipated, including its founders, was how rapidly it would become highly commercialised. Ironically, the “safe harbor” provisions of Section 230, established to protect the common good from a few dissonant voices, now work against that common good. Those who publish online are incentivised to categorise themselves as online service providers in order to benefit from having no liability for content. It is a commercial advantage that has seen the exponential rise of a vanishingly small number of companies with unparalleled power, no collective oversight and unlimited piles of cash. This is Web 2.0, and it is in that context that we are having our debate.

Amazon has set up a movie studio. Facebook has earmarked $1 billion to commission original content this year. YouTube has fully equipped studios in eight countries. The Twitter Moments strand exists to,

“organize and present compelling content”.

Apple reviews every app submitted to its store,

“based on a set of technical, content, and design criteria”.

By any other frame of reference, this commissioning, editing and curating is for broadcasting or publishing.

In giving evidence to the Communications Committee on 19 December, representatives of Facebook and Google agreed that the vast proportion of their income comes from advertising—87% and 98% respectively. This advertising is embedded in, pops up in between and floats across the content that their users engage with. Sir Martin Sorrell, chief executive of WPP, was clear about what that means when he said that,

“Google, Facebook and others are media companies … They cannot masquerade as technology companies, particularly when they place advertisements”.

In common with publishers and broadcasters, these companies use editorial content as bait for advertising. They aggregate and spread the news, and provide data points and key words: behaviours that determine what is most important, how widely it should be viewed and by whom. In common with news publishers, they offer a curated view of what is going on in the world.

The Silicon Valley companies are content creators, aggregators, editors, information cataloguers, broadcasters and publishers. Indeed, severally and together they publish far more media than any other publisher in any other context—but, in claiming to be “mere conduits”, they are ducking the responsibilities that the rest of the media ecosystem is charged with.

The media is understood to be a matter of huge public and social interest because it affects common values, certain freedoms and individual rights. For the same set of reasons, it is subject to a complex matrix of regulatory and legal frameworks. But publishing and, by extension, broadcasting are not only legal and commercial constructs but cultural constructs with operating norms that reflect a long history of societal values and expectations, one of which is that those involved are responsible for content. They are responsible because, traditionally, they make large sums of money; they are responsible because they juggle those commercial interests with editorial interests; they are responsible because, within those editorial interests, they are expected to balance freedom of expression against the vulnerabilities, sensitivities and rights of the individual; and they are responsible because they are a controlling force over the veracity, availability and quality of information that is central to the outcome of our collective civic life.

In November, there was an outcry after a journalist reported that algorithms were auto-suggesting horrific videos to young users of YouTube Kids. Google’s response was not proactively to look at the content on its kids’ channel but to ask users to flag content, thereby leaving it to pre-schoolers to police the platform. Google did not dispute that the videos were disturbing or that the channel would be better off without them, but in its determination to uphold the fallacy of being a “mere conduit”, it was prepared to outsource its responsibilities to children as young as four and five.

Whatever the protestations, this is not a question of free speech; it is a question of money. The Google representative giving evidence to the Communications Committee said that to moderate all content on YouTube would take a workforce of 180,000 people. Irrespective of the veracity of that statement, for a publisher or broadcaster, checking that your content is safe for children is not an optional extra; it is a price of doing business, a cost before profit. In October last year, Google’s parent company, Alphabet, was worth $700 billion.

I am not suggesting a return to a pre-tech era; nor am I advocating censorship. The media environment has never been, and hopefully will never be, home to a homogenous worldview. Nor should one romanticise its ability to “do the right thing”. It is a changing and fraught public space in which standards and taste are hotly contested and often crushingly low. But editorial standards and oversight, retraction, industry codes, statutory regulation, legal liability, and parliamentary oversight are no hazard to free speech. On the contrary—as information technologies have become ever more powerful, in democracies we demand that they uphold minimum standards precisely to protect free speech from powerful corporate and political interests.

The advances and possibilities of the networked world will always excite and will hopefully, in time, answer some of society’s greatest needs—but these companies occupy a legal space on a false premise, giving them a commercial advantage based on their ability to publish with impunity. That in turn undermines other media, threatens plurality and increasingly contributes to an insupportable cultural environment fuelled by a business model that trades attention for advertising revenue.

Sean Parker, co-founder of Facebook, said that when setting up Facebook the question on the table was:

“How do we consume as much of your time and conscious attention as possible?”.

The answer was that,

“we … give you a little dopamine hit every once in a while, because someone liked or commented on a photo … to get you to contribute more content … It’s a social-validation feedback loop … exploiting a vulnerability in human psychology”.

The hermetic spiral of content ends in ever more polarised views as users become blind to other perspectives, denuding us of a common space. The result is the abuse of public figures and the spread of bullying, hate and misogynist content at unparalleled levels. The ad revenue model fails to compensate content creators adequately and we have seen the wholesale collapse of other creative industries, the long-term cultural costs of which we have yet to calculate.

In the battle for our attention we have seen the weaponisation of information to political ends. While nothing new in itself, the commoditisation of political narratives and the lack of accountability have promoted a surge of fake news, locally and internationally funded, and with it comes a democratic deficit. This was frighteningly illustrated by the outcome of a Channel 4 survey last year in which fewer than 4% of people were able to correctly distinguish false news stories from true ones. The cost goes beyond the cultural and political. Our attention is secured by an eye-watering regime of data collection and with it a disturbing invasion of privacy and free will. The insights and potential for social and political control enabled by unfettered data profiling without redress or oversight undermine our human rights, our rights as citizens and the need for privacy in which to determine who we are as people.

The appropriation of our personal data is predicated on the use of intellectual property law. The very same companies that rigorously avoid editorial standards and regulatory responsibilities for content are happy to employ the protection of terms and conditions running to hundreds of pages that protect their commercial interests. The cherry picking of regulatory structures is at best hypocritical. Lionel Barber, editor of the FT, suggests that we “drop the pretence”. In a soon-to-be-published paper from a group of industry insiders comes the suggestion of a new status of “online content provider”, with an accompanying online responsibility Bill and a new regulator. But perhaps, just as the arrival of networked computers led to a new legal status of “safe harbor”, the arrival of networked tech conglomerates requires an entirely new definition, based on the interrelation of society and technology.

Because, while big tech has yet to wake up to the societal responsibilities of its current businesses, the rest of us are hurtling towards Web 3.0: a fully networked world of smart homes and smart cities that will see the big five companies—seven if we include China—monopolise whole sectors and particular technologies, controlling both demand and supply, mediating all our behaviours and yet remaining beyond the jurisdiction of Governments.

We must never forget the extraordinary potential and social good in the technologies already invented and in use and in those still emerging, including publishing at a grand scale. However, while the internet is young, it is no longer young enough to be exempt from its adult responsibilities. This is no longer an industry in need of protection while it incubates. These are the most powerful companies in the world.

In finishing, I ask the Minister to tell the House whether the scope of the Government’s digital charter will include a review of the legal status of online service providers and an ethical framework for content. Perhaps he will also say whether he agrees with me that the same standards and responsibilities should apply to the media activities of online service providers in parity with other media players. Finally, what steps are the Government taking to create an international consensus for a global governance strategy for online service providers? I beg to move.

Viscount Younger of Leckie (Con)

My Lords, I may sound like a long-playing record, but in this debate we have just a few minutes to spare on timings. I ask that every Back-Bench speech concludes as the clock reaches four minutes, as otherwise the wind-up speeches may have to be shortened.