Social Media Services

Monday 12th November 2018

Lords Chamber
Asked by
Lord Stevenson of Balmacara

To ask Her Majesty's Government what plans they have to (1) impose a statutory duty of care upon large providers of social media services within the United Kingdom in respect of the users or members of those services; and (2) establish a regulator to enforce such a duty.

Lord Stevenson of Balmacara (Lab)

My Lords, in some senses this short debate today is a continuation of the good and important discussions we had during the passage of the Digital Economy Act 2017 and the Data Protection Act 2018—the same stars, perhaps, a smaller audience and a much smaller scale, but nevertheless the points may be similar. In particular, the debate that I hope we shall have tonight builds on concerns about the approach being taken under the Digital Economy Act to require age verification for access to commercial pornography; it picks up on the pioneering work led by the noble Baroness, Lady Kidron, during the passage of the Data Protection Act 2018, requiring all internet providers to have age-appropriate systems in place; and it relates to discussions we had on the same Bill about the possibility of introducing a personal copyright for data and the question of whether individuals could be data controllers of their own data—issues which I hope will be picked up by the new Centre for Data Ethics and Innovation.

The three areas I want to concentrate on tonight are: first, who is responsible in government for this whole area of policy and what is the current timetable? Secondly, what is the ambition? Is it a charter, a voluntary code or primary legislation? Thirdly, I want to use this debate to suggest that the Government should legislate to place a duty of care on social media companies, enforced by a trusted regulator and underpinned by direct responsibility and regulations, to protect people from reasonably foreseeable harms when they are using social media services.

On the first point of who is in charge, we welcomed the Government’s May 2018 response to the Green Paper consultation, announcing a White Paper in the near future and setting out plans for legislation that will cover:

“the full range of online harms, including both harmful and illegal content”.

That is a quote from the Statement. I read, in an interview in the Daily Telegraph, that the Home Secretary is now promising new laws to regulate social media firms, saying:

“We will therefore be bringing some form of legislation which we will set out in a White Paper on online harms in the winter”.

I take that to be a way of saying "soon"—hopefully, the curious phrase "some form of legislation" is something the Minister can unpick when he gets up to respond later this evening. Having said that, legislating to ban illegal content is one thing, and difficult enough, but defining, let alone banning, what is "harmful" is brave and will lead us into subjective decisions about material, which is always going to be problematic.

The previous DCMS Secretary of State was fond of referencing an idea for a digital charter. That has gone a bit quiet recently. Can the Minister give us an update? What is it? How is it to be established? Will it have the effect of primary legislation? Is this the legislation the Home Secretary is referring to in his gnomic Statement? Will there be powers to fine and ban?

Who else is prowling around in this jungle? The new Secretary of State for Health is calling for action on online harms, focusing on the mental health impacts of social media on young people and announcing, at the time of his party conference, that he had asked the Chief Medical Officer to draw up screen-time guidelines. Artificial intelligence is also a concern here, so BEIS has an interest. What happens in Scotland, Wales or Northern Ireland? It is a crowded field. In a sense, all this activity is good, but it leaves open the question of who is leading on this. The Home Office does not normally share responsibilities willingly, and joint legislation is not a model that generally works well in Whitehall. Can the Minister confirm whether DCMS is still in the lead, and what is the current timetable? Would spring be a fairer assessment than winter?

Secondly, what is the ambition? Over the last few months, more evidence has emerged from authoritative sources of the harms caused by social media. Ofcom and the ICO jointly published some research on the harms experienced by adult internet users, with 45% indicating that they have experienced some form of online harm.

Last week we heard in the other place the Information Commissioner’s startling evidence to the Select Committee about the Cambridge Analytica scandal, and there is more to come on that. Only 10 days ago, the Law Commission published a very interesting scoping review of the current law on abusive and offensive online communications, confirming that there were weaknesses in the current regime. It is doing more work on the nature of some of the offending behaviour in the online environment and the extra degrees of harm that it can cause. It is also looking at the effective targeting of serious harm and criminality, and at eliminating overlapping offences and the ambiguity of terminology concerning what is or is not “obscene”. The NSPCC has recently highlighted what it calls the “failure” of self-regulation in this area. The Children’s Commissioner has also called for action, saying:

“The rights enjoyed by children offline must be extended online”.

One problem here is clearly evidence, which is vital to drafting effective legislation, but it is not easy to pin down evidence on fast-moving, innovative services like the internet. The software of social media services changes every week and perhaps more often—every day—and it will be difficult to isolate the long-term impacts of particular services through "gold standard" randomised controlled trials. The potential range is very wide. In their response to the Green Paper consultation, the Government said:

“Potential areas where the Government will legislate include the social media code of practice, transparency reporting and online advertising”.

They also referred to,

“platform liability for illegal content; responding to the … Law Commission Review of abusive communications online; and working with the Information Commissioner’s Office on the age-appropriate design code”.

They added that a White Paper would also allow them to incorporate,

“new, emerging issues, including disinformation and mass misuse of personal data and work to tackle online harms”.

That all sounds great, but questions remain. Will this result in a statutory code and regulations? Will there be penalties for non-compliance or breaches? If so, will they be on the right scale, and by whom will they be administered? Will it be Ofcom or a new regulator? And what about companies based outside the UK?

We come back to the basic question of how we regulate an innovative and fast-moving sector, largely headquartered outside the UK, and what tools we have available. If it is true that the technologies in use today represent only 10% of what is likely to be introduced in the next decade or so, how do we future-proof our regulatory structures? This is where the idea of a duty of care comes in. Following public health scares in the 1990s, the Health and Safety Executive adopted a rigorous version of the "precautionary principle", requiring a joint approach to as yet unknown risks and placing the companies offering such services at the forefront of efforts to limit the harms caused by products and services that threaten public health and safety, but always working in partnership with the regulator.

We might find that this principle is already in play in this sector. In response to a Written Question that I put down earlier this year, the noble Baroness, Lady Buscombe, confirmed that a duty of care contained in the Health and Safety at Work etc. Act 1974 applies to artificial intelligence deployed in the workplace. Therefore, robotic machines are caught by the Act.

That principled approach is now being advocated by a growing number of organisations and individuals—indeed, it was mentioned by the Home Secretary in the interview I have already quoted. The Carnegie UK Trust has suggested that the way to do this is for primary legislation to place a duty of care on the social media companies to prevent reasonably foreseeable harm befalling their customers or users. This builds in a degree of future-proofing and encompasses the remarkable breadth of activity that one finds on social networks.

This approach is based on a long history of legislation protecting against harms: the Occupiers’ Liability Act 1957, which is still in force today; the Health and Safety at Work etc. Act 1974, which contains three duties of care; and the Health and Safety Executive, the regulator, which has stood the test of time. It is interesting that both regimes defend the public interest in areas that might at first glance be considered remote from the public interest—private land and commercial workplaces—but in truth they should serve as an example to us in regulating, in the public interest, these newly powerful technologies. After all, social networks are environments built in code by private companies for what are often super-profits. Everything that happens in those environments either is governed by code that the company has provided or takes place under the terms and conditions that the companies set.

Imposing a duty of care on social media companies might produce a mutual advantage in practice. A duty of care is not about total risk reduction, stifling all innovation; it is about a company having a legal responsibility to have a clear grasp of what risks are inherent in its current and future products and services, and then taking the right steps proportionate to the severity of those risks: highly risky activity with high-potential harms requires strong action; low-risk activity, far less or even none. The companies that can show they are taking reasonable actions to mitigate the harm that their services can cause will have a competitive advantage. The Ofcom/ICO study shows that there is considerable concern among users about what the social media companies are doing.

We all care about red tape. Bad regulation is to be avoided, not least because it represents a cost to the economy. However, good regulation is an investment: a company investing in actions to prevent reasonably foreseeable harms is following the most economically efficient route to reducing those harms. Otherwise, the costs fall on society. If it is right to operate a "polluter pays" principle, whereby the costs of pollution prevention and control measures are met by the polluter, why is that principle not equally valid for the social media companies?

Finally, the choice of regulator will be important. Under this proposal, the regulator does not merely fine or sanction but plays an active role to help companies help themselves. The regulator should gather and broker best practice across the industry. We probably need to look at best practice in financial services and environmental regulation, and even at the Bribery Act 2010 and the strong penalties under the Health and Safety at Work etc. Act 1974. We should also consider whether personal liability should attach to the directors and executives of the companies that are guilty of transgression.

In conclusion, I put it to the Minister that we now have enough credible evidence of harms emerging to invoke the well-established precautionary principle, and that the answer to many of the problems we can see in this fast-developing sector, many of which are raised by the Green Paper, may lie in moving to a joint system of risk-based regulation for social media companies operating in the UK, backed by a powerful regulator. We look forward to the Government’s White Paper, as well as to the answers to my initial questions, and to debating these issues further.