Monday 12th November 2018

Lords Chamber
Question for Short Debate
19:35
Asked by
Lord Stevenson of Balmacara

To ask Her Majesty's Government what plans they have to (1) impose a statutory duty of care upon large providers of social media services within the United Kingdom in respect of the users or members of those services; and (2) establish a regulator to enforce such a duty.

Lord Stevenson of Balmacara (Lab)

My Lords, in some senses this short debate today is a continuation of good and important discussions we had during the passage of the Digital Economy Act 2017 and the Data Protection Act 2018—the same stars, perhaps, a smaller audience and a much smaller scale, but nevertheless the points may be similar. In particular, the debate that I hope we shall have tonight builds on concerns about the approach being taken under the Digital Economy Act to require age verification for access to commercial pornography; it picks up on the pioneering work led by the noble Baroness, Lady Kidron, during the passage of the Data Protection Act 2018, requiring all internet providers to have age-appropriate systems in place; and it relates to discussions we had on the same Bill about the possibility of introducing a personal copyright for data and the question of whether individuals could be data controllers of their own data—issues which I hope will be picked up by the new Centre for Data Ethics and Innovation.

The three areas I want to concentrate on tonight are: first, who is responsible in government for this whole area of policy and what is the current timetable? Secondly, what is the ambition? Is it a charter, a voluntary code or primary legislation? What is it? Thirdly, I want to use this debate to suggest that the Government should legislate to place a duty of care on social media companies, enforced by a trusted regulator and underpinned by direct responsibility and regulations, to protect people from reasonably foreseeable harms when they are using social media services.

On the first point of who is in charge, we welcomed the Government’s May 2018 response to the Green Paper consultation, announcing a White Paper in the near future and setting out plans for legislation that will cover:

“the full range of online harms, including both harmful and illegal content”.

That is a quote from the Statement. I read, in an interview in the Daily Telegraph, that the Home Secretary is now promising new laws to regulate social media firms, saying:

“We will therefore be bringing some form of legislation which we will set out in a White Paper on online harms in the winter”.

I take that to be a way of saying "soon"—hopefully, the curious phrase "some form of legislation" is something the Minister can unpick when he gets up to respond later this evening. Having said that, legislating to ban illegal content is one thing, and difficult enough, but defining, let alone banning, what is "harmful" is brave and will lead us into subjective decisions about material, which will always be problematic.

The previous DCMS Secretary of State was fond of referencing an idea for a digital charter. That has gone a bit quiet recently. Can the Minister give us an update? What is it? How is it to be established? Will it have the effect of primary legislation? Is this the legislation the Home Secretary is referring to in his gnomic Statement? Will there be powers to fine and ban?

Who else is prowling around in this jungle? The new Secretary of State for Health is calling for action on online harms, focusing on the mental health impacts of social media on young people and announcing, at the time of his party conference, that he had asked the Chief Medical Officer to draw up screen-time guidelines. Artificial intelligence is also a concern here, so BEIS has an interest. What happens in Scotland, Wales or Northern Ireland? It is a crowded field. In a sense, all this activity is good, but it leaves open the question of who is leading on this. The Home Office does not normally share responsibilities willingly, and joint legislation is not a model that generally works well in Whitehall. Can the Minister confirm whether DCMS is still in the lead and what the current timetable is? Would spring be a fairer assessment than winter?

Secondly, what is the ambition? Over the last few months, more evidence has emerged from authoritative sources of the harms caused by social media. Ofcom and the ICO jointly published some research on the harms experienced by adult internet users, with 45% indicating that they have experienced some form of online harm.

Last week we heard in the other place the Information Commissioner’s startling evidence to the Select Committee about the Cambridge Analytica scandal, and there is more to come on that. Only 10 days ago, the Law Commission published a very interesting scoping review of the current law on abusive and offensive online communications, confirming that there were weaknesses in the current regime. It is doing more work on the nature of some of the offending behaviour in the online environment and the extra degrees of harm that it can cause. It is also looking at the effective targeting of serious harm and criminality, and at eliminating overlapping offences and the ambiguity of terminology concerning what is or is not “obscene”. The NSPCC has recently highlighted what it calls the “failure” of self-regulation in this area. The Children’s Commissioner has also called for action, saying:

“The rights enjoyed by children offline must be extended online”.

One problem here is clearly evidence, which is vital to drafting effective legislation, but it is not easy to pin down evidence on fast-moving, innovative services like the internet. The software of social media services changes every week and perhaps more often—every day—and it will be difficult to isolate long-term impacts from particular services through "gold standard" randomised controlled trials. The potential range is very wide. In their response to the Green Paper consultation, the Government said:

“Potential areas where the Government will legislate include the social media code of practice, transparency reporting and online advertising”.

They also referred to,

“platform liability for illegal content; responding to the … Law Commission Review of abusive communications online; and working with the Information Commissioner’s Office on the age-appropriate design code”.

They added that a White Paper would also allow them to incorporate,

“new, emerging issues, including disinformation and mass misuse of personal data and work to tackle online harms”.

That all sounds great but questions remain. Will this result in a statutory code and regulations? Will there be penalties for non-compliance or breaches? If so, will they be on the right scale, and by whom will they be administered? Will it be Ofcom or a new regulator? And what about companies based outside the UK?

We come back to the basic question of how we regulate an innovative and fast-moving sector, largely headquartered outside the UK, and what tools we have available. If it is true that the technologies in use today represent only 10% of what is likely to be introduced in the next decade or so, how do we future-proof our regulatory structures? This is where the idea of a duty of care comes in. Following public health scares in the 1990s, the Health and Safety Executive adopted a rigorous version of the “precautionary principle”, requiring a joint approach to as yet unknown risks and placing the companies offering such services in the forefront of efforts to limit the harms caused by products and services that threaten public health and safety, but always working in partnership with the regulator.

We might find that this principle is already in play in this sector. In response to a Written Question that I put down earlier this year, the noble Baroness, Lady Buscombe, confirmed that a duty of care contained in the Health and Safety at Work etc. Act 1974 applies to artificial intelligence deployed in the workplace. Therefore, robotic machines are caught by the Act.

That principled approach is now being advocated by a growing number of organisations and individuals—indeed, it was mentioned by the Home Secretary in the interview I have already quoted. The Carnegie UK Trust has suggested that the way to do this is for primary legislation to place a duty of care on the social media companies to prevent reasonably foreseeable harm befalling their customers or users. This builds in a degree of future-proofing and encompasses the remarkable breadth of activity that one finds on social networks.

This approach is based on a long history of legislation protecting against harms: the Occupiers’ Liability Act 1957, which is still in force today; the Health and Safety at Work etc. Act 1974, which contains three duties of care; and the Health and Safety Executive, the regulator, which has stood the test of time. It is interesting that both regimes defend the public interest in areas that might at first glance be considered remote from the public interest—private land and commercial workplaces—but in truth they should serve as an example to us in regulating, in the public interest, these newly powerful technologies. After all, social networks are environments built in code by private companies for what are often super-profits. Everything that happens in those environments either is governed by code that the company has provided or takes place under the terms and conditions that the companies set.

Imposing a duty of care on social media companies might produce a mutual advantage in practice. A duty of care is not about total risk reduction, stifling all innovation; it is about a company having a legal responsibility to have a clear grasp of what risks are inherent in its current and future products and services, and then taking the right steps proportionate to the severity of those risks: highly risky activity with high-potential harms requires strong action; low-risk activity, far less or even none. The companies that can show they are taking reasonable actions to mitigate the harm that their services can cause will have a competitive advantage. The Ofcom/ICO study shows that there is considerable concern among users about what the social media companies are doing.

We all care about red tape. Bad regulation is to be avoided, not least because it represents a cost to the economy. However, good regulation is an investment: a company investing in actions to prevent reasonably foreseeable harms is following the most economically efficient route to reducing those harms. Otherwise the costs fall on society. If it is right to operate a "polluter pays" principle, whereby the costs of pollution prevention and control measures are met by the polluter, why is that principle not equally valid for social media companies?

Finally, the choice of regulator will be important. Under this proposal, the regulator does not merely fine or sanction but plays an active role to help companies help themselves. The regulator should gather and broker best practice across the industry. We probably need to look at best practice in financial services and environmental regulation, and even at the Bribery Act 2010 and the strong penalties under the Health and Safety at Work etc. Act 1974. We should also consider whether personal liability should attach to the directors and executives of the companies that are guilty of transgression.

In conclusion, I put it to the Minister that we now have enough credible evidence of harms emerging to invoke the well-established precautionary principle, and that the answer to many of the problems we can see in this fast-developing sector, many of which are raised by the Green Paper, may lie in moving to a joint system of risk-based regulation for social media companies operating in the UK, backed by a powerful regulator. We look forward to the Government’s White Paper, as well as to the answers to my initial questions, and to debating these issues further.

19:46
Baroness Kidron (CB)

I thank the noble Lord, Lord Stevenson of Balmacara, for introducing this timely debate and illustrating why it is so important. I also thank him for his kind words. I refer the House to my broad interests in this area.

The statutory duty of care as set out by Will Perrin and Professor Lorna Woods is an important and very welcome prospect. A duty of care is proportionate. The higher the risk, the greater the responsibility of the company to consider its impact in advance. A duty of care is a concept that users themselves can understand. It offers an element of future-proofing, since companies would have to evaluate the risk of a service or product failing to meet the standard of “reasonably foreseeable harm”. It would also ensure that powerful global companies that hide behind the status of being “mere conduits” are held responsible for the safety of the online services they provide. However, a duty of care works only if it applies to all digital services, all harms and all users.

The risk of drawing too narrowly the parameters with which services must comply is highlighted by the provisions of the Digital Economy Act 2017, which sought to restrict children's access to pornography based on scale and yet failed to bring platforms such as Twitter within scope, despite 500,000 pornographic images being posted daily. Equally, if the duty of care applies to some harms and not others, the opportunity to develop a systemic approach will be missed. Many headlines are preoccupied with the harms associated with content or contact, but there is a host of others. For example, behavioural design—otherwise known as "nudge and sludge"—is a central component of many of the services we use. The nudge pushes us to act in the interests of the online service, while the sludge features are those deliberately designed to undermine or obfuscate our ability to act in our own best interests. It is designed to be addictive and involves the deliberate manipulation of free will.

It is also necessary to consider how a duty of care characterises whom we are protecting. We know that children often experience specific harms online differently from adult users. Some categories of people whom we would not consider vulnerable in other settings become targets online—for example, female MPs or journalists. Some harms are prejudicial to whole groups. Examples are the racial bias found in algorithms used to determine bail conditions and sentencing terms in the US, or the evidence that just a handful of sleep-deprived children in a classroom diminishes the academic achievement of the entire class. Of course, there are harms to society as a whole, such as the undeclared political profiling that influences electoral outcomes.

I understand that the proposal for a duty of care policy is still under consideration, but I would be grateful if the Minister would outline the Government’s current thinking about scope, including the type and size of services, what harms the Government seek to address and whether they will be restricted to harms against individuals.

When setting out their safety strategy in 2017, the Government made a commitment that what is unacceptable offline should be unacceptable online. That is an excellent place to start, not least because the distinction between online and offline increasingly does not apply. The harms we face are cross-cutting and only by seeing them as an integrated part of our new augmented reality can we begin to consider how to address them.

But defence against harm is not the only driver; we should hope that the technology we use is designed to fulfil our rights, to enable our development and to reflect the values embodied in our laws and international agreements. With that in mind, I propose four pillars of safety that might usefully be incorporated into a broader strategy: parity, safety by design, accountability and enforcement. Parity online and offline could be supported by the publication of guidance to provide clarity about how existing protections apply to the digital environment. The noble Lord, Lord Stevenson, mentioned the Health and Safety at Work Act, and the Law Commission recently published a scoping report on abusive and offensive online communications.

Alongside such sector-by-sector analysis, the Government might also consider an overarching harmonisation Bill. Such a Bill would operate in a similar way to Section 3 of the Human Rights Act by creating an obligation to interpret legislation in a way that creates parity of protection and redress online and offline to the extent that it is possible to do so.

This approach applies also to international agreements. At the 5Rights Foundation we are supporting the United Nations Committee on the Rights of the Child in writing a general comment that will formally outline the relevance of the 40-plus articles of the convention to the digital environment. Clarifying, harmonising, consolidating and enhancing existing agreements, laws and regulations would underpin the parity principle and deliver offline norms and expectations in online settings. Will the Minister say whether the Government are considering this approach?

The second pillar is the widely supported principle of safety and privacy by design. In its March 2018 report Secure by Design, the DCMS concluded that government and industry action was "urgently" required to ensure that internet-connected devices have,

“strong security … built in by design”.

Minimum universal standards are also a demand of the Department for Business, Energy and Industrial Strategy and the consumer organisation Which?. They are also a central concern of the Child Dignity Alliance technical working group to prevent the spread of images of child sexual abuse. The working group will publish its report and make recommendations on Friday.

We should also look upstream at the design of smart devices and operating systems. For example, if Google and Apple were to engineer safety and privacy by design into the Android and iOS operating systems, it would be transformative.

There is also the age-appropriate design code that many of us put our names to. The Government's response to the safety strategy acknowledges the code, but it is not clear that they have recognised its potential to address a considerable number of interrelated harms, nor its value as a precedent for safety by design that could be applied more widely. At the time, the Minister undertook that the Secretary of State would work closely in consultation with the Information Commissioner and me to ensure that the code is robust and practical, and meets the development needs of children. I ask the Minister to restate that commitment this evening.

The third pillar is accountability—saying what you will do, doing what you said and demonstrating that you have done it. Accountability must be an obligation, not a tool of lobbyists to account only for what they wish us to know. The argument made by services that they cannot publish data about complaints, or offer a breakdown of data by age, harm and outcome because of commercial sensitivities, remains preposterous. Research access to commercial data should be mandated so that we can have independent benchmarking against which to measure progress, and transparency reporting must be comprehensive, standardised and subject to regulatory scrutiny.

This brings me to enforcement. What is illegal should be clearly defined, not by private companies but by Parliament. Failure to comply must have legal consequences. What is contractually promised must be upheld. Among the most powerful ways to change the culture of the online world would be the introduction of a regulatory backstop for community standards, terms and conditions, age restrictions and privacy notices. This would allow companies the freedom to set their own rules, and routine failure by a company to adhere to its own published rules would be subject to enforcement notices and penalties.

Where users have existing vulnerabilities, a higher bar of safety by default must be the norm. Most importantly, the nuanced approaches that we have developed offline to live together must apply online. Any safety strategy worth its title must not balk at the complexity but must cover all harms from the extreme to the quotidian.

While it is inappropriate for me to leap ahead of the findings of the House of Lords committee inquiry on who should be the regulator, it is clear that this is a sector that requires oversight and that all in the enforcement chain need resources and training.

I appreciate the Government's desire to be confident that their response is evidence-based, but this is a fast-moving world. A regulator needs to be independent of industry and government, with significant powers and resources. The priorities of the regulator may change but the pillars—parity, safety by design, accountability and enforcement—could remain constant.

The inventor of the web, Sir Tim Berners-Lee, recently said that,

“the web is functioning in a dystopian way. We have online abuse, prejudice, bias, polarisation, fake news, there are lots of ways in which it is broken”.

It is time to fix what is broken. A duty of care as part of that fix is warmly welcome, but I hope that the Minister will offer us a sneak preview of a much bolder vision of what we might expect from the Government’s White Paper when it comes.

19:57
Baroness Grender (LD)

My Lords, I thank the noble Lord, Lord Stevenson of Balmacara, for initiating this debate on such an important subject. It is timely because, while so much seems to be at the stage of initiation, very little has reached a conclusion, so it is good to take stock. It is good that he has led us through a complex debate with his usual clarity. As ever, it has also been a real treat to hear in more detail about the work that the noble Baroness, Lady Kidron, has been doing in this area. She has already achieved so much in her work on the age-appropriate design code, with full support from these Benches and in particular from my noble friend Lord Clement-Jones. As we have heard, she is not satisfied with that and is pushing on to bigger and better achievements.

As a mum of a Generation Z 13 year-old, I am grateful for everything that the noble Baroness and the noble Lord, Lord Stevenson, are doing in this area. I guess the danger is that we will have sorted this only by the time we get to—what I believe we are now calling—Generation Alpha. It is possible we will look back on this time with horror and wonder what we did, as legislators who failed to move with the times, to a generation of children. While real joy comes from the internet, for a child the dangers are only too real.

The ICO call for evidence regarding the age-appropriate design code is very welcome, and I look forward to hearing the commitment that the noble Baroness, Lady Kidron, will be included every step of the way. An obligation will be placed on providers of online services and apps used by children. I just add that one of the difficulties here is dealing with children playing games such as “Assassin’s Creed”—which many under-18s play but is rated 18 due to bad language and serious gore—in the same way that for years children have watched movies with a slightly older age restriction.

Bar one other child, mine was the last of his contemporaries aged 11 to move from a brick to a smartphone. The head teacher of his secondary school asked all parents to check their children's social media every night. It will come as no surprise to the expert and knowledgeable speakers here tonight that literally no one checks, so groups of children without the knowledge of how to edit themselves are not unusual on platforms from which they are all banned but to which they still manage to sign up. The 5Rights framework correctly identifies that they will struggle to delete their past and need the ability to do just that.

As we know, kids are both tech wizards and extremely naive. You set screen times and safety measures and then discover they have created a new person. You have to release security to download stuff, but you then realise they have accepted the kind of friends who call themselves David Beckham or whatever. At my most recent safeguarding training as a school governor, I was taught that children above 11 are now getting more savvy about online dangers, but it is the 8, 9 and 10 year-olds—or, as I prefer to call them, the Minecraft generation—who have an open door to literally everyone.

It is the school-age child we should continue to ask ourselves questions about when we look at whether the legislation is working. As every school leader or governor knows, safeguarding is taken so seriously that we are trained again and again to check on safeguarding issues the whole time. However, the minute a smartphone is delivered into a child's hand—or to the sibling of a friend, which is much more of a problem—the best safeguarding rules can be cut across and the potential for harm begins. When the NSPCC tells us that children can be groomed through the use of sexting within 45 minutes, we have to act.

I would like us to cast our minds back to 2003—which, in internet years, I guess would be our equivalent of medieval times—when the Communications Act placed a duty on Ofcom to set standards for the content of programmes, including,

“that generally accepted standards are applied to the content of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material”.

That requirement stemmed from a consensus at the time that broadcasting, by virtue of its universality in virtually every home in the country—and therefore its influence on people's lives—should abide by certain societal standards. Exactly the same could be said now about social media, which is even more ubiquitous and, arguably, more influential, especially for young people.

However, it was striking to read the evidence given recently to the Communications Select Committee by the larger players—which, I must point out, is still in draft form. When those large social media companies were asked whether they could ensure a similar approach, they seemed to be seeking greater clarity and definition of what constitutes harm and to whom it might be done, rather than saying, "Where do I sign?"

When the Minister responds, perhaps he could explain what is different now from 2003. If in 2003 there was general acceptance relating to the content of programmes for television and radio, protecting the public from offensive and harmful material, why have those definitions changed, or what makes them undeliverable now? Why did we understand what we meant by "harm" in 2003 but appear to ask what it is today?

The digital charter was welcomed in January 2018 and has been a valuable addition to this debate. We hope for great progress in the White Paper, which I understand will be produced in early 2019. However, I am sure that others know better than me and perhaps the Minister will tell us. When he does, will he give us a sneak peek at what progress the Government are making in looking at online platforms—for instance, on legal liability and sharing of content? It would be good to know whether the scales are now moving towards greater accountability. I understand that Ofcom was a witness at the Commons DCMS Select Committee last week. It said that discussions had been open and positive and we would like to hear more.

I recently had the privilege of being on the Artificial Intelligence Select Committee. Our report Ready, Willing and Able? made clear that there is a need for much greater transparency in this area. Algorithms and deep neural networks that cannot be accountable should not be used on humans until full transparency is available. As the report concludes:

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take”.

I look forward to the debate on that report next week.

As with the AI Select Committee investigation, it is clear in this debate that there are many organisations in the field—from the ICO to Ofcom, from the Centre for Data Ethics to the ASA. The question becomes: is a single body required here, or do we, as a Parliament, increase resources and put greater responsibility into one existing organisation? The danger of a lack of clarity and consistency becomes apparent if we do neither.

I would welcome a comment from the Minister on the latest efforts in Germany in this area, with its network enforcement law (NetzDG) and its threatened fines of large sums if platforms do not rapidly take down hate speech and other illegal content. Does the Minister believe that it is possible to do that here? I was interested to hear that, as a result of such changes in German law, Facebook has had to increase its staff numbers in this safeguarding area—by a disproportionately large number in comparison with anywhere else in Europe.

The need for platforms and larger players to reform themselves regularly is starting to show. In the Lords Communications Select Committee session, Facebook was keen to point out its improvements to its algorithm for political advertising. Indeed, the large players will be quick to point out that they have developed codes and ethical principles. However, the AI Select Committee believes, as the Minister will have seen, that there is a need for a clear ethical code around AI with five principles. First, AI should be for the common good; secondly, it should be intelligible and fair; thirdly, it should not be used to diminish the data rights of individuals, families or communities; fourthly, everyone has the right to be educated to flourish alongside AI; and, fifthly, the power to hurt, destroy or deceive should never be vested in AI. Who could argue with that?

In a warm-up for next week's debate, I wonder whether the Minister believes, as I do, that whether we are pre-Brexit, post-Brexit, or over-a-cliff-without-a-parachute-Brexit—which is currently looking more likely by the day—we in the UK still have the capacity to lead globally on an ethical framework in this area. In the committee we were also able to provide clarity on responsibility between the regulatory bodies. It was useful work.

One of the first pieces of legislation I successfully amended in this place with colleagues on these Benches was the Criminal Justice and Courts Act 2015. A friend of mine who had been a victim of revenge porn had found how inadequate the legislation was and that the police were unable to act. The debate around it was typical of so many of the debates in this area. A whole generation of legislators—us—born well before the advent of the smartphone was setting laws for a generation who literally photograph everything. The dilemma became about how far ahead of what is already happening in society we need to be. It should be all the way, and it is now a criminal offence with a maximum sentence of two years. Unfortunately, awareness of this law is still quite low, but I would like to guide us towards the deterrence factor in this discussion.

While I have concentrated most of my comments on the future generations, a word needs to be said for the parents. The Cambridge Analytica scandal and the investigation into the spending by Brexit campaigners in the referendum suggest that the general public as well as children need help to protect them from micro-targeting and bias in algorithms—all delivered through social media platforms. There is a danger that this will further break the trust—if there is any left—in the political processes. It is a reminder that while fines and investigations highlight such practices and behaviours, they are not the only steps to take to deal with them.

The forthcoming White Paper will look at institutional responsibilities and whether new regulatory powers should be given to existing regulators or to others. Again, any clarity on the thought process and, of course, the timescale from the Minister will be welcome. While we wait for that White Paper, we can all reach the conclusion that the status quo does not work. Governments cannot wait until this regulation debate becomes outdated. If "harm" as a definition was good enough for TV and radio content in 2003, it is good enough for content on social media platforms today.

20:09
The Parliamentary Under-Secretary of State, Department for Digital, Culture, Media and Sport (Lord Ashton of Hyde) (Con)

My Lords, that was brief but fun. As the noble Lord, Lord Stevenson, said, it is nice to have a sort of flashback to the many interesting debates we have had. Although not many noble Lords are in the Chamber, these are important issues and in a way I feel that I could answer the points made by all noble Lords by saying, “This is very important. I agree that the status quo is not acceptable. We understand that. The points raised will be actively considered. We have not made any final decisions and we will reach an answer in the White Paper”. I could then sit down, but I shall try to be a bit more reassuring and helpful than that.

The temptation is always to try to get the Minister to commit to things that he should not. However, the fact is that the White Paper is not finished and we have not made decisions in a lot of these cases. We are actively considering everything that has been brought up in this debate and we are certainly ready and willing to talk to all noble Lords. The noble Baroness, Lady Kidron, will have heard the Minister give evidence at the session of the Lords Communications Committee earlier today. She talked about these issues, and in a sense the noble Baroness is probably ahead of me. However, we are very happy to continue talking.

As I say, we will be publishing the White Paper on online harms this winter. Although the noble Lord, Lord Stevenson, said generously that perhaps we should suggest the spring, we mean the winter—I think we all know what winter means. That is the current plan and it will be a precursor to legislation if it is required. We will set out our plans to ensure that social media platforms take more responsibility for online harms, which is what we are all aiming for. It is a complex area and we are considering carefully all the options, including but not limited to a statutory duty of care and a regulator.

However, we have to bear in mind that the internet offers huge benefits. We sometimes spend our lives talking about the problems and the harms, but it is important to note that not only here in this country but in developing countries the internet makes a tremendous difference to growing economies, making us more productive and raising living standards. In many cases it enhances the quality of life. It is also true that the industry has taken significant steps with existing industry-led initiatives, in particular through the application of technology. We are not saying that it is perfect, which is why we are producing the White Paper. We know that in some cases legislation may be needed, but there are technological solutions and it would be wrong not to acknowledge that.

In part because of the influence of noble Lords and parliamentarians more widely, there has been a movement towards a “Think safety first” approach, which has been mentioned. In that, the safety considerations are embedded into the product development. For example, Facebook, Instagram and Apple have all recently brought in tools for users to monitor and limit their screen time. That shows the clear role that technology has to play in tackling online harms. However, we agree that more can be done and we will set out our plans in the White Paper to support the development and adoption of safety technologies, and more importantly to empower and educate users.

We have already said that as a Government we will bring forward legislation if that is necessary. We have pointed to a social media code of practice and transparency reporting as areas where we think that legislative action may be required. We are also exploring whether additional measures are needed. We know that there is public concern about a broad range of online activity ranging from terrorism to child sexual exploitation, along with children’s access to inappropriate but legal content. Of course, the boundary between the legal and the illegal is not always easy to define. We will set out a clear and coherent framework to tackle these issues in a proportionate and appropriate manner, but which importantly will also support the continued growth and innovation of the digital economy. We also do not want to stifle legitimate free speech or prevent innovation, where the UK is a significant global leader.

The White Paper will also address public concerns and ensure continued confidence in the digital economy. We know from a joint report by Ofcom and the ICO that eight out of 10 internet users have concerns about going online. Noble Lords will be aware that the Communications Committee, to which I have just referred, is undertaking an inquiry in this area and we look forward to its conclusions. We support and accept the range and openness of the debate taking place. A great deal of research is being done and we are engaging with industry as well. We have read a lot of the proposals which have been put forward but we are also engaging face to face with stakeholders. For example, the Home Secretary was in Silicon Valley just last week talking to tech companies with particular reference to child sexual exploitation as well as looking at the role of advertising, which was mentioned by the noble Baroness.

The problem with the duty of care model is that, while in some cases it seems to be an easy and good answer, it is not as straightforward as it sounds and needs careful thinking through. We are not against it. We are certainly considering the different models of duty of care because it does not have a fixed meaning in English law. There are different areas, such as health and safety, environmental protection and common law; we are looking carefully at all those. Of course, that includes the model mentioned by the noble Lord, Lord Stevenson, and others, which was put forward recently by Professor Woods and Mr Perrin at the Carnegie Trust. In fact, their proposal applies only to social media companies and focuses on the processes put in place by companies to protect users. We are certainly looking at that and have not written it off in any way.

That is not the only regulatory model that might be appropriate to tackle online harms. We are looking more broadly at a range of options and examples from a wide range of sectors. I cannot be more specific than that at the moment except to say that we are keeping an open mind. We are considering the whole spectrum, from self-regulation on the one hand to a duty of care, a regulator and prescriptive statutory regulation on the other; there are several ways in which that might be put in place. I acknowledge the point made by noble Lords that a duty of care has the benefit of an element of future-proofing, which is another thing we have to consider rather than specific statutory regulation.

The other factor to bear in mind is the international aspect of this. We are working closely with like-minded countries as we design solutions. We are among the leading countries in trying to tackle this issue. Obviously, we have looked at and are considering measures such as the e-safety commissioner in Australia, an ombudsman-like model that can issue fines. The possible problem with an ombudsman is that it is a complaints-based solution and things may need to be done quicker than that; we are looking at that. Similarly, the noble Baroness, Lady Grender, talked about the German rules on taking down illegal content. We think that there are possible conflicts there with EU law but, again, we have not written that off and we are studying it carefully.

I want to take up a number of specific questions. I will allow myself a little more time—not too much longer, do not worry—because we have a whole hour and there are only four of us. The noble Lord, Lord Stevenson, asked about who is in the lead on this. The publication will be a joint effort between the DCMS and the Home Office. Despite what the noble Lord said about the lack of success, the Data Protection Act, which was a joint effort with the DCMS, worked well. Obviously, the wide range of harms that we are looking at are relevant to the Home Office and the DCMS; we are also working closely with other departments.

The digital charter is part of a rolling programme of work. The online harms White Paper is part of the digital charter, which we will continue to update and which includes things such as the consultation on the Centre for Data Ethics and Innovation, our age-verification work—that will come before your Lordships' House very soon—and the White Paper itself.

Noble Lords have asked to whom companies owe a duty or responsibility. We think that platforms should be responsible for protecting their users from experiencing harms through their services. However, defining that in an online environment is complex. We are thinking carefully about how such a responsibility might work in an online environment where, say, half a billion people are online. We have to look at the theory behind this as well as the practicalities. We are considering that.

How will we enforce compliance with whatever regulations we propose? We think that we will need a proportionate suite of graduated, effective sanctions, with the aim of securing future compliance and remedying the wrongs. That comes with a variety of challenges, such as international enforcement. We are considering regulation in other areas, including with international partners in forums such as the G20 and the OECD.

On the companies that are in the scope of the proposed regulatory framework, we are looking methodically at the platforms that will be affected and on which platform users are exposed to the greatest risk. We accept that looking at a risk-based approach would be sensible. That will include, but may not be limited to, social media companies because we want this to be future-proofed and not limited to today’s business models. We are exploring how we might develop a future-proofed approach. We will seek to take a proportionate approach depending on the size of businesses and the risks associated with their activities.

I was asked whether we are looking beyond platforms, at operating systems. We need to influence the development of new and emerging platforms. As part of the design process for any new website or app, all companies should actively ensure that they build a safe experience for their customers. We will promote a "think safety first" approach for all companies and, in the forthcoming White Paper, set out how we will work with industry to ensure that start-ups, SMEs and other companies have the practical tools and guidance they need.

As for harmonisation of legislation, we have said—and noble Lords have repeated it—that what is illegal offline should be illegal online. As has been mentioned, the Law Commission’s recent review of abusive and offensive online communications reported on the parity between legislation offline and online. It concluded that,

“for the most part … abusive online communications are, at least theoretically, criminalised to the same or even a greater degree than equivalent offline offending … Practical and cultural barriers mean that not all harmful online conduct is pursued in terms of criminal law enforcement to the same extent that it might be in an offline context”.

Therefore we welcome the second phase of the Law Commission’s review.

The scope of online harms in the White Paper has not been finally settled, but we are looking to address the full range of harms, from the clearly defined illegal to the legal but harmful. Boundaries between such harms are sometimes hard to define. We are taking a pragmatic approach. There is clearly a place for technology in educational efforts as well as legislation. We will set out a clear and coherent framework that tackles that.

On what I said during the passage of the previous Bill about the Secretary of State's commitments on parity, safety by design, accountability and enforcement, I see no reason to depart from that, albeit we have a new Secretary of State. I have no reason to believe that anything has changed and we will certainly look at those questions.

I repeat to noble Lords that we are trying to keep a very open mind; it is not settled. We welcome all input from noble Lords, particularly those here tonight. We look forward to the Communications Committee’s report on safety on the internet.

Our approach will be guided by some key considerations, including the responsibilities that tech companies should have to prevent and protect against online harms, the importance of innovation to the digital sector, upholding a free and open internet, and the international scale of this challenge. We will set out the details of our approach this winter.

House adjourned at 8.27 pm.