Social Media Services

Baroness Grender Excerpts
Monday 12th November 2018


Lords Chamber
Baroness Grender (LD)

My Lords, I thank the noble Lord, Lord Stevenson of Balmacara, for initiating this debate on such an important subject. It is timely because, while so much seems to be at the stage of initiation, very little has reached a conclusion, so it is good to take stock. It is good that he has led us through a complex debate with his usual clarity. As ever, it has also been a real treat to hear in more detail about the work that the noble Baroness, Lady Kidron, has been doing in this area. She has already achieved so much in her work on the age-appropriate design code, with full support from these Benches and in particular from my noble friend Lord Clement-Jones. As we have heard, she is not satisfied with that and is pushing on to bigger and better achievements.

As a mum of a Generation Z 13-year-old, I am grateful for everything that the noble Baroness and the noble Lord, Lord Stevenson, are doing in this area. I guess the danger is that we will have sorted this only by the time we get to—what I believe we are now calling—Generation Alpha. It is possible we will look back on this time with horror and wonder what we did, as legislators who failed to move with the times, to a generation of children. While real joy comes from the internet, for a child the dangers are only too real.

The ICO call for evidence regarding the age-appropriate design code is very welcome, and I look forward to hearing a commitment that the noble Baroness, Lady Kidron, will be included every step of the way. An obligation will be placed on providers of online services and apps used by children. I just add that one of the difficulties here is dealing with children playing games such as “Assassin’s Creed”—which many under-18s play but which is rated 18 due to bad language and serious gore—in much the same way that, for years, children have watched films with a slightly older age restriction.

Bar one other child, mine was the last of his contemporaries aged 11 to move from a brick phone to a smartphone. The head teacher of his secondary school asked all parents to check their children’s social media every night. It will come as no surprise to the expert and knowledgeable speakers here tonight that literally no one checks, so groups of children without the knowledge of how to edit themselves are not unusual on platforms from which they are all banned but to which they still manage to sign up. The 5Rights framework correctly identifies that they will struggle to delete their past and need the ability to do just that.

As we know, kids are both tech wizards and extremely naive. You set screen times and safety measures and then discover they have created a new person. You have to relax security settings to let them download things, but you then realise they have accepted the kind of friends who call themselves David Beckham or whatever. At my last safeguarding training as a school governor, I was taught that children over 11 are now getting more savvy about online dangers, but it is the eight, nine and 10 year-olds—or, as I prefer to call them, the Minecraft generation—who have an open door to literally everyone.

It is the school-age child about whom we should continue to ask ourselves questions when we look at whether the legislation is working. As every school leader or governor knows, safeguarding is taken so seriously that we are trained again and again to check on safeguarding issues the whole time. However, the minute a smartphone is delivered into a child’s hand—or to the sibling of a friend, which is much more of a problem—the protection of the best safeguarding rules is gone and the potential for harm begins. When the NSPCC tells us that children can be groomed through the use of sexting within 45 minutes, we have to act.

I would like us to cast our minds back to 2003—which, in internet years, I guess would be our equivalent of medieval times—when the Communications Act placed a duty on Ofcom to set standards for the content of programmes, including,

“that generally accepted standards are applied to the content of television and radio services so as to provide adequate protection for members of the public from the inclusion in such services of offensive and harmful material”.

That requirement stemmed from a consensus at the time that broadcasting, by virtue of its universality in virtually every home in the country—and therefore its influence on people’s lives—should abide by certain societal standards. Exactly the same could be said now about social media, which is even more ubiquitous and, arguably, more influential, especially for young people.

However, it was striking to read the evidence given recently to the Communications Select Committee by the larger players—evidence which, I must point out, is still in draft form. When those large social media companies were asked to ensure a similar approach, they seemed to be seeking greater clarity and definition of what constitutes harm and to whom this would happen, rather than saying, “Where do I sign?”

When the Minister responds, perhaps he could explain what has changed since 2003. If in 2003 there was general acceptance relating to the content of programmes for television and radio, protecting the public from offensive and harmful material, why have those definitions changed, or what makes them undeliverable now? Why did we understand what we meant by “harm” in 2003 but appear to ask what it is today?

The Digital Charter was welcomed in January 2018 and has been a valuable addition to this debate. We hope for great progress in the White Paper, which I understand will be produced in early 2019. However, I am sure that others know better than I do, and perhaps the Minister will tell us. When he does, will he give us a sneak peek at what progress the Government are making in looking at online platforms—for instance, on legal liability and the sharing of content? It would be good to know whether the scales are now moving towards greater accountability. I understand that Ofcom was a witness at the Commons DCMS Select Committee last week. It said that discussions had been open and positive, and we would like to hear more.

I recently had the privilege of being on the Artificial Intelligence Select Committee. Our report, Ready, Willing and Able?, made clear that there is a need for much greater transparency in this area. Algorithms and deep neural networks that cannot be held accountable should not be used on humans until full transparency is available. As the report concludes:

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take”.


I look forward to the debate on that report next week.

As with the AI Select Committee investigation, it is clear in this debate that there are many organisations in the field—from the ICO to Ofcom, from the Centre for Data Ethics to the ASA. The question becomes: is a single body required here, or do we, as a Parliament, increase resources and put greater responsibility into one existing organisation? If we do not decide, the danger of a lack of clarity and consistency becomes apparent.

I would welcome a comment from the Minister on the latest efforts in Germany in this area with its network enforcement law and its threatened fines of large sums if platforms do not rapidly take down hate speech and other illegal content. Does the Minister believe that it is possible to do that here? I was interested to hear that, as a result of such changes in German law, Facebook has had to increase its staff numbers in this safeguarding area—by a disproportionately large number in comparison with anywhere else in Europe.

The need for platforms and larger players to reform themselves regularly is starting to show. In the Lords Communications Select Committee session, Facebook was keen to point out its improvements to its algorithm for political advertising. Indeed, the large players will be quick to point out that they have developed codes and ethical principles. However, the AI Select Committee believes, as the Minister will have seen, that there is a need for a clear ethical code around AI with five principles. First, AI should be for the common good; secondly, it should be intelligible and fair; thirdly, it should not be used to diminish the data rights of individuals, families or communities; fourthly, everyone has the right to be educated to flourish alongside AI; and, fifthly, the power to hurt, destroy or deceive should never be vested in AI. Who could argue with that?

In a warm-up for next week’s debate, I wonder whether the Minister believes, as I do, that whether we are pre-Brexit, post-Brexit, or over-a-cliff-without-a-parachute-Brexit—which is currently looking more likely by the day—we in the UK still have the capacity to lead globally on an ethical framework in this area. In the committee we were also able to provide clarity on responsibility between the regulatory bodies. It was useful work.

One of the first pieces of legislation I successfully amended in this place with colleagues on these Benches was the Criminal Justice and Courts Act 2015. A friend of mine who had been a victim of revenge porn had found how inadequate the legislation was and that the police were unable to act. The debate around it was typical of so many of the debates in this area. A whole generation of legislators—us—born well before the advent of the smartphone was setting laws for a generation who literally photograph everything. The dilemma became how far ahead of what is already happening in society we need to be. It should be all the way, and it is now a criminal offence with a maximum sentence of two years. Unfortunately, awareness of this law is still quite low, but I would like to guide us towards the deterrence factor in this discussion.

While I have concentrated most of my comments on future generations, a word needs to be said for the parents. The Cambridge Analytica scandal and the investigation into the spending by Brexit campaigners in the referendum suggest that the general public, as well as children, need help to protect themselves from micro-targeting and bias in algorithms—all delivered through social media platforms. There is a danger that this will further break the trust—if there is any left—in the political process. It is a reminder that, while fines and investigations highlight such practices and behaviours, they are not the only steps needed to deal with them.

The forthcoming White Paper will look at institutional responsibilities and whether new regulatory powers should be given to existing regulators or to others. Again, any clarity on the thought process and, of course, the timescale from the Minister will be welcome. While we wait for that White Paper, we can all reach the conclusion that the status quo does not work. Governments cannot wait until this debate on regulation becomes outdated. If “harm” as a definition was good enough for TV and radio content in 2003, it is good enough for content on social media platforms today.