(1 year, 5 months ago)
Lords Chamber
My Lords, I just want to make some brief comments in support of the principle of what the noble Lord, Lord Knight, is aiming at in this amendment.
The Bill is going to have a profound impact on children in the United Kingdom. We hope that the most profound impact will be that it will significantly advance their interests in terms of safety online. But it will also potentially have a significant impact on what they can access online and the functionality of different services. They are going to experience new forms of age assurance, about which they may have very strong views. For example, the use of their biometric data to estimate their age will be there to protect them, but they may still have strong views about that.
I have said many times that there may be some measures in the Bill that will encourage services to become 18-plus only. That is not adult in the sense of adult content. Ordinary user-to-user social media services may look at the obligations and say, “Frankly, we would much rather restrict ourselves to users from the UK who identify as being 18-plus, rather than have to take on board all the associated liabilities in respect of children”—not because they are irresponsible, but precisely because they are responsible, and they can see that there is a lot of work to do in order to be legally and safely available to those under 18. For all those reasons, it is really important that the child advocacy body looks at things such as the United Nations Convention on the Rights of the Child and the rights of children to access information, and that it is able to take a view on them.
The reason I think that is important—as any politician who has been out and spoken in schools will know—is that children are very often surprising in what they see as their priorities. We make assumptions about their priorities, which can often be entirely wrong. There has been some really good work done on this. There was a project called EU Kids Online, back in the days of the EU, which looked at children right across the European Union and asked them what their experience of being online was like and what was important to them. There are groups such as Childnet International, which for years has been convening groups of children and taking them to places such as the Internet Governance Forum. That always generates a lot of information that we here would not have thought of, about what children feel is really important to them about their online experience.
For all those reasons, it really would be helpful to institutionalise this in the new regime as some kind of body that looks in the round at children’s interests—their interests to stay safe, but also their interests to be able to access a wide variety of online services and to use the internet as they want to use it. I hope that that strengthens the case the noble Lord, Lord Knight, has made for such a body to exist in some kind of coalition-like format.
My Lords, I am afraid that I have some reservations about this amendment. I was trying not to, but I have. As the noble Lord, Lord Allan of Hallam, explained, listening to young people is essential—in general, not being dictated to by them, but understanding the particular ways that they live their lives; the lived experience, to use the jargon. Particularly in relation to a Bill that spends its whole time saying it is designed to protect young people from harm, it might be worth having a word with them and seeing what they say. I mean in an ongoing way—I am not being glib. That seems very sensible.
I suppose my concern is that this becomes a quango. We have to ask who is on it, whether it becomes just another NGO of some kind. I am always concerned about these kinds of organisations when they speak “on behalf of”. If you have an advocacy body for children that says, “We speak on behalf of children”, that makes me very anxious. You can see that that can be a politically very powerful role, because it seems to have the authority of representing the young, whereas actually it can be entirely fictitious and certainly not democratic or accountable.
The key thing we discussed in Committee, which the noble Lord, Lord Knight of Weymouth, is very keen on—and I am too—is that we do not inadvertently deny young people important access rights to the internet in our attempt to protect them. That is why some of these points are here. The noble Baroness, Lady Kidron, was very keen on that. She wants to protect them but does not want to end up with them being denied access to important parts of the internet. That is all good, but I just think this body is wrong.
The only other thing to draw noble Lords’ attention to—I am not trying to be controversial, but it is worth noting—is that child advocacy is currently in a very toxic state because of some of the issues around who represents children. As we speak, there is a debate about, for example, whether the NSPCC has been captured by Stonewall. I make no comment because I do not know; I am just noting it. We have had situations where a child advocacy group such as Mermaids is now discredited because it is seen to have been promoting chest binders for young people, to have gone down the gender ideology route, which some people would argue is child abuse of a sort, advocating that young women remove their breasts—have double mastectomies. This is all online, by the way.
I know that some people would say, “Oh, you’re always going on about that”, but I raise it because it is a very real and current discussion. I know a lot of people who work in education, with young people or in children’s rights organisations, and they keep telling me that they are tearing themselves apart. I just wondered whether the noble Lord, Lord Knight, might note that there is a danger of walking into a minefield here—which I know he does not mean to walk into—by setting up an organisation that could end up being the subject of major culture wars rows or, even worse, one of those dreaded quangos that pretends it is representing people but does not.
(1 year, 5 months ago)
My Lords, I will add to my noble friend’s call for us to consider whether Clause 158 should be struck from the Bill as an unnecessary power for the Secretary of State to take. We have discussed powers for the Secretary of State throughout the Bill, with some helpful improvements led by the noble Baroness, Lady Stowell. This one jars in particular because it is about media literacy; some of the other powers related to whether the Secretary of State could intervene on the codes of practice that Ofcom would issue. The core question is whether we trust Ofcom’s discretion in delivering media literacy and whether we need the Secretary of State to have any kind of power to intervene.
I single out media literacy because the clue is in the name: literacy is a generic skill that you acquire for dealing with the online world; it is not about any specific text. Literacy is a broader set of skills, yet Clause 158 suggests that, in response to specific forms of content or a specific crisis happening in the world, the Secretary of State would want to take this power to direct media literacy efforts. To take something specific and immediate to direct something that is generic and long-term jars and seems inappropriate.
I have a series of questions for the Minister to elucidate why this power should exist at all. It would be helpful to have an example of what kind of “public statement notice”—to use the language in the clause—the Government might want to issue that Ofcom would not come up with on its own. Part of the argument we have been presented with is that, somehow, the Government might have additional information, but it seems quite a stretch that they could come up with that. In an area such as national security, my experience has been that companies often have a better idea of what is going on than anybody in government.
Thousands of people out there in the industry are familiar with APT 28 and APT 29 which, as I am sure all noble Lords know, are better known by the names Fancy Bear and Cozy Bear. These are agents of the Russian state that put out misinformation. There is nothing that UK agencies or the Secretary of State might know about them that is not already widely known. I remember talking about the famous troll factory run by Prigozhin, the Internet Research Agency, with people in government in the context of Russian interference—they would say “Who?” and have to go off and find out. In dealing with threats such as these, between the people in the companies and Ofcom you certainly want a media literacy campaign which tells you about these troll agencies and how they operate and gives warnings to the public, but I struggle to see why you need the Secretary of State to intervene as opposed to allowing Ofcom’s experts to work with company experts and come up with a strategy to deal with those kinds of threat.
The other example cited of an area where the Secretary of State might want to intervene is public health and safety. It would be helpful to be specific; had they had it, how would the Government have used this power during the pandemic in 2020 and 2021? Does the Minister have examples of what they were frustrated about and would have done with these powers that Ofcom would not do anyway in working with the companies directly? I do not see that they would have had secret information which would have meant that they had to intervene rather than trusting Ofcom and the companies to do it.
Perhaps there has been an interdepartmental workshop between DHSC, DCMS and others to cook up this provision. I assume that Clause 158 did not come from nowhere. Someone must have thought, “We need these powers in Clause 158 because we were missing them previously”. Are there specific examples of media literacy campaigns that could not be run, where people in government were frustrated and therefore wanted a power to offer it in future? It would be really helpful to hear about them so that we can understand exactly how the Clause 158 powers will be used before we allow this additional power on to the statute book.
In the view of most people in this Chamber, the Bill as a whole quite rightly grants the Government and Ofcom, the independent regulator, a wide range of powers. Here we are looking specifically at where the Government will, in a sense, overrule the independent regulator by giving it orders to do something it had not thought of doing itself. It is incumbent on the Government to flesh that out with some concrete examples so that we can understand why they need this power. At the moment, as noble Lords may be able to tell, these Benches are not convinced that they do.
My Lords, I will be very brief. The danger with Clause 158 is that it discredits media literacy as something benign or anodyne; it will become a political plaything. I am already sceptical, but if ever there was anything to add to this debate then it is that.
We established in our last debate that the notion of a recognised news publisher will go much broader than a broadcaster. I put it to the Minister that we could end up in an interesting situation where one bit of the Bill says, “You have to protect content from these people because they are recognised news publishers”. Under another bit, however, the Secretary of State will be able to say that, to deal with this crisis, we are going to give a media literacy direction that says, “Please get rid of all the content from this same news publisher”. That is an anomaly that we risk setting up with these different provisions.
On the previous group, I raised the issue of legal speech that was labelled as misinformation and removed in the extreme situation of a public health panic. This was seemingly because the Government were keen that particular public health information was made available. Subsequently, we discovered that those things were not necessarily untrue and should not have been removed. Is the Minister arguing that this power is necessary for the Government to direct that certain things are removed on the basis that they are misinformation—in which case, that is a direct attempt at censorship? After we have had a public health emergency in which “facts” have been contested and shown to not be as black and white or true as the Government claimed, saying that the power will be used only in extreme circumstances does not fill me with great confidence.
(1 year, 5 months ago)
On that point specifically, having worked inside one of the companies, they fear legal action under all sorts of laws, but not under the European Convention on Human Rights. As the Minister explained, it is for public bodies; if people are going to take a case on Article 10 grounds, they will be taking it against a public body. There are lots of other grounds to go after a private company but not ECHR compliance.
My Lords, I genuinely appreciate this debate. The noble Lord, Lord Clement-Jones, made what I thought was a very important point, which is that, in going through the weeds of the Bill—and some people have been involved in it for many years, looking at the detail—it can be easy to forget the free speech point. It is important that it has been raised, but it also constantly needs to be raised. That is the point: it is, as the noble Lord, Lord Allan of Hallam, admitted, a speech-restricting Bill where we are working out the balance.
I apologise to the noble and learned Lord, Lord Hope of Craighead, for not acknowledging that he has constantly emphasised the distinction between free speech and free expression. He and I will not agree on this; it is that we do not have time for the argument now, rather than my not understanding. But he has been diligent in his persistence in trying at least to raise the issues, and that is important.
I was a bit surprised by the Minister’s response because, for the first time ever, since I have been here, there has been some enthusiasm across the House for one of my amendments—it really is unprecedented—Amendment 162 on the public order offences. I thought that the Minister might have noted that, because he has noted it every other time there has been a consensus across the House. I think he ought to look again at Amendment 162.
To indicate the muddle one gets into in terms of public order offences and illegality, the police force in Cheshire, where I am from, has put out a film online today saying that misgendering is a crime. That is the police who have said that. It is not a crime, and the point about these things, and the difficulty we are concerned with, is that we are asking people to remove and censor material based on illegality or public order offences when they should not be removing it. That is my concern: censorship.
To conclude, I absolutely agree with the noble Lord, Lord Allan of Hallam, that of course free speech does not mean saying whatever you want wherever you want. That is not free speech, and I am a free speech absolutist. Even on subreddits—if people know what they are—users police each other’s speech. There are norms that are set in place. That is fine with me—that multitude.
My concern is that a state body such as Ofcom will set norms of acceptable free speech that are lower than free speech laws by demanding, on pain of breach of the law, with fines and so on, that these private companies impose their own terms of service. That can set a norm, leading companies to be risk-averse, and establish levels of speech that are very dangerous. For example, when you go into work, you cannot just say anything, but there are people such as Maya Forstater, who said something at work, was disciplined and lost her job, and has just won more than £100,000, because she was expressing her views and opinions. The Equality Act came to her aid and she has now won and been shown to be right. You cannot do that if your words have disappeared and been censored.
I could talk about this for a long time, as noble Lords know. I hope that at least, as the Bill progresses, even when it becomes an Act, the Government could just stamp on its head, “Don’t forget free speech”—but before then, as we end this process, they could come back with some concessions to some of the amendments that have been raised here today. That would be more than just words. I beg leave to withdraw the amendment.
(1 year, 5 months ago)
My Lords, Amendment 172 is exceptionally helpful in putting the priority harms for children on the face of the Bill. It is something that we have asked for; I know the pre-legislative scrutiny committee asked for it too, and it is good to see it there. I want to comment to make sure that we all have a shared understanding of what this means, and that people out there have one as well.
My understanding is that “primary priority” is, in effect, a red light—platforms must not expose children to that content if they are under 18—while “priority” is rather an amber light and, on further review, for some children it will be a red light and for other children it will be a green light, and they can see that content. I am commenting partly having had the experience of explaining all this to my domestic focus group of teenagers, who said, “Really? Are you going to get rid of all this stuff for us?” I said, “No, actually, it is quite different”. It is important in our debate to do that because otherwise there is a risk that the Bill comes into disrepute. I look at something like showing harm to fictional characters. If one has seen the “Twilight” movies, the werewolves do not come off too well, and “Lord of the Rings” is like an orc kill fest.
As regards the point made by the noble Baroness, Lady Harding, about going to the cinema, we allow older teenagers to go to the cinema and see that kind of thing. Post the Online Safety Bill, they will still be able to access it. When we look at something like fictional characters, the Bill is to deal with the harm that is there and is acknowledged regarding people pushing quite vile stuff, whereby characters have been taken out of fiction and a gory image has been created, twisted and pushed to a younger child. That is what we want online providers to do—to prevent an 11 year-old seeing that—not to stop a 16 year-old enjoying the slaughter of werewolves. We need to be clear that that is what we are doing with the priority harms; we are not going further than people think we are.
There are also some interesting challenges around humour and evolving trends. This area will be hard for platforms to deal with. I raised the issue of the Tide pod challenge in Committee. If noble Lords are not familiar, it is the idea that one eats the detergent tablets that one puts into washing machines. It happened some time ago. It was a real harm and that is reflected here in the “do not ingest” provisions. That makes sense but, again talking to my focus group, the Tide pod challenge has evolved and for older teenagers it is a joke about someone being stupid. It has become a meme. One could genuinely say that it is not the harmful thing that it was. Quite often one sees something on the internet that starts out harmful—because kids are eating Tide pods and getting sick—and then over time it becomes a humorous meme. At that point, it has ceased to be harmful. I read it as that filter always being applied. We are not saying, “Always remove every reference to Tide pods” but “At a time when there is evidence that it is causing harm, remove it”. If at a later stage it ceases to be harmful, it may well move into a category where platforms can permit it. It is a genuine concern.
To our freedom of expression colleagues, I say that we do not want mainstream platforms to be so repressive of ordinary banter by teenagers that they leave those regulated mainstream platforms because they cannot speak any more, even when the speech is not harmful, and go somewhere else that is unregulated—one of those platforms that took Ofcom’s letter, screwed it up and threw it in the bin. We do not want that to be an effect of the Bill. Implementation has to be very sensitive to common trends and, importantly, as I know the noble Baroness, Lady Kidron, agrees, has to treat 15, 16 and 17 year-olds very differently from 10, 11 or 12 year-olds. That will be hard.
The other area that jumped out was about encouraging harm through challenges and stunts. That immediately brought “Jackass” to mind, or the Welsh version, “Dirty Sanchez”, which I am sure is a show that everyone in the House watched avidly. It is available on TV; equally, one can go online and watch it. It is people doing ridiculous, dangerous things, is enjoyed by teenagers and is legal and acceptable. My working assumption has to be that we are expecting platforms to distinguish a new dangerous stunt such as the choking game—such things really exist—from a ridiculous “Jackass” or “Dirty Sanchez” stunt, which has existed for years and is accessible elsewhere.
The point that I am making in the round is that it is great to have these priority harms in the Bill, but it is going to be very difficult to implement them in a meaningful way whereby we are catching the genuinely harmful stuff but not overrestricting. But that is the task that we have set Ofcom and the platforms. The more that we can make it clear to people out there what we are expecting to happen, the better. We are not expecting a blanket ban on all ridiculous teenage humour or activity. We are expecting a nuanced response. That is really helpful as we go through the debate.
I just have a question for the noble Lord. He has given an excellent exposé of the other things that I was worried about but, even when he talks about listing the harms, I wonder how helpful it is. Like him, I read them out to a focus group. Is it helpful to write these things, for example emojis, down? Will that not encourage the platforms to over-panic? That is my concern.
On the noble Baroness’s point, that is why I intervened in the debate: so that we are all clear. For priority content, it is an amber light, not a red light. We are not saying, “Just remove all this stuff”; it would be a wrong response to the Bill to say, “It’s a fictional character being slaughtered so remove it”, because now we have removed “Twilight”, “Watership Down” and whatever else. We are saying, “Think very carefully”. If it is one of those circumstances where this is causing harm—they exist; we cannot pretend that they do not—it should be removed. However, the default should not be to remove everything on this list; that is the point I am really trying to make.
(1 year, 6 months ago)
My Lords, I am pleased that the noble Lord, Lord Knight of Weymouth, has given us an opportunity to talk about transparency reports with these amendments, which are potentially a helpful addition to the Bill. Transparency is one of the huge benefits that the legislation may bring. One of the concerns that the public have and that politicians have always had with online platforms is that they appear to be a black box—you cannot see what is going on in them.
In the entire edifice that we are constructing in the Online Safety Bill, there are huge opportunities to change that. The platforms will have to do risk assessments —there are measures in the Bill to make sure that information about these is put out—and they will have to take active steps to mitigate any risks they find. Again, we may get directions and guidance from Ofcom that will explain to the public exactly what is expected of them. The final piece of the jigsaw is the transparency reports that show the outcomes—how a platform has performed and what it has done to meet its obligations in dealing with content and behaviour on its services.
For the record, I previously worked for one of the platforms, and I would have said that I was on the pro-transparency wing of the transparency party inside the company. I believed that it was in the platform’s interest: if you do not tell people what you are doing, they will make things up about you, and what they make up will generally be worse than what you are actually doing. So there are huge advantages to the platforms from being transparent.
The noble Lord, Lord Knight, has picked up on some important points in his Amendment 160B, which talks about making sure that the transparency report is not counterproductive by giving the bad guys information that they could use to ill effect. That is a valid point; it is often debated inside the platforms. Sometimes, I argued furiously with my colleagues in the platforms about why we should disclose information. They would ask, “What about the bad guys?” Sometimes I challenged that, but other times it would have been a genuine and accurate concern. The noble Lord mentioned things such as child sexual abuse material, and we have to recognise that the bad guys are incredibly devious and creative, and if you show them anything that they can use against you to get around your systems, they will try to do that. That is a genuine and valid concern.
The sort of thing that you might put into a transparency report is, for example, whether you have banned particular organisations. I would be in favour of indicating to the public that an organisation is banned, but you can see that the potential impact of that is that all the people you are concerned about would create another organisation with a different name and then get back on to your platform. We need to be alive to those kinds of concerns.
It is also relevant to Amendment 165 and the terms of service that the more granular and detailed your terms of service are, the better they are for public information, but there are opportunities to get around them. Again, we would have that argument internally. I would say, “If we are prohibiting specific hate speech terms, tell people that, and then they won’t use them”. For me, that would be a success, as they are not using those hate speech terms anymore, but, of course, they may then find alternative hate speech terms that they can use instead. You are facing that battle all the time. That is a genuine concern that I hope we will be able to debate. I hope that Ofcom will be able to mitigate that risk by discussing with platforms what these transparency reports should look like. In a sense, we are doing a risk assessment of the transparency report process.
Amendment 229 on effectiveness is really interesting. My experience was that if you did not have a transparency report, you were under huge pressure to produce one, and that once you produced one, nobody was interested. At the risk of embarrassing anyone in the Committee, I would be curious to know how many noble Lords participating in this debate have read the transparency reports already produced by Meta Platforms, Google and others. If they have not read them, they should not be embarrassed, because my experience was that I would talk to regulators and politicians about something they had asked me to come in to talk about, such as hate speech or child sexual abuse material, and I learned to print off the transparency report. I would go in and say, “Well, you know what we are doing; it’s in our transparency report”. They would ask, “What transparency report?”, and I would have to show them. So, having produced a transparency report, every time we published it, we would expect there to be public interest, but little use was made of it. That is not a reason not to do them—as I said, I am very much in favour of doing them—but, on their own, they may not be effective, and Amendment 229 touches on that.
I was trying to think of a collective noun for transparency reports and, seeing as they shed light, I think it may be a “chandelier”. Where we may get the real benefit is if Ofcom can produce a chandelier of transparency reports, taking all the information it gets from the different platforms, processing it and selecting the most relevant information—the reports are often too long for people to work their way through—so that it can enable comparisons. That is really good and it is quite good for the industry that people know that platform A did this, platform B did that, and platform C did something else. They will take note of that, compare with each other and want to get into the best category. It is also critical that Ofcom puts this into user-friendly language, and Ofcom has quite a good record of producing intelligible reports. In the context of Amendment 229, a review process is good. One of the things that might come out of that, thinking ahead, would be Ofcom’s role in producing meta transparency reports, the chandelier that will shed light on what the whole sector is doing.
My Lords, for once I want to be really positive. I am actually very positive about this whole group of amendments because more transparency is essential in what we are discussing. I especially like Amendment 165 from the noble Lord, Lord Stevenson of Balmacara, because it is around terms of service for user-to-user services and ensures that information can be sought on the scope as well as the application. This is important because so much has been put on user-to-user services as well as on terms of service. You need to know what is going on.
I want particularly to compliment Amendment 229, which says that transparency reports should be
“of sufficient quality to enable service users and researchers to make informed judgements”,
et cetera. That is a very elegant way in which to say that they should not be gobbledegook. If we are going to have them, they should be clear and of a quality that we can read. Obviously, we do not want them to be unreadable and full of jargon and legalistic language. I am hoping that that is the requirement.
I shall speak very briefly at this hour, just to clarify as much as anything. It seems important to me that there is a distinction between small platforms and large platforms, but my view has never been that if you are small, you have no potential harms, any more than if you are large, you are harmful. The exception should be the rule. We have to be careful of arbitrary categorisation of “small”. We have to decide who is going to be treated as though they are a large category 1 platform. I keep saying but stress again: do not assume that everybody agrees what significant risk of harm or hateful content is. It is such highly disputed political territory outside the online world and this House that we must recognise that it is not so straightforward.
I am very sympathetic, by the way, to the speeches made about eating disorders and other issues. I see that very clearly, but other categories of speech are disputed and argued over—I have given loads of examples. We end up where it is assumed that the manifestoes of mass shooters appear on these sites, but if you read any of those manifestoes of mass shooters, they will often be quoting from mainstream journalists in mainstream newspapers, the Bible and a whole range of things. Just because they are on 4Chan, or wherever, is not necessarily the problem; it is much more complicated.
I ask the Minister, and the proposers of the amendment, to some extent: would it not be straightforwardly the case that if there is a worry about a particular small platform, it might be treated differently—
I just want to react to the manifestos of mass shooters. While source material such as the Bible is not in scope, I think the manifesto of a shooter is clear incitement to terrorism, and any platform that is comfortable carrying that is problematic in my view—and I hope it would be in the noble Baroness’s view as well.
I was suggesting that we have a bigger problem than it appearing on a small site. It quotes from mainstream media, but it ends up being broadly disseminated and not because it is on a small site. I am not advocating that we all go round carrying the manifestos of mass shooters and legitimising them. I was more making the point that it can be complicated. Would not the solution be that you can make appeals that a small site is treated differently? That is the way we deal with harmful material in general and the way we have dealt with, for example, RT as press without compromising on press freedom. That is the kind of point I am trying to make.
I understand lots of the concerns, but I do not want us to get into a situation where we destroy the potential of all smaller platforms—many of them doing huge amounts of social good, part of civil society and all the rest of it—by treating them as though they are large platforms. They just will not have the resources to survive; that is my whole point.
(1 year, 7 months ago)
Lords Chamber
In talking about individuals and investigations, the noble Baroness reminded me of one class of content where we do have clarity, and that is contempt of court. That is a frequent request. We know that it is illegal in that case because a judge writes to the company and says, “You must not allow this to be said because it is in contempt of court”, but that really is the exception. In most other cases, someone is saying, “I think it is illegal”. In live proceedings, in most cases it is absolutely clear because a judge has told you.
That is very helpful.
I am concerned that removing so-called illegal content for the purpose of complying with the regulatory system covers not only content that has been found illegal by a criminal court but potentially anything that a platform determines could be illegal, and therefore it undermines our own legal system. As I have said, that marks a significant departure from the rule of law. It seems that the state is asking or mandating private companies to make determinations about what constitutes illegality.
The obligations on a platform to determine what constitutes illegality could obviously become a real problem, particularly in relation to limitations on free expression. As we have already heard, the Public Order Act 1986 criminalises, for example, those who stir up hatred through the use of words, behaviour or written material. That is contentious in the law offline. By “contentious”, I mean that it is a matter of difficulty that requires the full rigour of the criminal justice system and an understanding of the whole history of established case law. That is all necessary to secure a conviction under that law for offences of this nature.
Now we appear to be saying that, without any of that, social media companies should make the decision, which is a nerve-racking situation to be in. We have already heard the slippery phrase “reasonable grounds to infer”. If that were the basis on which you were sent to prison—if they did not have to prove that you were guilty but merely had reasonable grounds to infer that you might be, without any evidence—I would be worried, yet reasonable grounds to infer that the content could be illegal is the basis on which we are asking for these decisions to be made. That is significantly below the ordinary burden of proof required to determine that an illegal act has been committed. Under this definition, I fear that platforms will be forced to over-remove and censor what ultimately will be entirely lawful speech.
Can the Minister consider what competency social media companies have to determine what is lawful? We have heard some of the dilemmas from somebody who was in that position—let alone the international complications, as was indicated. Will all these big tech companies have to employ lots of ex-policemen and criminal lawyers? How will it work? It seems to me that there is a real lack of qualifications in that sphere—that is not a criticism, because those people decided to work in big tech, not in criminal law—and yet we are asking them to pursue this. That is a concern.
I will also make reference to what I think are the controversies around government Amendments 136A and 136B to indicate the difficulties of these provisions. They concern illegal activity—such as “assisting unlawful immigration”, illegal entry, human trafficking and similar offences—but I am unsure as to how this would operate. While it is the case that certain ways of entering the UK are illegal, I can envisage a situation where a perfectly legitimate political debate—for example, about the small boats controversy—would be taken down, and people advocating a position against the Government’s new Illegal Migration Bill could be accused of supporting illegality. What exactly will be made illegal by those amendments to the Online Safety Bill?
The noble Baroness, Lady Buscombe, made a fascinating speech about an interesting group of amendments. Because of the way the amendments are grouped, I feel that we have moved to a completely different debate, so I will not go into any detail on this subject. Anonymous trolling, Twitter storms and spreading false information are incredibly unpleasant. I am often the recipient of them—at least once a week—so I know personally that you feel frustrated that people tell lies and your reputation is sullied. However, I do not think that these amendments offer the basis on which that activity should be censored, and I will definitely argue against removing anonymity clauses—but that will be in another group. It is a real problem, but I do not think that the solution is contained in these amendments.
(1 year, 7 months ago)
Lords Chamber
I agree with the noble Baroness, which is precisely why I am suggesting that we need to consider whether privacy should be sacrificed totally in relation to the argument around encryption. It is difficult, and I feel awkward saying it. When I mentioned a silver bullet, I was not talking about the noble Baroness or any other noble Lords present, but I have heard people say that we need this Bill because it will deal with child abuse. In this group of amendments, I am raising the fact that when I have talked about encryption with people outside the House, they have said that we need to do something to tackle the fact that these messages are being sent around. It is not just child abuse; it is also terrorism. There is a range of difficult situations.
Things can go wrong with this, and that is what I was trying to raise. For example, we have a situation where some companies are considering using, or are being asked to use, machine learning to detect nudity. Just last year, a father lost his Google account and was reported to the police for sending a naked photo of his child to the doctor for medical reasons. I am raising these as examples of the problems that we have to consider.
Child abuse is so abhorrent that we will do anything to protect children, but let me say this to the Committee, as this is where the point on privacy lies: children are largely abused in their homes, but as far as I understand it, we are not yet arguing that the state should put CCTV cameras in every home for 24/7 surveillance to stop child abuse. That does not mean that we are glib or that we do not understand the seriousness of child abuse; it means that we understand the privacy of your home. There are specialist services that can intervene when they think there is a problem. I am worried about the possibility of putting a CCTV camera in everyone’s phone, which is the danger of going down this route.
My final point is that these services, such as WhatsApp, will potentially leave the UK. It is important to note that. I agree with the noble Lord, Lord Allan: this is not like threatening to storm off. It is not done in any kind of pique. In putting enormous pressure on these platforms to scan communications, we must remember that they are global platforms. They have a system that works for billions of people all around the world. A relatively small market such as the UK is not something for which they would compromise their billions of other users. As I have explained, they would not put up with it if the Chinese state said, “We have to see people’s messages”. They would just say, “We are encrypted services”. They would walk out of China, and we would all say, “Well done”. There is a real, strong possibility of these services leaving the UK, so we must be very careful.
I just want to add to the exchange between the noble Baronesses, Lady Kidron and Lady Fox. The noble Baroness, Lady Fox, referred to WhatsApp’s position. Again, it is important for the public out there also to understand that if someone sends them illegal material—in particular child sexual abuse material; I agree with the noble Baroness, Lady Kidron, that this is a real problem—and they report it to WhatsApp, which has a reporting system, that material is no longer encrypted. It is sent in clear text and WhatsApp will give it to the police. One of the things I am suggesting is that, rather than driving WhatsApp out of the country, because it is at the more responsible end of the spectrum, we should work with it to improve these kinds of reporting systems and put the fear of God into people so that they know that this issue is not cost-free.
As a coda to that, if you ever receive something like that, you should report it to the police straightaway because, once it is on your phone, you are liable and you have a problem. The message from here should be: if you receive it, report it and, if it is reported, make sure that it gets to the police. We should be encouraging services to put those systems in place.
The noble Lord has concluded with my conclusion, which was to say that those services will be driven out, but not because they are irresponsible around horrible, dangerous messages. They do not read our messages because they are private. However, if we ever receive anything that makes us feel uncomfortable, they should be put under pressure to act. Many of them already do and are actually very responsible, but that is different from demanding that they scan our messages and we breach that privacy.