Online Safety Bill Debate
Lord Moylan (Conservative, Life peer)
Lords Chamber

My Lords, I speak to Amendments 56, 58, 63 and 183 in my name in this group. I have some complex arguments to make, but time is pressing, so I shall attempt to do so as briefly as possible. I am assisted in that by the fact that my noble friend on the Front Bench very kindly explained that the Government are not going to accept my worthless amendments, without actually waiting to hear what I might have said on their behalf.
None the less, I turn briefly to Amendment 183. The Bill has been described, I think justly, as a Twitter-shaped Bill: it does not take proper account of other platforms that operate in different ways. I return to the question of Wikipedia, but also to platforms such as Reddit and other community-driven sites. The requirement for a user-verification tool is of course intended to lead to the possibility that ordinary, unverified users—people like you and me—could have the option to see only that content which comes from those people who are verified.
This is broadly a welcome idea, but combine it with the fact that there are community-driven sites such as Wikipedia, built on contributions from people who are not always verified—sometimes there are very good reasons why they would want to preserve their anonymity—and we end up with the possibility of whole articles having sentences left out and so on. That is not going to happen; the fact is that no site such as Wikipedia can operate on that basis, so it is another of those existential questions that the Government have not properly grappled with and really must address before we come to Third Reading, because this will not work as it stands.
As for my other amendments, they are supportive of and consistent with the idea of user verification, and they recognise—as my noble friend said—that user verification is intended to be a substitute for the abandoned “legal but harmful” clause. I welcome the abandonment of that clause and recognise that this provision is more consistent with individual freedom and autonomy and the idea that we can make choices of our own, but it is still open to abuse by the platforms themselves. The amendments that I have put forward address, first, the question of what should be the default position. My argument is that the default position should be that filtering is not on and that one has to opt into it, because that seems to me the adult proposition, the adult choice.
The danger is that the platforms themselves will either opt you into filtering automatically as the default, so that you do not see what might be called the full-fat milk that is available on the internet, or harass you to do so with constant pop-ups, which we already get. If you go on the Nextdoor website, you constantly get a pop-up saying, “You should switch on notifications”. I do not want notifications; I want to look at the site when I want to look at it. I do not want notifications, but I am constantly being driven into pressing the button that says, “Switch on notifications”. You could have something similar here—users constantly being driven into switching on the filters—because the platforms themselves will be very worried about the possibility that you might see illegal content. We should guard against that.
Secondly, on Amendment 58, if we are going to have user verification—as I say, there is a lot to be said for that approach—it should be applied consistently. If the platform offers to filter out racist abuse, or some other specified kind of abuse, and you opt in to that filtering, it has to filter all racist abuse, not simply racist abuse that comes from people it does not like; likewise, with abuse about gender reassignment, it cannot filter out material from only one side or the other of the argument. The word “consistently” is included here to address that, and to require policies showing that, if you opt in to having something filtered out, it will be done on a proper, consistent and systematic basis, not influenced by the platform’s own political views.
Finally, we come to Amendment 63 and the question of how all this is communicated to users. This amendment would force the platforms to make their policies on how user verification will operate part of their terms and conditions, in a public and visible way, and to ensure that those provisions are applied consistently. It goes a little further than the other amendments, which could stand on their own, by requiring public and consistent policies that people can see. This works with the grain of what the Government are trying to do; I do not see how the Government can object to any of it. There is nothing wrecking here. It is trying to make everything more workable, more transparent and more obvious.
I hope that, in the short time that will elapse between my sitting down and the Minister returning to the Dispatch Box, he will have reflected on the negative remarks he made in his initial speech and will find it possible to accept these amendments now that he has heard the arguments for them.