Online Safety Bill (Third sitting) Debate
Public Bill Committees
Q
Professor Lorna Woods: I think by an overarching risk assessment rather than one that is broken down into the different types of content, because that, in a way, assumes a certain knowledge of the type of content before you can do a risk assessment, so you are into a certain circular mode there. Rather than prejudging types of content, I think it would be more helpful to look at what is there and what the system is doing. Then we could look at what a proportionate response would be—looking, as people have said, at the design and the features. Rather than waiting for content to be created and then trying to deal with it, we could look at more friction at an earlier stage.
If I may add a technical point, I think there is a gap relating to search engines. The draft Bill excluded paid-for content advertising. It seems that, for user-to-user content, this is now in the Bill, bringing it more into line with the current standards for children under the video-sharing platform provisions. That does not apply to search. Search engines have duties only in relation to search content, and search content excludes advertising. That means, as I read it, that search engines would have absolutely no duties to children under their child safety duty in relation to advertising content. You could, for example, target a child with pornography and it would fall outside the regime. I think that is a bit of a gap.
Q
William Moy: No, no, yes. First, no, it is not fair to put that all on the platforms, particularly because—I think this is a crucial thing for the Committee across the Bill as a whole—for anything to be done at internet scale, it has to be able to be done by dumb robots. Whatever the internet companies tell you about the abilities of their technology, it is not magic, and it is highly error-prone. For this duty to be meaningful, it has to be essentially exercised in machine learning. That is really important to bear in mind. Therefore, being clear about what it is going to tackle in a way that can be operationalised is important.
To your second point, it is really important in this day and age to question whether journalistic content and journalists equate to one another. I think this has come up in a previous session. Nowadays, journalism, or what we used to think of as journalism, is done by all kinds of people. That includes the same function of scrutiny and informing others and so on. It is that function that we care about—the passing of information between people in a democracy. We need to protect that public interest function. I think it is really important to get at that. I am sure there are better ways of protecting the public interest in this Bill by targeted protections or specifically protecting freedom of expression in specific ways, rather than these very broad, vague and general duties.
Q
William Moy: No.
William Perrin: At Carnegie, in our earliest work on this in 2018, we were very clear that this Bill should not be a route to regulating the press and media beyond what the social settlement was. Many people are grumpy about that settlement, and many people are happy with it, but it is a classic system in tension. We welcome the Government’s attempt to carve journalism out one way or another, but there is still a great problem in defining journalists and journalism.
I think some of the issues around news provider organisations do give a sense in the Bill of a heavy-duty organisation, not some fly-by-night thing that has been set up to evade the rules. As Will was pointing out, the issue then comes down to individual journalists, who are plying their trade in new ways that the new media allows them to do. I remember many years ago, when I ran a media business, having a surreal meeting at DCMS during Leveson, where I had to explain to them what a blogger was. Sadly, we have not quite yet got that precision of how one achieves the intended effect around, in particular, individual journalists.
Professor Lorna Woods: I emphasise what Mr Moy said about the fact that this is going to have to be a system. It is not a decision on every individual item of content, and it is not about a decision on individual speakers. It is going to be about how the characteristics that we care about—the function of journalism—are recognised in automated systems.
On the drafting of the Bill, I wonder whether there is any overlap between the user-generated content and citizen journalism in clause 16 and the recognition in clause 15 of user-generated content in relation to democratic speech. I am not sure whether one is not a subset of the other.
Q
Professor Lorna Woods: I have to confess that I have not really looked at them in great detail, although I have read them. I do not think they work, but I have not got to a solution because that is actually quite a difficult thing to define.
William Moy: I should declare an interest in clause 15 and the news publisher content exemption, because Full Fact would be covered by that exemption. I do not welcome that; I find it very awkward that we could be fact-checking things and some of the people we are fact-checking would not be covered by the exemption.
It is regrettable that we are asking for those exemptions in the Bill. The Bill should protect freedom of expression for everyone. Given the political reality of that clause, it does not do the job that it tries to do. The reason why is essentially because you can set yourself up to pass the test in that clause very easily. The Minister asked about that in a previous session and recognised that there is probably room to tighten the drafting, and I am very happy to work with his officials and talk about how, if that is Parliament’s political intention, we can do it in as practical a way as possible.
Q
William Perrin: The Bill is a risk-management regime. As part of a risk-management regime, one should routinely identify people who are at high risk and high-risk events, where they intersect and how you assess and mitigate that risk. As someone who was a civil servant for 15 years and has worked in public policy since, I hugely respect the functioning of the election process. At the very extreme end, we have seen hideous events occur in recent years, but there is also the routine abuse of politicians and, to some extent, an attempt to terrorise women politicians off certain platforms, which has been quite grotesque.
I feel that there is a space, within the spirit of the Bill as a risk-management regime, to draw out the particular risks faced by people who participate in elections. They are not just candidates and office holders, as you say, but the staff who administer elections—we saw the terrible abuse heaped on them in recent American elections; let us hope that that does not come across here—and possibly even journalists, who do the difficult job of reporting on elections, which is a fundamental part of democracy.
The best way to address those issues might be to require Ofcom to produce a straightforward code of practice—particularly for large, category 1 platforms—so that platforms regard elections and the people who take part in them as high-risk events and high-harm individuals, and take appropriate steps. One appropriate step would be to do a forward look at what the risks might be and when they might arise. Every year, the BBC produces an elections forward look to help it manage the particular risks of public service broadcasting around elections. Could a platform be asked to produce and publish an elections forward look, discussing with people who take part in elections their experience of the risks that they face and how best to mitigate them in a risk-management regime? That could also involve the National Police Chiefs’ Council, which already produces guidance at each election.
We are sitting here having this discussion in a highly fortified, bomb-proof building surrounded by heavily armed police. I do not think any member of the public would begrudge Members of Parliament and the people who come here that sort of protection. We sometimes hear the argument that MPs should not be recognised as special or get special protection. I do not buy that; no one begrudges the security here. It is a simple step to ask platforms to do a risk assessment that involves potential victims of harm, and to publish it and have a dialogue with those who take part, to ensure that the platforms are safe places for democratic discussion.
Q
William Perrin: The Government have, to their credit, introduced in this Bill offences of sending messages with the intent to harm, but it will take many years for them to work their way through CPS guidance and to establish a body of case law so that it is understood how they are applied. Of course, these cases are heard in magistrates courts, so they do not get reported very well.
One of the reasons we are here discussing this is that the criminal law has failed to provide adequate measures of public protection across social media. If the criminal law and the operation of the police and the CPS worked, we would not need to have this discussion. This discussion is about a civil regulatory regime to make up for the inadequacies in the working of the criminal law, and about making it work a little smoother. We see that in many areas of regulated activity. I would rather get a quicker start by doing some risk assessment and risk mitigation before, in many years’ time, one gets to an effective operational criminal offence. I note that the Government suggested such an offence a few years ago, but I am not quite clear where it got to.
William Moy: To echo Ms Leadbeater’s call for a holistic approach to this, treating as criminal some of the abuse that MPs receive is entirely appropriate. The cost to all of us of women and people of colour being deterred from public life is real and serious. There is also the point that the Bill deals only with personal harms, and a lot of the risk to elections is risk to the democratic system as a whole. You are absolutely right to highlight that that is a gap in what the Bill is doing. We think, certainly from a misinformation point of view, that you cannot adequately address the predictable misinformation and disinformation campaigns around elections simply by focusing on personal harm.
Q
William Moy: Essentially, the tests are such that almost anyone could pass them. Without opening the Bill, you have to have a standards code, which you can make up for yourself, a registered office in the UK and so on. It is not very difficult for a deliberate disinformation actor to pass the set of tests in clause 50 as they currently stand.
Q
Danny Stone: Very much so. You heard earlier about the problems with advertising. I recognise that search services are not the same as user-to-user services, so there does need to be some different thinking. However, at present, they are not required to address legal harms, and the harms are there.
I appeared before the Joint Committee on the draft Bill and talked about Microsoft Bing, which, in its search bar, was prompting people with “Jews are” and then a rude word. You look at “Gays are”, today, and it is prompting people with “Gays are using windmills to waft homosexual mists into your home”. That is from the search bar. The first return is a harmful article. Do the same in Google, for what it’s worth, and you get “10 anti-gay myths debunked.” They have seen this stuff. I have talked to them about it. They are not doing the work to try to address it.
Last night, using Amazon Alexa, I searched “Is George Soros evil?” and the response was “Yes, he is. According to an Alexa Answers contributor, every corrupt political event.” “Are the White Helmets fake?” “Yes, they are set up by an ex-intelligence officer.” The problem with that is that the search prompts—the things that you are being directed to; the systems here—are problematic, because one person could give an answer to Amazon and that prompts the response. The second one, about the White Helmets, was a comment on a website that led Alexa to give that answer.
Search returns are not necessarily covered because, as I say, they are not the responsibility of the internet companies, but the systems that they design for how those things are indexed, and the systems to prevent users going to harmful sites by default, are their responsibility, and at present the Bill does not address that. Something that forces those search companies to have appropriate risk assessments in place for the priority harms that Parliament sets, and to enforce their terms and conditions consistently, would be very wise.
Q
Liron Velleman: These are both pretty dangerous clauses. We are very concerned about what I would probably be kind and call their unintended consequences. They are loopholes that could allow some of the most harmful and hateful actors to spread harm on social media. I will take “journalistic” first and then move on to “democratic”.
A number of companies mentioned in the previous evidence session are outlets that could be media publications just by adding a complaints system to their website. There is a far-right outlet called Urban Scoop that is run by Tommy Robinson. They just need to add a complaints system to their website and then they would be included as a journalist. There are a number of citizen journalists who specifically go to our borders to harass people who are seeking refuge in this country. They call themselves journalists; Tommy Robinson himself calls himself a journalist. These people have been specifically taken off platforms because they have repeatedly broken the terms of service of those platforms, and we see this as a potential avenue for them to make the case that they should return.
We also see mainstream publications falling foul of the terms of service of social media companies. If I take the example of the Christchurch massacre, social media companies spent a lot of time trying to take down both the livestream of the attack in New Zealand and the manifesto of the terrorist, but the manifesto was then put on the Daily Mail website—you could download the manifesto straight from the Daily Mail website—and the livestream was on the Daily Mirror and The Sun’s websites. We would be in a situation where social media companies could take that down from anyone else, but they would not be able to take it down from those news media organisations. I do not see why we should allow harmful content to exist on the platform just because it comes from a journalist.
On “democratic”, it is still pretty unclear what the definition of democratic speech is within the Bill. If we take it to be pretty narrow and just talk about elected officials and candidates, we know that far-right organisations that have been de-platformed from social media companies for repeatedly breaking the terms of service—groups such as Britain First and, again, Tommy Robinson—are registered with the Electoral Commission. Britain First ran candidates in the local elections in 2022 and they are running in the Wakefield by-election, so, by any measure, they are potentially of “democratic importance”, but I do not see why they should be allowed to break terms of service just because they happen to have candidates in elections.
If we take it on a wider scale and say that it is anything of “democratic importance”, anyone who is looking to cause harm could say, “A live political issue is hatred of the Muslim community.” Someone could argue that that or the political debate around the trans community in the UK is a live political debate, and that would allow anyone to go on the platform and say, “I’ve got 60 users and I’ve got something to say on this live political issue, and therefore I should be on the platform,” in order to cause that harm. To us, that is unacceptable and should be removed from the Bill. We do not want a two-tier internet where some people have the right to be racist online, so we think those two clauses should be removed.
Stephen Kinsella: At Clean up the Internet this is not our focus, although the proposals we have made, which we have been very pleased to see taken up in the Bill, will certainly introduce friction. We keep coming back to friction being one of the solutions. I am not wearing this hat today, but I am on the board of Hacked Off, and if Hacked Off were here, I think they would say that the solution—although not a perfect solution—might be to say that a journalist, or a journalistic outlet, will be one that has subjected itself to proper press regulation by a recognised press regulator. We could then possibly take quite a lot of this out of the scope of social media regulation and leave it where I think it might belong, with proper, responsible press regulation. That would, though, lead on to a different conversation about whether we have independent press regulation at the moment.
Q
Danny Stone: I feel quite strongly that they should. I think this is about clauses 39(2) and (5). When they had an exemption last time, we were told they were already regulated, because various newspapers have their own systems, because of IPSO or whatever it might be. There was a written question in the House from Emma Hardy, and the Government responded that they had no data—no assessment of moderator system effectiveness or the harms caused. The Secretary of State said to the DCMS Select Committee that he was confident that these platforms have appropriate moderation policies in place, but was deeply sceptical about IPSO involvement. The Law Commission said that it was not going to give legal exemption to comments boards because they host an abundance of harmful material and abuse, and there are articles in, say, The Times:
“Pro-Kremlin trolls have infiltrated the reader comments on the websites of news organisations, including The Times, the Daily Mail and Fox News, as part of a ‘major influence operation’”.
A number of years ago, we worked—through the all-party parliamentary group against antisemitism, to which we provide the secretariat—on a piece with the Society of Editors on comment moderation on websites, so there have been efforts in the past, but this is a place where there is serious harm caused. You can go on The Sun or wherever now and find comments that will potentially be read by millions of people, so having some kind of appropriate risk assessment, minimum standard or quality assurance in respect of comments boards would seem to be a reasonable step. If it does not get into the Bill, I would in any event urge the Minister to develop some guidance or work with the industry to ensure they have some of those standards in place, but ideally, you would want to lose that carve-out in the Bill.
Yes, sorry. Is there a body that sets a framework around journalistic standards that the Bill could refer to?
Stephen Kinsella: Obviously, there are the regulators. There is IMPRESS and IPSO, at the very least. I am afraid that I do not know the answer; there must also be journalistic trade bodies, but the regulators would probably be the first port of call for me.
Q
Stephen Kinsella: There are a few questions there, obviously. I should say that we are happy with the approach in the Bill. We always felt that focusing on anonymity was the wrong place to start. Instead, we thought that a positive right to be verified, and then a right to screen out replies and posts from unverified accounts, was the way to go.
In terms of who one should make the disclosure to, or who would provide the verification, our concern was always that we did not want to provide another trove of data that the platforms could use to target us with adverts and otherwise monetise. While we have tried to be agnostic on the solution—again, we welcome the approach in the Bill, which is more about principles and systems than trying to pick outcomes—there are third-party providers out there that could provide one-stop verification. Some of them, for instance, rely on the open banking principles. The good thing about the banks is that under law, under the payment services directive and others, we are the owners of our own data. It is a much greyer area whether we are the owners of the data that the social media platforms hold on us, so using that data that the banks have—there is a solution called One ID, for instance—they will provide verification, and you could then use that to open your social media accounts without having to give that data to the platforms.
I saw in the evidence given to you on Tuesday that it was claimed that 80% of users are reluctant to give their data to platforms. We were surprised by that, and so we looked at it. They chose their words carefully. They said users were reluctant to give their data to “certain websites”. What they meant was porn sites. In the polling they were referring to, the question was specifically about willingness to share data with porn sites, and people are, understandably, reluctant to do that. When using open banking or other systems, there are good third-party providers, I would suggest, for verification.