Public Bill Committees
We are now sitting in public and the proceedings are being broadcast. I understand that the Government wish to move a motion to amend the programme order agreed by the Committee on Tuesday. The Football Association is unable to attend and, following the technical difficulties on Tuesday, we will replace it with Barnardo’s.
Ordered,
That the Order of the Committee of 24 May 2022 be amended, in paragraph (2), in the Table, in the entry for Thursday 26 May until no later than 2.55 pm, leaving out “The Football Association” and inserting “Barnardo’s”.—(Chris Philp.)
Before we hear oral evidence, I invite Members to declare any interests in connection with the Bill.
I need to declare an interest, Ms Rees. Danny Stone from the Antisemitism Policy Trust provides informal secretariat in a personal capacity to the all-party parliamentary group on wrestling, which I co-chair.
That is noted. Thank you.
Examination of Witnesses
Mat Ilic, William Moy, Professor Lorna Woods MBE and William Perrin OBE gave evidence.
We will now hear oral evidence from Mat Ilic, chief development officer at Catch22; William Moy, chief executive at Full Fact; and Professor Lorna Woods and William Perrin of the Carnegie UK Trust. Before calling the first Member, I remind all Members that questions should be limited to matters within the scope of the Bill and that we must stick to the timings in the programme order that the Committee agreed. For this session, we have until 12.15 pm. I call Alex Davies-Jones to begin the questioning.
Q
William Perrin: At Carnegie, we saw this problem coming some time ago, and we worked in the other place with Lord McNally on a private Member’s Bill—the Online Harms Reduction Regulator (Report) Bill—that, had it been carried, would have required Ofcom to make a report on a wide range of risks and harms, to inform and fill in the gaps that you have described.
On a point of order, Ms Rees. There is a gentleman taking photographs in the Gallery.
There is no photography allowed here.
William Perrin: Unfortunately, that Bill did not pass and the Government did not quite take the hint that it might be good to do some prep work with Ofcom to provide some early analysis to fill in holes in a framework Bill. The Government have also chosen in the framework not to bring forward draft statutory instruments or to give indications of their thinking in a number of key areas of the Bill, particularly priority harms to adults and the two different types of harms to children. That creates uncertainty for companies and for victims, and it makes the Bill rather hard to scrutinise.
I thought it was promising that the Government brought forward a list of priority offences in schedule 7—I think that is where it is; I get these things mixed up, despite spending hours reading the thing. That was helpful to some extent, but the burden is on the Government to reduce complexity by filling in some of the blanks. It may well be better to table an amendment to bring some of these things into new schedules, as we at Carnegie have suggested—a schedule 7A for priority harms to adults, perhaps, and a 7B and 7C for children and so on—and then start to fill in some of the blanks in the regime, particularly to reassure victims.
Thank you. Does anybody else want to comment?
William Moy: There is also a point of principle about whether these decisions should be made by Government later or through open, democratic, transparent decision making in Parliament.
Q
William Moy: Sure. I should point out—we will need to get to this later—the fact that the Bill is not seriously trying to address misinformation and disinformation at this point, but in that context, we all know that there will be another information incident that will have a major effect on the public. We have lived through the pandemic, when information quality has been a matter of life and death; we are living through information warfare in the context of Ukraine, and more will come. The only response to that in the Bill is in clause 146, which gives the Secretary of State power to direct Ofcom to use relatively weak media literacy duties to respond.
We think that in an open society there should be an open mechanism for responding to information incidents—outbreaks of misinformation and disinformation that affect people’s lives. That should be set out in the roles of the regulator, the Government and internet companies, so that there is a framework that the public understand and that is open, democratic and transparent in declaring a misinformation and disinformation incident, creating proportionate responses to it, and monitoring the effects of those responses and how the incident is managed. At the moment, it largely happens behind closed doors and it involves a huge amount of restricting what people can see and share online. That is not a healthy approach in an open society.
William Perrin: I should add that as recently as April this year, the Government signed up to a recommendation of the Council of Ministers of the Council of Europe on principles for media and communication governance, which said that
“media and communication governance should be independent and impartial to avoid undue influence…discriminatory treatment and preferential treatment of powerful groups, including those with significant political or economic power.”
That is great. That is what the UK has done for 50 to 60 years in media regulation, where there are very few powers for the Secretary of State or even Parliament to get involved in the day-to-day working of communications regulators. Similarly, we have had independent regulation of cinema by the industry since 1913 and regulation of advertising independent of Government, and those systems have worked extremely well. However, this regime—which, I stress, Carnegie supports—goes a little too far in introducing a range of powers for the Secretary of State to interfere with Ofcom’s day-to-day doing of its business.
Clause 40 is particularly egregious, in that it gives the Secretary of State powers of direction over Ofcom’s codes of practice and, very strangely, introduces an almost infinite ability for the Government to keep rejecting Ofcom’s advice—presumably, until they are happy with the advice they get. That is a little odd, because Ofcom has a long track record as an independent, evidence-based regulator, and as Ofcom hinted in a terribly polite way when it gave evidence to this Committee, some of these powers may go a little too far. Similarly, in clause 147, the Secretary of State can give tactical guidance to Ofcom on its exercise of its powers. Ofcom may ignore that advice, but it is against convention that the Secretary of State can give that advice at all. The Secretary of State should be able to give strategic guidance to Ofcom roughly one or one and a half times per Parliament to indicate its priorities. That is absolutely fine, and is in accordance with convention in western Europe and most democracies, but the ability to give detailed guidance is rather odd.
Then, as Mr Moy has mentioned, clause 146, “Directions in special circumstances”, is a very unusual power. The Secretary of State can direct Ofcom to direct companies to make notices about things and can direct particular companies to do things without a particularly high threshold. There just have to be “reasonable grounds to believe”. There is no urgency threshold, nor is there a strong national security threshold in there, or anyone from whom the Secretary of State has to take advice in forming that judgment. That is something that we think can easily be amended down.
Q
William Moy: Absolutely. It is an extraordinary decision in a context where we are just coming through the pandemic, where information quality was such a universal concern, and we are in an information war, with the heightened risk of attempts to interfere in future elections and other misinformation and disinformation risks. It is also extraordinary because of the Minister’s excellent and thoughtful Times article, in which he pointed out that at the moment, tech companies censor legal social media posts at vast scale, and this Bill does nothing to stop that. In fact, the Government have actively asked internet companies to do that censorship—they have told them to do so. I see the Minister looking surprised, so let me quote from BBC News on 5 April 2020:
“The culture secretary is to order social media companies to be more aggressive in their response to conspiracy theories linking 5G networks to the coronavirus pandemic.”
In that meeting, essentially, the internet companies were asked to make sure they were taking down that kind of content from their services. Now, in the context of a Bill where, I think, the Minister and I completely agree about our goal—tackling misinformation in an open society—there is an opportunity for this Bill to be an example to the free world of how open societies respond to misinformation, and a beacon for the authoritarian world as well.
This is the way to do that. First, set out that the Bill must cover misinformation and disinformation. We cannot leave it to internet companies, with their political incentives, their commercial convenience and their censoring instincts, to do what they like. The Bill must cover misinformation and set out an open society response to it. Secondly, we must recognise that the open society response is about empowering people. The draft Bill had a recognition that we need to modernise the media literacy framework, but we do not have that in this Bill, which is really regrettable. It would be a relatively easy improvement to create a modern, harms and safety-based media literacy framework in this Bill, empowering users to make their own decisions with good information.
Then, the Bill would need to deal with three main threats to freedom of expression that threaten the good information in our landscape. Full Fact as a charity exists to promote informed and improved public debate, and in the long run we do that by protecting freedom of expression. Those three main threats are artificial intelligence, the internet companies and our own Government, and there are three responses to them. First, we must recognise that the artificial intelligence that internet companies use is highly error-prone, and it is a safety-critical technology. Content moderation affects what we can all see and share; it affects our democracy, it affects our health, and it is safety-critical. In every other safety-critical industry, that kind of technology would be subject to independent third-party open testing. Cars are crashed against walls, water samples are taken and tested, even sofas are sat on thousands of times to check they are safe, but internet companies are subject to no third-party independent open scrutiny. The Bill must change that, and the crash test dummy test is the one I would urge Members to apply.
The second big threat, as I said, is the internet companies themselves, which too often reach for content restrictions rather than free speech-based and information-based interventions. There are lots of things you can do to tackle misinformation in a content-neutral way—creating friction in sharing, asking people to read a post before they share it—or you can tackle misinformation by giving people information, rather than restricting what they can do; fact-checking is an example of that. The Bill should say that we prefer content-neutral and free speech-based interventions to tackle misinformation to content-restricting ones. At the moment the Bill does not touch that, and thus leaves the existing system of censorship, which the Minister has warned about, in place. That is a real risk to our open society.
The final risk to freedom of expression, and therefore to tackling misinformation, is the Government themselves. I have just read you an example of a Government bringing in internet companies to order them around by designating their terms and conditions and saying certain content is unacceptable. That content then starts to get automatically filtered out, and people are stopped from seeing it and sharing it online. That is a real risk. Apart from the fact that they press-released it, that is happening behind closed doors. Is that acceptable in an open democratic society, or do we think there should be a legal framework governing when Governments can seek to put pressure on internet companies to affect what we can all see and share? I think that should be governed by a clear legislative framework that sets out whether those functions need to exist, what they are and what their parameters are. That is just what we would expect for any similarly sensitive function that Government carry out.
Q
Mat Ilic: Thank you so much. The impact of social media in children’s lives has been a feature of our work since 2015, if not earlier; we have certainly researched it from that period. We found that it was a catalyst to serious youth violence and other harms. Increasingly, we are seeing it as a primary issue in lots of the child exploitation and missing cases that we deal with—in fact, in half of the cases we have seen in some of the areas where we work, it featured as the primary reason rather than as a coincidental one. The online harm is the starting point rather than a conduit.
In relation to the legislation, all our public statements on this have been informed by user research. I would say that is one of the central principles to think through in the primary legislation—a safety-by-design focus. We have previously called this the toy car principle, which means any content or product that is designed with children in mind needs to be tested in a way that is explicitly for children, as Mr Moy talked about. It needs to have some age-specific frameworks built in, but we also need to go further than that by thinking about how we might raise the floor, rather than necessarily trying to tackle explicit harms. Our point is that we need to remain focused on online safety for children and the drivers of online harm and not the content.
The question is, how can that be done? One way is the legal design requirement for safety, and how that might play out, as opposed to having guiding principles that companies might adopt. Another way is greater transparency on how companies make particular decisions, and that includes creating or taking down content that pertains to children. I want to underline the point about empowerment for children who have been exposed to or experience harm online, or offline as a result of online harm. That includes some kind of recourse to be able to bring forward cases where complaints, or other issues, were not taken seriously by the platforms.
If you read the terms and conditions of any given technology platform, which lots of young people do not do on signing up—I am sure lots of adults do not do that either—you realise that even with the current non-legislative frameworks that the companies deploy to self-regulate, there is not enough enforcement in the process. For example, if I experience some kind of abuse and complain, it might never be properly addressed. We would really chime on the enforcement of the regulatory environment; we would try to raise the floor rather than chase specific threats and harms with the legislation.
Q
Professor Lorna Woods: I think by an overarching risk assessment rather than one that is broken down into the different types of content, because that, in a way, assumes a certain knowledge of the type of content before you can do a risk assessment, so you are into a certain circular mode there. Rather than prejudging types of content, I think it would be more helpful to look at what is there and what the system is doing. Then we could look at what a proportionate response would be—looking, as people have said, at the design and the features. Rather than waiting for content to be created and then trying to deal with it, we could look at more friction at an earlier stage.
If I may add a technical point, I think there is a gap relating to search engines. The draft Bill excluded paid-for content advertising. It seems that, for user-to-user content, this is now in the Bill, bringing it more into line with the current standards for children under the video-sharing platform provisions. That does not apply to search. Search engines have duties only in relation to search content, and search content excludes advertising. That means, as I read it, that search engines would have absolutely no duties to children under their children’s safety duty in relation to advertising content. You could, for example, target a child with pornography and it would fall outside the regime. I think that is a bit of a gap.
Q
William Moy: No, no, yes. First, no, it is not fair to put that all on the platforms, particularly because—I think this is a crucial thing for the Committee across the Bill as a whole—for anything to be done at internet scale, it has to be able to be done by dumb robots. Whatever the internet companies tell you about the abilities of their technology, it is not magic, and it is highly error-prone. For this duty to be meaningful, it has to be essentially exercised in machine learning. That is really important to bear in mind. Therefore, being clear about what it is going to tackle in a way that can be operationalised is important.
To your second point, it is really important in this day and age to question whether journalistic content and journalists equate to one another. I think this has come up in a previous session. Nowadays, journalism, or what we used to think of as journalism, is done by all kinds of people. That includes the same function of scrutiny and informing others and so on. It is that function that we care about—the passing of information between people in a democracy. We need to protect that public interest function. I think it is really important to get at that. I am sure there are better ways of protecting the public interest in this Bill by targeted protections or specifically protecting freedom of expression in specific ways, rather than these very broad, vague and general duties.
Q
William Moy: No.
William Perrin: At Carnegie, in our earliest work on this in 2018, we were very clear that this Bill should not be a route to regulating the press and media beyond what the social settlement was. Many people are grumpy about that settlement, and many people are happy with it, but it is a classic tension in the system. We welcome the Government’s attempt to carve journalism out one way or another, but there is still a great problem in defining journalists and journalism.
I think some of the issues around news provider organisations do give a sense in the Bill of a heavy-duty organisation, not some fly-by-night thing that has been set up to evade the rules. As Will was pointing out, the issue then comes down to individual journalists, who are plying their trade in new ways that the new media allows them to do. I remember many years ago, when I ran a media business, having a surreal meeting at DCMS during Leveson, where I had to explain to them what a blogger was. Sadly, we have not quite yet got that precision of how one achieves the intended effect around, in particular, individual journalists.
Professor Lorna Woods: I emphasise what Mr Moy said about the fact that this is going to have to be a system. It is not a decision on every individual item of content, and it is not about a decision on individual speakers. It is going to be about how the characteristics that we care about—the function of journalism—are recognised in automated systems.
On the drafting of the Bill, I wonder whether there is any overlap between the user-generated content and citizen journalism in clause 16 and the recognition in clause 15 of user-generated content in relation to democratic speech. I am not sure whether one is not a subset of the other.
Q
Professor Lorna Woods: I have to confess that I have not really looked at them in great detail, although I have read them. I do not think they work, but I have not got to a solution because that is actually quite a difficult thing to define.
William Moy: I should declare an interest in clause 15 and the news publisher content exemption, because Full Fact would be covered by that exemption. I do not welcome that; I find it very awkward that we could be fact-checking things and some of the people we are fact-checking would not be covered by the exemption.
It is regrettable that we are asking for those exemptions in the Bill. The Bill should protect freedom of expression for everyone. Given the political reality of that clause, it does not do the job that it tries to do. The reason why is essentially because you can set yourself up to pass the test in that clause very easily. The Minister asked about that in a previous session and recognised that there is probably room to tighten the drafting, and I am very happy to work with his officials and talk about how, if that is Parliament’s political intention, we can do it in as practical a way as possible.
Q
William Perrin: The Bill is a risk-management regime. As part of a risk-management regime, one should routinely identify people who are at high risk and high-risk events, where they intersect and how you assess and mitigate that risk. As someone who was a civil servant for 15 years and has worked in public policy since, I hugely respect the functioning of the election process. At the very extreme end, we have seen hideous events occur in recent years, but there is also the routine abuse of politicians and, to some extent, an attempt to terrorise women politicians off certain platforms, which has been quite grotesque.
I feel that there is a space, within the spirit of the Bill as a risk-management regime, to draw out the particular risks faced by people who participate in elections. They are not just candidates and office holders, as you say, but the staff who administer elections—we saw the terrible abuse heaped on them in recent American elections; let us hope that that does not come across here—and possibly even journalists, who do the difficult job of reporting on elections, which is a fundamental part of democracy.
The best way to address those issues might be to require Ofcom to produce a straightforward code of practice—particularly for large, category 1 platforms—so that platforms regard elections and the people who take part in them as high-risk events and high-harm individuals, and take appropriate steps. One appropriate step would be to do a forward look at what the risks might be and when they might arise. Every year, the BBC produces an elections forward look to help it manage the particular risks of public service broadcasting around elections. Could a platform be asked to produce and publish an elections forward look, discussing with people who take part in elections their experience of the risks that they face and how best to mitigate them in a risk-management regime? That could also involve the National Police Chiefs’ Council, which already produces guidance at each election.
We are sitting here having this discussion in a highly fortified, bomb-proof building surrounded by heavily armed police. I do not think any member of the public would begrudge Members of Parliament and the people who come here that sort of protection. We sometimes hear the argument that MPs should not be recognised as special or get special protection. I do not buy that; no one begrudges the security here. It is a simple step to ask platforms to do a risk assessment that involves potential victims of harm, and to publish it and have a dialogue with those who take part, to ensure that the platforms are safe places for democratic discussion.
Q
William Perrin: The Government have, to their credit, introduced in this Bill offences of sending messages with the intent to harm, but it will take many years for them to work their way through CPS guidance and to establish a body of case law so that it is understood how they are applied. Of course, these cases are heard in magistrates courts, so they do not get reported very well.
One of the reasons we are here discussing this is that the criminal law has failed to provide adequate measures of public protection across social media. If the criminal law and the operation of the police and the CPS worked, we would not need to have this discussion. This discussion is about a civil regulatory regime to make up for the inadequacies in the working of the criminal law, and about making it work a little smoother. We see that in many areas of regulated activity. I would rather get a quicker start by doing some risk assessment and risk mitigation before, in many years’ time, one gets to an effective operational criminal offence. I note that the Government suggested such an offence a few years ago, but I am not quite clear where it got to.
William Moy: To echo Ms Leadbeater’s call for a holistic approach to this, treating as criminal some of the abuse that MPs receive is entirely appropriate. The cost to all of us of women and people of colour being deterred from public life is real and serious. There is also the point that the Bill deals only with personal harms, and a lot of the risk to elections is risk to the democratic system as a whole. You are absolutely right to highlight that that is a gap in what the Bill is doing. We think, certainly from a misinformation point of view, that you cannot adequately address the predictable misinformation and disinformation campaigns around elections simply by focusing on personal harm.
Q
William Moy: Essentially, the tests are such that almost anyone could pass them. Without opening the Bill, you have to have a standards code, which you can make up for yourself, a registered office in the UK and so on. It is not very difficult for a deliberate disinformation actor to pass the set of tests in clause 50 as they currently stand.
Q
William Moy: This would need a discussion. I have not come here with a draft amendment—frankly, that is the Government’s job. There are two areas of policy thinking over the last 10 years that provide the right seeds and the right material to go into. One is the line of thinking that has been done about public benefit journalism, which has been taken up in the House of Lords Communications and Digital Committee inquiry and the Cairncross review, and is now reflected in recent Charity Commission decisions. Part of Full Fact’s charitable remit is as a publisher of public interest journalism, which is a relatively new innovation, reflecting the Cairncross review. If you take that line of thinking, there might be some useful criteria in there that could be reflected in this clause.
I hate to mention the L-word in this context, but the other line of thinking is the criteria developed in the context of the Leveson inquiry for what makes a sensible level of self-regulation for a media organisation. Although I recognise that that is a past thing, there are still useful criteria in that line of thinking, which would be worth thinking about in this context. As I said, I would be happy to sit down, as a publisher of journalism, with your officials and industry representatives to work out a viable way of achieving your political objectives as effectively as possible.
William Perrin: Such a definition, of course, must satisfy those who are in the industry, so I would say that these definitions need to be firmly industry-led, not simply by the big beasts—for whom we are grateful, every day, for their incredibly incisive journalism—but by this whole spectrum of new types of news providers that are emerging. I have mentioned my experience many years ago of explaining what a blog was to DCMS.
The news industry is changing massively. I should declare an interest: I was involved in some of the work on public-benefit journalism in another capacity. We have national broadcasters, national newspapers, local papers, local broadcasters, local bloggers and local Twitter feeds, all of which form a new and exciting news media ecosystem, and this code needs to work for all of them. I suppose that you would need a very deep-dive exercise with those practitioners to ensure that they fit within this code, so that you achieve your policy objective.
Q
We heard some commentary earlier—I think from Mr Moy—about the need to address misinformation, particularly in the context of a serious situation such as the recent pandemic. I think you were saying that there was a meeting, in March or April 2020, for the then Secretary of State and social media firms to discuss the issue and what steps they might take to deal with it. You said that it was a private meeting and that it should perhaps have happened more transparently.
Do you accept that the powers conferred in clause 146, as drafted, do, in fact, address that issue? They give the Secretary of State powers, in emergency situations—a public health situation or a national security situation, as set out in clause 146(1)—to address precisely that issue of misinformation in an emergency context. Under that clause, it would happen in a way that was statutory, open and transparent. In that context, is it not a very welcome clause?
William Moy: I am sorry to disappoint you, Minister, but no, I do not accept that. The clause basically attaches to Ofcom’s fairly weak media literacy duties, which, as we have already discussed, need to be modernised and made harms-based and safety-based.
However, more to the point, the point that I was trying to make is that we have normalised a level of censorship that was unimaginable in previous generations. A significant part of the pandemic response was, essentially, some of the main information platforms in all of our day-to-day lives taking down content in vast numbers and restricting what we can all see and share. We have started to treat that as a normal part of our lives, and, as someone who believes that the best way to inform debate in an open society is freedom of expression, which I know you believe, too, Minister, I am deeply concerned that we have normalised that. In fact, you referred to it in your Times article.
I think that the Bill needs to step in and prevent that kind of overreach, as well as the triggering of unneeded reactions. In the pandemic, the political pressure was all on taking down harmful health content; there was no countervailing pressure to ensure that the systems did not overreach. We therefore found ridiculous examples, such as police posts warning of fraud around covid being taken down by the internet companies’ automated systems because those systems were set to, essentially, not worry about overreach.
That is why we are saying that we need, in the Bill, a modern, open-society approach to misinformation. That starts with it recognising misinformation in the first place. That is vital, of course. It should then go on to create a modern, harms-based media literacy framework, and to prefer content-neutral and free-speech-based interventions over content-restricting interventions. That was not what was happening during the pandemic, and it is not what will happen by default. It takes Parliament to step in and get away from this habitual, content-restriction reaction and push us into an open-society-based response to misinformation.
William Perrin: Can I just add that it does not say “emergency”? It does not say that at all. It says “reasonable grounds” that “present a threat”—not a big threat—under “special circumstances”. We do not know what any of that means, frankly. With this clause, I get the intent—that it is important for national security, at times, to send messages—but this has not been done in the history of public communication before. If we go back through 50 or 60 years, even 70 years, of Government communication, the Government have bought adverts and put messages transparently in place. Apart from D-notices, the Government have never sought to interfere in the operations of media companies in quite the way that is set out here.
If this clause is to stand, it certainly needs a much higher threshold before the Secretary of State can act—such as who they are receiving advice from. Are they receiving advice from directors of public health, from the National Police Chiefs’ Council or from the national security threat assessment machinery? I should declare an interest; I worked in there a long time ago. It needs a higher threshold and greater clarity, but you could dispense with this by writing to Ofcom and saying, “Ofcom, you should have regard to these ‘special circumstances’. Why don’t you take actions that you might see fit to address them?”
Many circumstances, such as health or safety, are national security issues anyway if they reach a high enough level for intervention, so just boil it all down to national security and be done with it.
Professor Lorna Woods: If I may add something about the treatment of misinformation more generally, I suspect that if it is included in the regime, or if some subset such as health misinformation is included in the regime, it will be under the heading of “harmful to adults”. I am picking up on the point that Mr Moy made that the sorts of interventions will be more about friction and looking at how disinformation is incentivised and spread at an earlier stage, rather than reactive takedown.
Unfortunately, the measures that the Bill currently envisages for “harmful but legal” seem to focus more on the end point of the distribution chain. We are talking about taking down content and restricting access. Clause 13(4) gives the list of measures that a company could employ in relation to priority content harmful to adults.
I suppose that you could say, “Companies are free to take a wider range of actions”, but my question then is this: where does it leave Ofcom, if it is trying to assess compliance with a safety duty, if a company is doing something that is not envisaged by the Act? For example, taking bot networks offline, if that is thought to be a key factor in the spreading of disinformation—I see that Mr Moy is nodding. A rational response might be, “Let’s get rid of bot networks”, but that, as I read it, does not seem to be envisaged by clause 13(4).
I think that is an example of a more general problem. With “harmful but legal”, we would want to see less emphasis on takedown and more emphasis on friction, but the measures listed as envisaged do not go that far up the chain.
Minister, we have just got a couple of minutes left, so perhaps this should be your last question.
Q
“(b) restricting users’ access to the content;
(c) limiting the recommendation or promotion of the content;
(d) recommending or promoting the content.”
I would suggest that those actions are pretty wide, as drafted.
One of the witnesses—I think it was Mr Moy—talked about what were essentially content-agnostic measures to impede virality, and used the word “friction”. Can you elaborate a little bit on what you mean by that in practical terms?
William Moy: Yes, I will give a couple of quick examples. WhatsApp put a forwarding limit on WhatsApp messages during the pandemic. We knew that WhatsApp was a vector through which misinformation could spread, because forwarding is so easy. They restricted it to, I think, six forwards, and then you were not able to forward the message again. That is an example of friction. Twitter has a note whereby if you go to retweet something but you have not clicked on the link, it says, “Do you want to read the article before you share this?” You can still share it, but it creates that moment of pause for people to make a more informed decision.
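The content-neutral “friction” measures described here—a cap on forwarding and a read-before-sharing prompt—can be illustrated with a short, purely hypothetical sketch. The limit value, data model and function names below are assumptions for illustration only, not a description of how any platform actually implements these features.

```python
# Illustrative sketch of two content-neutral friction measures, assuming a
# simple in-memory message model. Nothing here reflects a real platform's API.

from dataclasses import dataclass

FORWARD_LIMIT = 5  # assumed threshold; real platforms choose their own value


@dataclass
class Message:
    text: str
    forward_count: int = 0


def try_forward(message: Message) -> bool:
    """Allow a forward only while the message is under the forwarding cap."""
    if message.forward_count >= FORWARD_LIMIT:
        return False  # friction: the message cannot spread further by forwarding
    message.forward_count += 1
    return True


def share_prompt(link_opened: bool) -> str:
    """Return a nudge rather than a block: the user can still choose to share."""
    if not link_opened:
        return "Do you want to read the article before sharing it?"
    return ""
```

Both measures leave the content itself untouched: the first slows how far a message can travel, and the second informs the user at the moment of sharing, which is the content-neutral, free speech-based character of the interventions being discussed.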
Q
William Moy: But that is not what I am suggesting you do. I am suggesting you say that this Parliament prefers interventions that are content-neutral or free speech-based, and that inform users and help them make up their own minds, to interventions that restrict what people can see and share.
Q
William Moy: I do not think it is any more challenging than most of the risk assessments, codes of practice and so on, but I am willing to spend as many hours as it takes to talk through it with you.
Order. I am afraid that we have come to the end of our allotted time for questions. On behalf of the Committee, I thank the witnesses for all their evidence.
Examination of Witnesses
Danny Stone MBE, Stephen Kinsella OBE and Liron Velleman gave evidence.
We will now hear from Danny Stone, chief executive of the Antisemitism Policy Trust; Stephen Kinsella, founder of Clean up the Internet; and Liron Velleman, political organiser at HOPE not hate. We have until 1 pm for this panel.
Q
Danny Stone: First, thank you for having me today. We have made various representations about the problems that we think there are with small, high-harm platforms. The Bill creates various categories, and the toughest risk mitigation is on the larger services. They are defined by their size and functionality. Of course, if I am determined to create a platform that will spread harm, I may look at the size threshold that is set and make a platform that falls just below it, in order to spread harm.
It is probably important to set out what this looks like. The Community Security Trust, which is an excellent organisation that researches antisemitism and produces incident figures, released a report called “Hate Fuel” in June 2020. It looked at the various small platforms and highlighted that, in the wake of the Pittsburgh antisemitic murders, there had been 26 threads, I think, with explicit calls for Jews to be killed. One month prior to that, in May 2020, a man called Payton Gendron found footage of the Christchurch attacks. Among this was legal but harmful content, which included the “great replacement” theory, GIFs and memes, and he went on a two-year journey of incitement. A week or so ago, he targeted and killed 10 people in Buffalo. One of the things that he posted was:
“Every Time I think maybe I shouldn’t commit to an attack I spend 5 min of /pol/”—
which is a thread on the small 4chan platform—
“then my motivation returns”.
That is the kind of material that we are seeing: legal but harmful material that is inspiring people to go out and create real-world harm. At the moment, the small platforms do not have that additional regulatory burden. These are public-facing message boards, and this is freely available content that is promoted to users. The risks of engaging with such content are highest. There is no real obligation, and there are no consequences. It is the most available extremism, and it is the least regulated in respect of the Bill. I know that Members have raised this issue and the Minister has indicated that the Government are looking at it, but I would urge that something is done to ensure that it is properly captured in the Bill, because the consequences are too high if it is not.
Q
Danny Stone: I think there are various options. Either you go for a risk-based approach—categorisation—or you could potentially amend it so that it is not just size and functionality. You would take into account other things—for example, characteristics are already defined in the Bill, and that might be an option for doing it.
Q
Liron Velleman: From the perspective of HOPE not hate, most of our work targeting and looking at far-right groups is spent on some of those smaller platforms. I think that the original intention of the Bill, when it was first written, may have been a more sensible way of looking at the social media ecospace: larger platforms could host some of this content, while other platforms were just functionally not ready to host large, international far-right groups. That has changed radically, especially during the pandemic.
Now, there are so many smaller platforms—whether small means hundreds of thousands, tens of thousands or even smaller than that—that are almost as easy to use as some of the larger platforms we all know so well. Some of the content on those smaller platforms is definitely the most extreme. There are mechanisms utilised by the far-right—not just in the UK, but around the world—to move that content and move people from some of the larger platforms, where they can recruit, on to the smaller platforms. To have a situation in which that harmful content is not looked at as stringently as content on the larger platforms is a miscategorisation of the internet.
Q
Liron Velleman: We have seen this similarly with the proscription of far-right terrorist groups in other legislation. It was originally quite easy to say that, eventually, the Government would proscribe National Action as a far-right terror group. What has happened since is that aliases and very similar organisations are set up, and it then takes months or sometimes years for the Government to be able to proscribe those organisations. We have to spend our time making the case as to why those groups should be banned.
We can foresee a similar circumstance here. We turn around and say, “Here is BitChute” or hundreds of other platforms that should be banned. We spend six months saying to the Government that it needs to be banned. Eventually, it is, but then almost immediately an offshoot starts. We think that Ofcom should have delegated power to make sure that it is able to bring those platforms into category 1 almost immediately, if the categorisations stay as they are.
Danny Stone: It could serve a notice and ensure that platforms prepare for that. There will, understandably, be a number of small platforms that are wary and do not want to be brought into that category, but some of them will need to be brought in because of the risk of harm. Let us be clear: a lot of this content may well—probably will—stay on the platform, but, at the very least, they will be forced to risk assess for it. They will be forced to apply their terms and conditions consistently. It is a step better than what they will be doing without it. Serving a notice to try to bring them into that regime as quickly as possible and ensure that they are preparing measures to comply with category 1 obligations would be helpful.
Q
Danny Stone: Very much so. You heard earlier about the problems with advertising. I recognise that search services are not the same as user-to-user services, so there does need to be some different thinking. However, at present, they are not required to address legal harms, and the harms are there.
I appeared before the Joint Committee on the draft Bill and talked about Microsoft Bing, which, in its search bar, was prompting people with “Jews are” and then a rude word. You look at “Gays are”, today, and it is prompting people with “Gays are using windmills to waft homosexual mists into your home”. That is from the search bar. The first return is a harmful article. Do the same in Google, for what it’s worth, and you get “10 anti-gay myths debunked.” They have seen this stuff. I have talked to them about it. They are not doing the work to try to address it.
Last night, using Amazon Alexa, I searched “Is George Soros evil?” and the response was “Yes, he is. According to an Alexa Answers contributor, every corrupt political event.” “Are the White Helmets fake?” “Yes, they are set up by an ex-intelligence officer.” The problem with that is that the search prompts—the things that you are being directed to; the systems here—are problematic, because one person could give an answer to Amazon and that prompts the response. The second one, about the White Helmets, was a comment on a website that led Alexa to give that answer.
Search returns are not necessarily covered because, as I say, they are not the responsibility of the internet companies, but the systems that they design as to how those things are indexed and the systems to prevent them going to harmful sites by default are their responsibility, and at present the Bill does not address that. Something that forces those search companies to have appropriate risk assessments in place for the priority harms that Parliament sets, and to enforce those terms and conditions consistently, would be very wise.
Q
Liron Velleman: These are both pretty dangerous clauses. We are very concerned about what I would probably be kind and call their unintended consequences. They are loopholes that could allow some of the most harmful and hateful actors to spread harm on social media. I will take “journalistic” first and then move on to “democratic”.
A number of companies mentioned in the previous evidence session are outlets that could be media publications just by adding a complaints system to their website. There is a far-right outlet called Urban Scoop that is run by Tommy Robinson. They just need to add a complaints system to their website and then they would be included as a journalist. There are a number of citizen journalists who specifically go to our borders to harass people who are seeking refuge in this country. They call themselves journalists; Tommy Robinson himself calls himself a journalist. These people have been specifically taken off platforms because they have repeatedly broken the terms of service of those platforms, and we see this as a potential avenue for them to make the case that they should return.
We also see mainstream publications falling foul of the terms of service of social media companies. If I take the example of the Christchurch massacre, social media companies spent a lot of time trying to take down both the livestream of the attack in New Zealand and the manifesto of the terrorist, but the manifesto was then put on the Daily Mail website—you could download the manifesto straight from the Daily Mail website—and the livestream was on the Daily Mirror and The Sun’s websites. We would be in a situation where social media companies could take that down from anyone else, but they would not be able to take it down from those news media organisations. I do not see why we should allow harmful content to exist on the platform just because it comes from a journalist.
On “democratic”, it is still pretty unclear what the definition of democratic speech is within the Bill. If we take it to be pretty narrow and just talk about elected officials and candidates, we know that far-right organisations that have been de-platformed from social media companies for repeatedly breaking the terms of service—groups such as Britain First and, again, Tommy Robinson—are registered with the Electoral Commission. Britain First ran candidates in the local elections in 2022 and they are running in the Wakefield by-election, so, by any measure, they are potentially of “democratic importance”, but I do not see why they should be allowed to break terms of service just because they happen to have candidates in elections.
If we take it on a wider scale and say that it is anything of “democratic importance”, anyone who is looking to cause harm could say, “A live political issue is hatred of the Muslim community.” Someone could argue that that or the political debate around the trans community in the UK is a live political debate, and that would allow anyone to go on the platform and say, “I’ve got 60 users and I’ve got something to say on this live political issue, and therefore I should be on the platform,” in order to cause that harm. To us, that is unacceptable and should be removed from the Bill. We do not want a two-tier internet where some people have the right to be racist online, so we think those two clauses should be removed.
Stephen Kinsella: At Clean up the Internet this is not our focus, although the proposals we have made, which we have been very pleased to see taken up in the Bill, will certainly introduce friction. We keep coming back to friction being one of the solutions. I am not wearing this hat today, but I am on the board of Hacked Off, and if Hacked Off were here, I think they would say that the solution—although not a perfect solution—might be to say that a journalist, or a journalistic outlet, will be one that has subjected itself to proper press regulation by a recognised press regulator. We could then possibly take quite a lot of this out of the scope of social media regulation and leave it where I think it might belong, with proper, responsible press regulation. That would, though, lead on to a different conversation about whether we have independent press regulation at the moment.
Q
Danny Stone: I feel quite strongly that they should. I think this is about clauses 39(2) and (5). When they had an exemption last time, we were told they were already regulated, because various newspapers have their own systems, because of IPSO or whatever it might be. There was a written question in the House from Emma Hardy, and the Government responded that they had no data—no assessment of moderator system effectiveness or the harms caused. The Secretary of State said to the DCMS Select Committee that he was confident that these platforms have appropriate moderation policies in place, but was deeply sceptical about IPSO involvement. The Law Commission said that it was not going to give legal exemption to comments boards because they host an abundance of harmful material and abuse, and there are articles in, say, The Times:
“Pro-Kremlin trolls have infiltrated the reader comments on the websites of news organisations, including The Times, the Daily Mail and Fox News, as part of a ‘major influence operation’”.
A number of years ago, we worked—through the all-party parliamentary group against antisemitism, to which we provide the secretariat—on a piece with the Society of Editors on comment moderation on websites, so there have been efforts in the past, but this is a place where there is serious harm caused. You can go on The Sun or wherever now and find comments that will potentially be read by millions of people, so having some kind of appropriate risk assessment, minimum standard or quality assurance in respect of comments boards would seem to be a reasonable step. If it does not get into the Bill, I would in any event urge the Minister to develop some guidance or work with the industry to ensure they have some of those standards in place, but ideally, you would want to lose that carve-out in the Bill.
Yes, sorry. Is there a body that sets a framework around journalistic standards that the Bill could refer to?
Stephen Kinsella: Obviously, there are the regulators. There is IMPRESS and IPSO, at the very least. I am afraid that I do not know the answer; there must also be journalistic trade bodies, but the regulators would probably be the first port of call for me.
Q
Stephen Kinsella: There are a few questions there, obviously. I should say that we are happy with the approach in the Bill. We always felt that focusing on anonymity was the wrong place to start. Instead, we thought that a positive right to be verified, and then a right to screen out replies and posts from unverified accounts, was the way to go.
In terms of who one should make the disclosure to, or who would provide the verification, our concern was always that we did not want to provide another trove of data that the platforms could use to target us with adverts and otherwise monetise. While we have tried to be agnostic on the solution—again, we welcome the approach in the Bill, which is more about principles and systems than trying to pick outcomes—there are third-party providers out there that could provide one-stop verification. Some of them, for instance, rely on the open banking principles. The good thing about the banks is that under law, under the payment services directive and others, we are the owners of our own data. It is a much greyer area whether we are the owners of the data that the social media platforms hold on us, so using that data that the banks have—there is a solution called One ID, for instance—they will provide verification, and you could then use that to open your social media accounts without having to give that data to the platforms.
I saw in the evidence given to you on Tuesday that it was claimed that 80% of users are reluctant to give their data to platforms. We were surprised by that, and so we looked at it. They chose their words carefully. They said users were reluctant to give their data to “certain websites”. What they meant was porn sites. In the polling they were referring to, the question was specifically about willingness to share data with porn sites, and people are, understandably, reluctant to do that. When using open banking or other systems, there are good third-party providers, I would suggest, for verification.
Q
Stephen Kinsella: Very much not. We have conducted polling using YouGov. Compassion in Politics did polling using Opinium. The figures vary slightly, but at a minimum, two in three citizens—often four out of five citizens—are very willing to be verified and would like the opportunity to be verified if it meant that they could then screen out replies from unverified accounts. I would say there is a weight of evidence on this from the polling. By the way, we would be very happy to conduct further polling, and we would be very happy to consult with the Committee on the wording of the questions that should be put, if that would be helpful, but I think we are quite confident what the response would be.
Liron Velleman: We set two clear tests for the situation on anonymity on platforms. First, will it harm the ability of some groups in society to have freedom of speech online? We are concerned that verification could harm the ability of LGBT people and domestic abuse survivors to use the platforms in the full ways they wish to. For example, if a constituent who is, say, a domestic abuse survivor or LGBT wished to get in touch with you but was not verified on the platform, that would be one restriction that you would not be able to get around if you chose to change your settings.
Q
Liron Velleman: That could be very possible. One of our key questions is whether verification would mean that you had to use your real name on the platform or whether you had to verify that you were a person who was using a platform, but could then use a pseudonym on the front face of the website. I could sign up and say, “Here is my ID for the platform verification”, but if I did not wish to use my name, in order to protect my actual identity publicly on the platform, I could choose not to but still be verified as a real person. It would be different to having to have my name, Liron Velleman, as the user for Facebook or Twitter or any other platform.
The second test for us is whether it is going to make a real difference to reducing online harm. With a lot of the harm we see, people are very happy to put their names to the racism, misogyny, sexism and homophobia that they put online. We would not want to see a huge focus on anonymity, whereby we “ended” anonymity online, and yet online harm continued to propagate. We believe it would still continue, and we would not want people to be disappointed that that had not completely solved the issue. Of course, there are a huge number of anonymous accounts online that carry out abuse. Anything we can do to reduce that is welcome, but we do not see it as the silver bullet that could end racism online.
Stephen Kinsella: Obviously, we have not suggested that there is a silver bullet. We are talking about responding to what users want. A lot of users want the ability to say that they do not want to interact with people who are not using their real name. That does not mean that one could not envisage other levels of filter. You could have a different filter that said, “I am happy to interact with people who are verified to be real, but I don’t require that they have given their name”. The technology exists there, certainly to provide a menu of solutions. If you could only have one, we happen to think ours is the best, and that the evidence shows it would reduce a significant amount of disinformation spread and, certainly, abuse.
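The arrangement the witnesses describe—verification handled by a third party, with the platform holding only a verified/unverified attestation, users free to keep a pseudonym, and a filter to screen out unverified replies—could be sketched roughly as follows. This is a hedged illustration under stated assumptions: every class, field and function name is hypothetical, and the provider stand-in is not any real verification service’s API.

```python
# Sketch of third-party verification with pseudonym support and a reply filter.
# Identity checks stay with the external verifier; the platform stores only a
# yes/no attestation and an optional display name. All names are assumptions.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Attestation:
    account_id: str
    is_verified_person: bool
    display_name: Optional[str] = None  # pseudonym allowed; a real name is not required


class ThirdPartyVerifier:
    """Stands in for an external provider; identity documents never reach the platform."""

    def verify(self, account_id: str, wants_pseudonym: bool) -> Attestation:
        # ...identity checks would happen here, outside the platform...
        return Attestation(
            account_id=account_id,
            is_verified_person=True,
            display_name=None if wants_pseudonym else "Chosen real name",
        )


@dataclass
class Reply:
    author: Attestation
    text: str


def visible_replies(replies: List[Reply], screen_unverified: bool) -> List[Reply]:
    """The user-facing control: optionally screen out replies from unverified accounts."""
    if not screen_unverified:
        return replies
    return [r for r in replies if r.author.is_verified_person]
```

The point of the split is that the verification signal, not the identity data, is what the platform needs in order to offer the screening option, which is consistent with the concern about not handing platforms another trove of personal data.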
Danny Stone: I think one issue will be Ofcom’s ability to ensure consistency in policing. It is very difficult, actually, to find out where crimes have happened and who an individual is. Sometimes, the police have the power to compel the revelation of identity. The way the platforms respond is, I think, patchy, so Ofcom’s position in its guidance here will be pretty important.
Thank you. We have time for a question from Navendu Mishra before we bring the Minister in.
Q
Danny Stone: If we are talking about the “legal but harmful” provisions, I would reflect what the witnesses from the Carnegie Trust—who are brilliant—were saying earlier. There is a principle that has been established in the Bill to list priority illegal harms, and there is no reason why priority harms against adults should not be listed. Racism and misogyny are not going anywhere. The Joint Committee suggested leaning into existing legislation, and I think that is a good principle. The Equality Act established protected characteristics, so I think that is a start—it is a good guide. I think there could be further reference to the Equality Act in the Bill, including in relation to anonymity and other areas.
Q
Stephen Kinsella: Yes. We think they are extremely helpful. We welcome what we see in clause 14 and clause 57. There is a very clear right to be verified, and an ability to screen out interactions with unverified accounts, which is precisely what we asked for. The Committee will be aware that we have put forward some further proposals. I would really hesitate to describe them as amendments; I see them as shading-in areas—we are not trying to add anything. We think it would be helpful, for instance, that when someone is verified, their verification status should also be visible to other users. We think that should be implicit, because it is meant to act as a signal to others as to whether someone is verified. We hope that status would be visible, and we have suggested the addition of just a few words into clause 14 on that.
We think that the Bill would benefit from a further definition of what it means by “user identity verification”. We have put forward a proposal on that. It is such an important term that I think it would be helpful to have it as a defined term in clause 189. Finally, we have suggested a little bit more precision on the things that Ofcom should take into account when dealing with platforms. I have been a regulatory lawyer for nearly 40 years, and I know that regulators often benefit from having that sort of clarity. There is going to be negotiation between Ofcom and the platforms. If Ofcom can refer to a more detailed list of the factors it is supposed to take into account, I think that will speed the process up.
One of the reasons we particularly welcomed the structure of the Bill is that there is no wait for detailed codes of conduct, because these are duties that will apply immediately. I hope Ofcom is working on the guidance already, but the guidance could come out pretty quickly. Then there would be the process of—maybe negotiating is the wrong word—to-and-fro with the platforms. I would be very reluctant to take too much on trust. I do not mean on trust from the Government; I mean on trust from the platforms—I saw the Minister look up quickly then. We have confidence in Government; it is the platforms we are a little bit wary of. I heard the frustration expressed on Tuesday.
indicated assent.
Stephen Kinsella: I think you said, “If platforms care about the users, why aren’t they already implementing this?” Another Member, who is not here today, said, “Why do they have to be brought kicking and screaming?” Yet, every time platforms were asked, we heard them say, “We will have to wait until we see the detail of—”, and then they would fill in whatever thing is likely to come last in the process. So we welcome the approach. Our suggestions are very modest and we are very happy to discuss them with you.
Q
Danny, we have had some fairly extensive discussions on the question of small but toxic platforms such as 4chan and BitChute—thank you for coming to the Department to discuss them. I heard your earlier response to the shadow Minister, but do you accept that those platforms should be subject to duties in the Bill in relation to content that is illegal and content that is already harmful to children?
Danny Stone: Yes, that is accurate. My position has always been that that is a good thing. The extent and the nature of the content that is harmful to adults on such platforms—you mentioned BitChute but there are plenty of others—require an additional level of regulatory burden and closer proximity to the regulator. Those platforms should have to account for it and say, “We are the platforms; we are happy that this harm is on our platform and”—as the Bill says—“we are promoting it.” You are right that it is captured to some degree; I think it could be captured further.
Q
“proportionate systems and processes…to ensure that…content of democratic”—
or journalistic—
“importance is taken into account”.
That is not an absolute protection; it is simply a requirement to take into account and perform a proportionate and reasonable balancing exercise. Is that not reasonable?
Liron Velleman: I have a couple of things to say on that. First, we and others in civil society have spent a decade trying to de-platform some of the most harmful actors from mainstream social media companies. What we do not want to see, after the Bill becomes an Act, is a series of major test cases whose outcome we cannot predict, in which it is left to either the courts or the social media companies to decide for themselves how much regard to give those exemptions alongside all the other clauses.
Secondly, one of our main concerns is the time it takes for some of that content to be removed. If there is an expedited process for complaints to be made, and journalistic content remains on the platform for a period of time until the platform is able to take it down, that protection could stretch far beyond genuinely journalistic or democratically important content. Again, using the earlier examples, it does not take long for content such as a livestream of a terrorist attack to be up on the Sun or the Daily Mirror websites, and for lots of people to modify that video to bypass content moderation, so that it can then be shared, used to recruit new terrorists, allow copycat attacks to happen, and end up in the worst sewers of the internet. Any friction that stops platforms from being able to take down some of that harm is of particular concern to us.
Finally, as we heard on Tuesday, social media platforms—I am not sure I would agree with much of what they say about the Bill, but I think this is true—do not really understand what they are meant to do with these clauses. Some of them are talking about flowcharts and whether this is a point-scoring system that says, “You get plus one for being a journalist, but minus two for being a racist.” I am not entirely sure that every platform will exercise the same level of regard. With some of the better-faith actors in the social media space, we have successfully taken down huge reams of the most harmful content and moved it away from where millions of people can see it to where only tens of thousands can see it. We do not want in any way to open up the risk that hundreds of people could argue that they should be back on platforms when they are currently not there.
Q
Danny Stone: My take on this—I think people have misunderstood the Bill—is that it ultimately creates a regulated marketplace of harm. As a user, you get to determine how harmful a platform you wish to engage with—that is ultimately what it does. I do not think that it enforces content take-downs, except in relation to illegal material. It is about systems, and in some places, as you have heard today, it should be more about systems, introducing friction, risk-assessing and showing the extent to which harm is served up to people. That has its problems.
The only other thing on free speech is that we sometimes take too narrow a view of it. People are crowded out of spaces, particularly minority groups. If I, as a Jewish person, want to go on 4chan, it is highly unlikely that I will get a fair hearing there. I will be threatened or bullied out of that space. Free speech has to apply across the piece; it is not limited. We need to think about those overlapping harms when it comes to human rights—not just free speech but freedom from discrimination. We need to be thinking about free speech in its widest context.
Q
Stephen Kinsella: I agree entirely with what Danny was saying. Of course, we would say that our proposals have no implications for free speech. What we are talking about is the freedom not to be shouted at—that is really what we are introducing.
On disinformation, we did some research in the early days of our campaign that showed that a vast amount of the misinformation and disinformation around the 5G covid conspiracy was spread and amplified by anonymous or unverified accounts, so they play a disproportionate role in disseminating that. They also play a disproportionate role in disseminating abuse, and I think you may have a separate session with Kick It Out and the other football bodies. They have some very good research that shows the extent to which abusive language is from unverified or anonymous accounts. So, no, we do not have any free speech concerns at Clean up the Internet.
Q
Liron Velleman: We are satisfied that the Bill adequately protects freedom of speech. Our key view is that, if people are worried that it does not, beefing up the universal protections for freedom of speech should be the priority, instead of what we believe are potentially harmful exemptions in the Bill. We think that freedom of speech for all should be protected, and we very much agree with what Danny said—that the Bill should be about enhancing freedom of speech. There are so many communities that do not use social media platforms because of the harm that exists currently on platforms.
On children, the Bill should not be about limiting freedom of speech, but a large amount of our work covers the growth of youth radicalisation, particularly in the far right, which exists primarily online and which can then lead to offline consequences. You just have to look at the number of arrests of teenagers for far-right terrorism, and so much of that comes from the internet. Part of the Bill is about moderating online content, but it also serves to protect against some of the offline consequences of what exists on the platforms. We would hope that, if people are looking to strengthen freedom of speech, it is as a universal principle across the Bill, and not for some groups but not others.
Good. Thank you. I hope the Committee is reassured by those comments on the freedom of speech question.
Q
Danny Stone: I think that a media literacy strategy is really important. There is, for example, UCL data on the lack of knowledge of the word “antisemitism”: 68% of nearly 8,000 students were unfamiliar with the term’s meaning. Dr Tom Harrison has discussed cultivating cyber-phronesis—this was also in an article by Nicky Morgan in the “Red Box” column some time ago—which is a method of building practical knowledge over time to make the right decisions when presented with a moral challenge. We are not well geared up as a society—I am looking at my own kids—to educate young people about their interactions, about what it means when they are online in front of that box and about to type something, and about what might be received back. I have talked about some of the harms people might be directed to, even through Alexa. Some kind of wider strategy would be very appropriate: one that goes beyond what is already there from Ofcom, which, during the Joint Committee process, the Government said already has its media literacy requirements, and which, as you heard earlier, updates those requirements to make them more fit for purpose for the modern age.
Stephen Kinsella: I echo that. We also think that that would be welcome. When we talk about media literacy, we often find ourselves with the platforms throwing all the obligation back on to the users. Frankly, that is one of the reasons why we put forward our proposal, because we think that verification is quite a strong signal. It can tell you quite a lot about how likely it is that what you are seeing or reading is going to be true if someone is willing to put their name to it. Seeing verification is just one contribution. We are really talking about trying to build or rebuild trust online, because that is what is seriously lacking. That is a system and design failure in the way that these platforms have been built and allowed to operate.
Q
Liron Velleman: If the Bill is seeking to make the UK the safest place to be on the internet, it seems the obvious place to put in something about media literacy. I completely agree with what Danny said earlier: we would also want to ensure specifically—although I am sure this already exists in some other parts of Ofcom and Government business—that there is much greater media literacy for adults as well as children. There are lots of conversations about how children understand use of the internet, but what we have seen, especially during the pandemic, is things like community Facebook groups, which used to be about bins or a fair going on this weekend, becoming home to the worst excesses of harmful content. People have seen conspiracy theories, and that is where we have seen some of the big changes in how the far right and other hateful groups operate, in terms of being able to use some of those platforms. That is because of a lack of media literacy not just among children, but among the adult population. I would definitely encourage that being included in the Bill, as well as anywhere else, so that we can remove some of those harms.
Danny Stone: I think it will need further funding, beyond what has already been announced, to ensure that there is proper provision. That might put a smile on the faces of some Department for Education officials, who looked so sad during some of the consultation process. If you are going to roll this out across the country and make it fit for purpose, it is going to cost a lot of money.
Thank you. As there are no further questions from Members, I thank the witnesses for their evidence. That concludes this morning’s sitting.
Ordered, That further consideration be now adjourned. —(Steve Double.)