Online Safety Bill (Second sitting) Debate
Public Bill Committees
Q
Becky Foreman: We have a range of strategies. One thing I would point to is research that we conduct every year, and have done for a number of years, called the Digital Civility Index. It is a set of research that speaks to teens and adults in a number of countries around the world to understand what harms they are concerned about online, whether those harms are increasing or decreasing, and how they vary between different geographies. That is one way in which we are trying to make more data and information available to the general public about the type of harms they might come across online and whether they are increasing or decreasing.
Richard Earley: We have a range of different organisations that we work with in the UK and internationally. One that I would like to draw attention to is the Economist Educational Foundation’s Burnet News Club. We have supported them with increased funding so that they can aim to reach 10% of all state schools with a really immersive and impressive programme that helps young people understand digital literacy, digital numeracy and the media. We are also currently members of the Department for Digital, Culture, Media and Sport’s media literacy taskforce, which has been working to build on the strategy that the Government published.
Overall, there is a really important role for us as platforms to play here. We regularly commission and start new programmes in this space. What is also really important is to have more guidance from Government and civil society organisations that we work with on what is effective, so that we can know where we can put our resources and boost the greatest work.
Katie O'Donovan: Thank you for the question. It is really important. We were disappointed to see the literacy focus lost in the Bill.
We really take the issue seriously. We know there is an absolute responsibility for us when it comes to product, and an absolute responsibility when it comes to policy. Even within the safest products and with the most impressive and on-it parents, people can be exposed to content in ways that are surprising and shocking. That is why you need this holistic approach. We have long invested in a programme that we run with the non-governmental organisation Parent Zone called “Be Internet Legends”. When we developed that, we did it with the PSHE Association to make sure it was totally compliant with the national curriculum. We regularly review that to check that it is actually making a difference. We did some recent research with MORI and got some really good results back.
We used to deliver that programme face to face in schools up and down the country. Obviously, the pandemic stopped that. We went online and while we did not enjoy it quite as much, we were able to reach real scale and it was really effective. Along with doing the assemblies, which are now back in person, we deliver a pack for teachers so they can also take that up at scale. We run similar programmes through YouTube with teenagers. It is absolutely incumbent on us to do more, but it must be part of the debate, because if you rely just on technological solutions, you will end up reducing access to lawful information, with some of the harms still being prevalent and people not having the skills to navigate them.
I am sorry, but I must move on. Minister, I am afraid you only have five minutes.
Q
Professor Clare McGlynn: That is quite challenging not only for pornography platforms but for sex workers, in that if you could participate in pornography but at any time thereafter withdraw your consent, it is difficult to understand how a pornography company and the sex worker would be able to make a significant amount of money. The company would be reluctant to invest because it might have to withdraw the material at any time. In my view, that is quite a challenge. I would not go down that route, because what it highlights is that the industry can be exploitative and that is where the concern comes from. I think there are other ways to deal with an exploitative porn industry and other ways to ensure that the material online has the full consent of participants. You could put some of those provisions into the Bill—for example, making the porn companies verify the age and consent of those who are participating in the videos for them to be uploaded. I think that is a better way to deal with that, and it would ensure that sex workers themselves can still contract to perform in porn and sustain their way of life.
Q
Janaya Walker: A lot of what we have discussed in terms of naming violence against women and girls on the face of the Bill includes children. We know that four in five offences of sexual communication with a child involved girls, and a lot of child abuse material is targeted at girls specifically. The Bill as a whole takes a very gender-neutral approach, which we do not think is helpful; in fact, we think it is quite harmful to efforts to reduce the harm that girls face online.
This goes against the approach taken in the Home Office violence against women and girls strategy and its domestic abuse plan, as well as the gold-standard treaties the UK has signed up to, such as the Istanbul convention, which we signed and have recently committed to ratifying. The convention states explicitly that domestic laws, including on violence against women and girls online, need to take a very gendered approach. Currently, it is almost implied, with references to specific characteristics. We think that in addressing the abuse that girls, specifically, experience, we need to name girls. To clarify, the words “women”, “girls”, “gender” and “sex” do not appear in the Bill, and that is a problem.
Jessica Eagelton: May I add a point that is slightly broader than your question? Another thing that the Bill does not do at the moment is provide for specialist victim support for girls who are experiencing online abuse. There has been some discussion about taking a “polluter pays” approach; where platforms are not compliant with the duties, for example, a percentage of the funds that go to the regulator could go towards victim support services, such as the revenge porn helpline and Refuge’s tech abuse team, that provide support to victims of abuse later on.
Professor Clare McGlynn: I can speak to pornography. Do you want to cover that separately, or shall I do that now?
That is fine.
Professor Clare McGlynn: I know that there was a discussion this morning about age assurance, which obviously targets children’s access to pornography. I would emphasise that age assurance is not a panacea for the problems with pornography. We are so worried about age assurance only because of the content that is available online. The pornography industry is quite happy with age verification measures. It is a win-win for them: they get public credibility by saying they will adopt it; they can monetise it, because they are going to get more data—especially if they are encouraged to develop age verification measures, which of course they have been; that really is putting the fox in charge of the henhouse—and they know that it will be easily evaded.
One of the most recent surveys of young people in the UK was of 16 and 17-year-olds: 50% of them had used a VPN, which avoids age verification controls, and 25% more knew about that, so 75% of those older children knew how to evade age assurance. This is why the companies are quite happy—they are going to make money. It will stop some people stumbling across it, but it will not stop most older children accessing pornography. We need to focus on the content, and when we do that, we have to go beyond age assurance.
You have just heard Google talking about how it takes safety very seriously. Rape porn and incest porn are one click away on Google. They are freely and easily accessible. There are swathes of that material on Google. Twitter is hiding in plain sight, too. I know that you had a discussion about Twitter this morning. I, like many, thought, “Yes, I know there is porn on Twitter,” but I must confess that until doing some prep over the last few weeks, I did not know the nature of that porn. For example, “Kidnapped in the wood”; “Daddy’s little girl comes home from school; let’s now cheer her up”; “Raped behind the bin”—this is the material that is on Twitter. We know there is a problem with Pornhub, but this is what is on Twitter as well.
As the Minister mentioned this morning, Twitter says you have to be 13, and you have to be 18 to try to access much of this content, but you just put in whatever date of birth is necessary—it is that easy—and you can get all this material. It is freely and easily accessible. Those companies are hiding in plain sight in that sense. The age verification and age assurance provisions, and the safety duties, need to be toughened up.
To an extent, I think this will come down to the regulator. Is the regulator going to accept Google’s SafeSearch as satisfying the safety duties? I am not convinced, because of the easy accessibility of the rape and incest porn I have just talked about. I emphasise that incest porn is not classed as extreme pornography, so it is not a priority offence, but there are swathes of that material on Pornhub as well. In one of the studies that I did, we found that one in eight titles on the mainstream pornography sites described sexually violent material, and the incest material was the highest category in that. There is a lot of that around.
Q
Professor Clare McGlynn: In many ways, it is going to be up to the regulator. Is the regulator going to deem that things such as SafeSearch, or Twitter’s current rules about sensitive information—which rely on the host to identify their material as sensitive—satisfy their obligations to minimise and mitigate the risk? That is, in essence, what it will all come down to.
Are they going to take the terms and conditions of Twitter, for example, at face value? Twitter’s terms and conditions do say that they do not want sexually violent material on there, and they even say that it is because they know it glorifies violence against women and girls, but this material is there and does not appear to get swiftly and easily taken down. Even when you try to block it—I tried to block some cartoon child sexual abuse images, which are easily available on there; you do not have to search very hard, they literally come up when you search for porn—it brings up five or six other options in case you want to report them as well, so you end up viewing those too. Just on the cartoon child sexual abuse images, before anyone asks: they are very clever, because they are just under the radar of what is actually a prohibited offence.
It is not necessarily that there is more that the Bill itself could do, although the code of practice would ensure that they have to think about these things more. They have to report on their transparency and their risk assessments: for example, what type of content are they taking down? Who is making the reports, and how many are they upholding? But it is then on the regulator as to what they are going to accept as acceptable, frankly.
Q
Ian Stevenson: I think you have to look at the change you are trying to effect. For many people in the sector, there is a lack of awareness about what happens when the need to consider safety in building features is not put first. Even when you realise how many bad things can happen online, if you do not know what to do about it, you tend not to be able to do anything about it.
If we want to change culture—it is the same for individual organisations as for the sector as a whole—we have to educate people on what the problem is and give them the tools to feel empowered to do something about it. If you educate and empower people, you remove the barrier to change. In some places, an extremely ethical people-centric and safety-focused culture very naturally emerges, but in others, less so. That is precisely where making it a first-class citizen in terms of risk assessment for boards and management becomes so important. When people see management caring about things, that gets pushed out through the organisations.
Q
“the safest place in the world to be online”?
Lulu Freemont: First, I want to outline that there are some strong parts in the Bill that the sector really supports. I think the majority of stakeholders would agree that the objectives are the right ones. The Bill tries to strike a balance between safety, free speech and encouraging innovation and investment in the UK’s digital economy. The approach—risk-based, systems-led and proportionate—is the right one for the 25,000 companies that are in scope. As it does not focus on individual pieces of content, it has the potential to be future-proof and to achieve longer-term outcomes.
The second area in the Bill that we think is strong is the prioritisation of illegal content. We very much welcome the clear definitions of illegal content on the face of the Bill, which are incredibly useful for businesses as they start to think about preparing for their risk assessment on illegal content. We really support Ofcom as the appropriate regulator.
There are some parts of the Bill that need specific focus and, potentially, amendments, to enable it to deliver on those objectives without unintended consequences. I have already mentioned a few of those areas. The first is defining harmful content in primary legislation. We can leave it to codes to identify the interpretations around that, but we need definitions of harmful content so that businesses can start to understand what they need to do.
Secondly, we need clarity that businesses will not be required to monitor every piece of content as a result of the Bill. General monitoring is prohibited in other regions, and we have concerns that the Online Safety Bill is drifting away from those norms. The challenges of general monitoring are well known: it encroaches on individual rights and could result in the over-removal of content. Again, we do not think that the intention is to require companies of all sizes to look at every piece of content on their site, but it might be one of the unintended consequences, so we would like an explicit prohibition of general monitoring on the face of the Bill.
We would like to remove the far-reaching amendment powers of the Secretary of State. We understand the need for technical powers, which are best practice within regulation, but taking those further so that the Secretary of State can amend the regime in such an extreme way to align with public policy is of real concern, particularly to smaller businesses looking to confidently put in place systems and processes. We would like some consideration of keeping senior management liability as it is. Extending it further will only increase the chilling effect it is already having on the UK investment environment. The final area, which I have just spoken about, is clarifying the scope. The business-to-business companies in our membership need clarity that they are not in scope and for that intention to be made clear on the face of the Bill.
We really support the Bill. We think it has the potential to deliver. There are just a few key areas that need to be changed or amended slightly to provide businesses with clarity and reassurances that the policy intentions are being delivered on.
Adam Hildreth: To add to that—Lulu has covered absolutely everything, and I agree—the critical bit is that it is not about monitoring individual pieces of content. Once you have done your risk assessment and put in place your systems, processes, people and technology, that is what people are signing up for. They are not signing up to an end assessment in which, because one piece of harmful content is found to exist, or maybe many, they have failed to abide by what they really signed up to.
That is the worry from my perspective: that people do a full risk assessment, implement all the systems, put in place all the people, technology and processes that they need, do the best job they can and have understood what investment they are putting in, and someone comes along and makes a report to a regulator—Ofcom, in this sense—and says, “I found this piece of content there.” That may expose weaknesses, but the very best risk assessments are ongoing ones anyway, where you do not just put it away in a filing cabinet somewhere and say, “That’s done.” The definitions of online harms and harmful content change on a daily basis, even for the biggest social media platforms; they change all the time. There was talk earlier about child sexual abuse material that appears as cartoons, which would not necessarily be defined by certain legislation as illegal. Hopefully the legislation will catch up, but that is where that risk assessment needs to be made again, and policies may need to be changed and everything else. I just hope we do not get to the point where the individual monitoring of content, or content misses, is the goal of the Bill—that the approach taken to online safety is this overall one.
Q
Dr Rachel O'Connell: There is a history of parental controls, and only 36% of parents use them. Ofcom research consistently says that it is 70%, but in reality, it is lower. When using age verification, the parents are removing the ability to watch everything. It is a platform; they are providing the digital playground. In the same way, when you go on swings and slides, there is bouncy tarmac because you know the kids are going to use them. It is like creating that health and safety environment in a digital playground.
When parents receive a notification that their child wants to access something, there could be a colour-coded nutrition-style thing for social media, livestreaming and so on, and the parents could make an informed choice. It is then up to the platform to maintain that digital playground and run those kinds of detection systems to see if there are any bad actors in there. That is better than parental controls because the parent is consenting and it is the responsibility of the platform to create the safer environment. It is not the responsibility of the parent to look over the child’s shoulder 24/7 when they are online.
Q
Jared Sine: I do not know the specific provisions but I am familiar with the general concept of them. Any time you put something in law, it can either be criminalised or have enforcement behind it, and I think that helps. Ultimately, it will be up to the platforms to come up with innovative technologies or systems such as “Are You Sure?” and “Does This Bother You?” which say that although the law says x, we are going to go beyond that to find tools and systems that make it happen on our platform. Although I think it is clearly a benefit to have those types of provisions in law, it will really come down to the platforms taking those extra steps in the future. We work with our own advisory council, which includes the founder of the #MeToo movement, REIGN and others, who advise us on how to make platforms safer for those things. That is where the real bread gets buttered, so to speak.
Q
Jared Sine: We are proactive about it, and I know our colleagues and friends over at Bumble are proactive about it as well. Our heads of trust and safety both came from the same company—Uber—before coming to us, so I know that they compare notes quite regularly. Because of the way the legislation is set up, there can be codes of conduct applying specifically to online dating, and to the extent that that technology exists, you need to deploy it.
Q
Rhiannon-Faye McDonald: It is incredibly important that we have this education piece. Like Susie said, we cannot rely on technology or any single part of this to solve child sexual abuse, and we cannot rely on the police to arrest their way out of the problem. Education really is the key. That is education in all areas—educating the child in an appropriate way and educating parents. We hold parenting workshops. Parents are terrified; they do not know what to do, what platforms are doing what, or what to do when things go wrong. They do not even know how to talk to children about the issue; it is embarrassing for them and they cannot bring it up. Educating parents is a huge thing. Companies have a big responsibility there. They should have key strategies in place on how they are going to improve education.
Q
I would like to pick up on a point that has arisen in the discussion so far—the point that Susie raised about the risks posed by Meta introducing end-to-end encryption, particularly on the Facebook Messenger service. You have referenced the fact that huge numbers of child sexual exploitation images are identified by scanning those communications, leading to the arrests of thousands of paedophiles each year. You also referenced the fact that when this was temporarily turned off in Europe owing to the privacy laws there—briefly, thankfully—there was a huge loss of information. We will come on to the Bill in a minute, but as technology stands now, if Meta did proceed with end-to-end encryption, would that scanning ability be lost?
Susie Hargreaves: Yes. It would not affect the Internet Watch Foundation, but it would affect the National Centre for Missing and Exploited Children. Facebook, as a US company, has a responsibility to do mandatory reporting to NCMEC, which will be brought in with the Bill in this country. Those millions of images would be lost, as of today, if they brought end-to-end encryption in now.
We are playing “Beat the clock”. I am going to ask for brief answers and brief questions, please. I will take one question from Kim Leadbeater and one from Barbara Keeley.
Q
Ellen Judson: There is absolutely a risk of over-moderation, and of the Bill incentivising over-moderation, particularly because of the very heavy content focus. Even with illegal content, there is a very broad range of content that companies are expected proactively to monitor for, even when the technical systems to identify that content reliably at scale are perhaps not in place. I absolutely understand and share the concern about over-moderation.
Our response would be that we should look to strengthen the freedom of expression duties currently in the Bill. At the moment, there is a quite vague duty to have regard to the importance of freedom of expression, but it is not at all clear what that would actually mean, and what would be expected from the platforms. One change we would want would be for rights—including freedom of expression and privacy—to be included in the online safety objectives, and to establish that part of the purpose of this regime is to ensure that services are being designed to protect and promote human rights, including freedom of expression. We think that would be a way to bring freedom of expression much more into the centre of the regime and the focus of the Bill, without having to have those add-on exemptions after the fact.
Kyle Taylor: And it creates a level playing field—it says, “These rules apply to everyone equally.”
On the second point, authoritarian states—absolutely—but the other area that is really important is fragile democracies. For example, if you look at Hungary, just last week Viktor Orbán said, “You know what you need? Your own media.” If we are setting a standard that says it is totally fine to exempt people in politics and the media, then in fragile democracies where those in power control most aspects of information sharing, we are explicitly saying that it is okay to privilege them over others. That is a very dangerous precedent to set when we have the opportunity to set best global standards here with the Bill.