The Chair

One moment, please. I am conscious of the fact that we are going to run out of time. I am not prepared to allow witnesses to leave without their having had a chance to say what they want to say. Ms Foreman, Ms O’Donovan, is there anything you want to comment on from what you have heard so far? If you are happy, that is fine; I just want to make sure you are not being short-changed.

Becky Foreman: No.

Katie O'Donovan: No, I look forward to the next question.

Kirsty Blackman (Aberdeen North) (SNP)

Q Given the size of Facebook, a lot of our questions will be focused on it—not that you guys do not have very large platforms, but the risks with social media are larger. You mentioned, Richard, that three in every 10,000 views are hate speech. If three in every 10,000 things I said were hate speech, I would be arrested. Do you not think that, given the incredibly high number of views there are on Facebook, there is much more you need to do to reduce the amount of hate speech?

Richard Earley: So, reducing that number—the prevalence figure, as we call it—is the goal that we set our engineers and policy teams, and it is what we are devoted to doing. On whether it is a high number, I think we are quite rare among companies of our size in providing that level of transparency about how effective our systems are, and so to compare whether the amount is high or low, you would require additional transparency from other companies. That is why we really welcome the part of the Bill that allows Ofcom to set standards for what kinds of transparency actually are meaningful for people.

We have alighted on the figure of prevalence, because we think it is the best way for you and the public to hold us to account for how we are doing. As I said, that figure of three in every 10,000 has declined from six in every 10,000 about 12 months ago. I hope the figure continues to go down, but it is not just a matter of what we do on our platform. It is about how all of us in society function and the regulations you will all be creating to help support the work we do.
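To make the metric concrete: prevalence, as described here, is simply the share of content views that were views of violating content, usually expressed per 10,000 views. The sketch below illustrates that arithmetic only; the function name and sample figures are assumptions for illustration, not Meta's published methodology.

```python
def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Estimated views of violating content per 10,000 total content views."""
    if total_views == 0:
        return 0.0
    return violating_views / total_views * 10_000

# Illustrative figures only: the decline described above, from roughly 6 to 3 per 10,000 views.
print(prevalence_per_10k(6_000, 10_000_000))  # 6.0
print(prevalence_per_10k(3_000, 10_000_000))  # 3.0
```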

Kirsty Blackman

Q I would like to follow up with a question about responding to complaints. The complaints process is incredibly important. Reports need to be made and Facebook needs to respond to those reports. The Centre for Countering Digital Hate said that it put in 100 complaints and that 51 did not get any response from Facebook. It seems as though there is a systemic issue with a lack of response to complaints.

Richard Earley: I do not know the details of that study's methodology. What I can tell you is that every time anyone reports something on Facebook or Instagram, they get a response into their support inbox. We do not put the response directly into your Messenger inbox or IG Direct inbox, because very often when people report something, they do not want to be reminded of what they have seen among messages from their friends and family. Unfortunately, sometimes people do not know about the support inbox and so they miss the response. That could be what happened there, but every time somebody reports something on one of our platforms, they get a response.

Kirsty Blackman

Q Does the response just say, “Thanks for your report”?

Richard Earley: No. I want to be very constructive here. I should say that some of the concerns that are raised around this date from several years ago. I will accept that five or 10 years ago, the experience on our platforms was not this comprehensive, but in the last few years, we have really increased the transparency we give to people. When you submit something and report it for a particular violation, we give you a response that explains the action we took. If we removed it, we would explain what piece of our community standards it broke. It also gives you a link to see that section of our policy so you can understand it.

That is one way we have tried to increase the transparency we give to users. I think there is a lot more we could be doing. I could talk about some of the additional transparency steps we are taking around the way that our algorithms recommend content to people. Those are, again, all welcome parts of the Bill that we look forward to discussing further with Ofcom.

Kirsty Blackman

Q One of the things that has been recommended by a number of charities is increasing cross-platform and cross-company work to identify and take action on emerging threats. Do you think there would be the level of trust necessary for cross-platform co-operation with your competitors in the light of reports that, for example, Facebook employed somebody to put out negative things about TikTok in the US? Do you think that cross-platform working will work in that environment?

Richard Earley: Yes; I think it is already working, in fact. Others on the panel mentioned a few areas in which we have been collaborating in terms of open-sourcing some of the technology we have produced. A few years ago, we produced a grooming classifier—a technology that allows people to spot potentially inappropriate interactions between adults and children—and we open-sourced that and enabled it to be used and improved on by anyone else who is building a social media network.

A number of other areas, such as PhotoDNA, have already been mentioned. An obvious one is the Global Internet Forum to Counter Terrorism, which is a forum for sharing examples of known terrorist content so that those examples can be removed from across the internet. All those areas have been priorities for us in the past. A valuable piece of the Bill is that Ofcom—from what I can see from the reports that it has been asked to make—will do a lot of work to understand where there are further opportunities for collaboration among companies. We will be very keen to continue being involved in that.

Kirsty Blackman

Q I have a question for Katie on the algorithms that produce suggestions when you begin to type. It has been raised with me and in the evidence that we have received that when you begin to type, you might get a negative suggestion. If somebody types in, “Jews are”, the algorithm might come up with some negative suggestions. What has Google done about that?

Katie O'Donovan: We are very clear that we want the auto-suggestion, as we call it, to be a helpful tool that helps you find the information that you are looking for quicker—that is the core rationale behind the search—but we really do not want it to perpetuate hate speech or harm for protected individuals or wider groups in society. We have changed the way that we use that auto-complete, and it will not auto-complete to harmful suggestions. That is a live process that we review and keep updated. Sometimes terminology, vernacular or slang change, or there is a topical focus on a particular group of people, so we keep it under review. But by our policy and implementation, those auto-suggestions should very much not be happening on Google search.

Kirsty Blackman

Q Would it be technically possible for all of the protected characteristics, for example, to have no auto-complete prompts come up?

Katie O'Donovan: That is an excellent question about where you do not want protections and safety measures to diminish what individual users can find. Take the protected characteristic of being Jewish, for example: we see it as really important to remove the ability to surface hate speech. But if you did the same for a Jewish cookbook, Jewish culture or Jewish history and removed everything, you would really minimise the amount of content that people could access.

The Bill is totally vital and will be incredibly significant for internet access in the UK, but that is where it is really important to get the balance and nuance right. Asking an algorithm to do something quite bluntly might look at first glance like it will really improve safety, but when you dig into it, you end up with the available information being much less sophisticated, less impactful and less complete, which I think nobody really wants—either for the user or for those protected groups.

Kirsty Blackman

Q Would it not be easier to define all the protected characteristics and have a list of associated words than to define every possible instance of hate speech in relation to each?

Katie O'Donovan: The way we do it at the moment is through algorithmic learning. That is the most efficient way to do it because we have millions of different search terms, a large number of which we see for the very first time every single day on Google search. We rarely define things with static terms. We use our search rater guidelines—a guide of about 200 pages—to determine how those algorithms work and make sure that we have a dynamic ability to restrict them.

That means that you do not achieve perfection, and there will be changes and new topical uses that we perhaps did not anticipate—we make sure that we have enough data coming in to adjust to that. That is the most efficient way of doing it, while making sure that it has the nuance to stop the bad autocomplete suggestions but still give access to the great content that we want people to get to.
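For illustration, the kind of filtering being described could be sketched as a learned scoring model applied to candidate completions rather than a static blocklist. This is only a toy sketch under that assumption; `score_harm` stands in for a trained classifier and is not Google's actual system or API.

```python
from typing import Callable, List

def filter_autocomplete(
    candidates: List[str],
    score_harm: Callable[[str], float],
    threshold: float = 0.8,
) -> List[str]:
    """Keep only completions a (hypothetical) model scores below a harm threshold.

    The threshold illustrates the balance discussed above: too aggressive a cut-off
    suppresses benign queries about a group as well as hateful ones.
    """
    return [c for c in candidates if score_harm(c) < threshold]

# Toy stand-in for a learned model (purely illustrative).
def toy_score(query: str) -> float:
    return 0.9 if query.endswith("are bad") else 0.1

print(filter_autocomplete(["jewish cookbook recipes", "neighbours are bad"], toy_score))
# ['jewish cookbook recipes']
```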

The Chair

Thank you very much. Ms Foreman, do you want to add anything to that? You do not have to.

Becky Foreman: I do not have anything to add.

--- Later in debate ---
The Chair

Ms Walker?

Janaya Walker: Some of these discussions almost reiterate what I was saying earlier about the problematic nature of this, in that so much of what companies are going to be directed to do will be tied only to the specific schedule 7 offences. There have been lots of discussions about how you respond to some harms that reach a threshold of criminality and others that do not, but that really contrasts with the best practice approach to addressing violence against women and girls, which is really trying to understand the context and all of the ways that it manifests. There is a real worry among violence against women and girls organisations about the minimal response to content that is harmful to adults and children, but will not require taking such a rigorous approach.

Having the definition of violence against women and girls on the face of the Bill allows us to retain those expectations on providers as technology changes and new forms of abuse emerge, because the definition is there. It is VAWG as a whole that we are expecting the companies to address, rather than a changing list of offences that may or may not be captured in criminal law.

Kirsty Blackman

Q Why is it important that we have this? Is this a big thing? What are you guys actually seeing here?

Jessica Eagelton: I can respond to that in terms of what we are seeing as a provider. Technology-facilitated domestic abuse is an increasing form of domestic abuse: technology is providing perpetrators with increasing ways to abuse and harass survivors. What we are seeing on social media is constant abuse, harassment, intimate image abuse, monitoring and hacking of accounts, but when it comes to the responses we are getting from platforms at the moment, while I acknowledge that there is some good practice, the majority experience of survivors is that platforms are not responding sufficiently to the tech abuse they are experiencing.

Our view is that the Bill could be a really good opportunity for survivors of domestic abuse to have greater protections online, so that they are not forced to come offline. At the moment, some of the options being given to survivors are to block the perpetrator—which in some cases has minimal impact, because they can easily set up new fake accounts—or to come offline completely. First, that is not a solution that allows that person to maintain contact, stay online and take part in public debate. Secondly, it can actually escalate risk in some cases, because a perpetrator could resort to in-person forms of abuse. If we do not make some of these changes—I am thinking in particular about mandating a VAWG code of practice, and looking at schedule 7 and including controlling and coercive behaviour—the Bill is going to be a missed opportunity. Women and survivors have been waiting long enough, and we need to take this opportunity.

Janaya Walker: If I could add to that, as Jessica has highlighted, there is the direct harm to survivors in terms of the really distressing experience of being exposed to these forms of harm, or the harm they experience offline being exacerbated online, but this is also about indirect harm. We need to think about the ways in which the choices that companies are making are having an impact on the extent to which violence against women and girls is allowed to flourish.

As Jessica said, it impacts our ability to participate in online discourse, because we often see a mirroring online of what happens offline, in the sense that the onus is often on women to take responsibility for keeping themselves safe. That is the status quo we see offline, in terms of the decisions we make about what we are told to wear or where we should go as a response to violence against women and girls. Similarly, online, the onus is often on us to come offline or put our profiles on private, to take all those actions, or to follow up with complaints to various different companies that are not taking action. There is also something about the wider impact on society as a whole by not addressing this within the Bill.

Kirsty Blackman

Q How does the proposed code of practice—or, I suppose, how could the Bill—tackle intersectionality of harms?

Janaya Walker: This is a really important question. We often highlight the fact that, as I have said, violence against women and girls often intersects with other forms of discrimination. For example, we know from research that EVAW conducted with Glitch during the pandemic that black and minoritised women and non-binary people experience a higher proportion of abuse. Similarly, research done by Amnesty International shows that black women experience harassment at a rate 84% higher than that experienced by their white counterparts. It is a real focal point. When we think about the abuse experienced, we see the ways that people’s identities are impacted and how structural discrimination emerges online.

What we have done with the code of practice is try to introduce requirements for the companies to think about things through that lens, so having an overarching human rights and equalities framework and having the Equality Act protected characteristics named as a minimum. We see in the Bill quite vague language when it comes to intersectionality; it talks about people being members of a certain group. We do not have confidence that these companies, which are not famed for their diversity, will interpret that in a way that we regard as robust—thinking very clearly about protected characteristics, human rights and equalities legislation. The vagueness in the Bill is quite concerning. The code of practice is an attempt to be more directive on what we want to see and how to think through issues in a way that considers all survivors, all women and girls.

Professor Clare McGlynn: I wholly agree. The code of practice is one way by which we can explain in detail those sorts of intersecting harms and what companies and platforms should do, but I think it is vital that we also write it into the Bill. For example, on the definitions around certain characteristics and certain groups, in previous iterations reference was made to protected characteristics. I know certain groups can go wider than that, but naming those protected characteristics is really important, so that they are front and centre and the platforms know that that is exactly what they have to cover. That will cover all the bases and ensure that that happens.

Kirsty Blackman

I have quite a specific question on something that is a bit tangential.

The Chair

Last one, please.

Kirsty Blackman

Q If someone has consented to take part in pornography and they later change their mind and would like it to be taken down, do you think they should have the right to ask a porn website, for example, to take it down?

Professor Clare McGlynn: That is quite challenging not only for pornography platforms but for sex workers, in that if you could participate in pornography but at any time thereafter withdraw your consent, it is difficult to understand how a pornography company and the sex worker would be able to make a significant amount of money. The company would be reluctant to invest because it might have to withdraw the material at any time. In my view, that is quite a challenge. I would not go down that route, because what it highlights is that the industry can be exploitative and that is where the concern comes from. I think there are other ways to deal with an exploitative porn industry and other ways to ensure that the material online has the full consent of participants. You could put some of those provisions into the Bill—for example, making the porn companies verify the age and consent of those who are participating in the videos for them to be uploaded. I think that is a better way to deal with that, and it would ensure that sex workers themselves can still contract to perform in porn and sustain their way of life.

Kim Leadbeater

Q Thank you very much—this is extremely interesting and helpful. You have covered a lot of ground already, but I wonder whether there is anything specific you think the Bill should be doing more about, to protect girls—under-18s or under-16s—in particular?

Janaya Walker: A lot of what we have discussed in terms of naming violence against women and girls on the face of the Bill includes children. We know that four in five offences of sexual communications with a child involved girls, and a lot of child abuse material is targeted at girls specifically. The Bill as a whole takes a very gender-neutral approach, which we do not think is helpful; in fact, we think it is quite harmful to trying to reduce the harm that girls face online.

This goes against the approach taken in the Home Office violence against women and girls strategy and its domestic abuse plan, as well as the gold-standard treaties the UK has signed up to, such as the Istanbul convention, which we signed and have recently committed to ratifying. The convention states explicitly that domestic laws, including on violence against women and girls online, need to take a very gendered approach. Currently, it is almost implied, with references to specific characteristics. We think that in addressing the abuse that girls, specifically, experience, we need to name girls. To clarify, the words “women”, “girls”, “gender” and “sex” do not appear in the Bill, and that is a problem.

Jessica Eagelton: May I add a point that is slightly broader than your question? Another thing that the Bill does not do at the moment is provide for specialist victim support for girls who are experiencing online abuse. There has been some discussion about taking a “polluter pays” approach; where platforms are not compliant with the duties, for example, a percentage of the funds that go to the regulator could go towards victim support services, such as the revenge porn helpline and Refuge’s tech abuse team, that provide support to victims of abuse later on.

Professor Clare McGlynn: I can speak to pornography. Do you want to cover that separately, or shall I do that now?

--- Later in debate ---
Dean Russell

Thank you.

Kirsty Blackman

Q My first question is for Lulu. Do small tech companies have enough staff with technical expertise to be able to fulfil their obligations under the Bill?

Lulu Freemont: It is a great question. One of the biggest challenges is capacity. We hear quite a lot from the smaller tech businesses within our membership that they will have to divert their staff away from existing work to comply with the regime. They do not have compliance teams, and they probably do not have legal counsel. Even at this stage, to try to understand the Bill as it is currently drafted—there are lots of gaps—they are coming to us and saying, “What does this mean in practice?” They do not have the answers, or the capability to identify that. The attendant regulatory costs—thinking about the staff that you have and what they cost, and making sure the regulation is proportionate, given the need to divert people away from business development or whatever other work you might be doing in your business—are really fundamental.

Another real risk, and something in the Bill that smaller businesses are quite concerned about, is the potential proposal to extend the senior management liability provisions. We can understand them being in there to enable the regulators to do their job—information requests—but if there is any extension into individual pieces of content, coupled with a real lack of definitions, those businesses might find themselves in the position of restricting access to their services, removing too much content or feeling like they cannot comply with the regime in a proportionate way. That is obviously a very extreme case study. It will be Ofcom’s role to make sure that those businesses are being proportionate and understand the provisions, but the senior management liability does have a real, chilling impact on the smaller businesses within our membership.

Adam Hildreth: One of the challenges that we have seen over the last few years is that you can have a business that is small in revenue but has a huge global user base, with millions of users, so it is not really a small business; it just has not got to the point where it is getting advertisers and getting users to pay for it. I have a challenge on the definition of a small to medium-sized business. Absolutely, for start-ups with four people in a room—or perhaps even still just two—that do not have legal counsel or anything else, we need to make it simple for those types of businesses to ingest and understand what the principles are and what is expected of them. Hopefully they will be able to do quite a lot early on.

The real challenge comes when someone labels themselves as a small business but they have millions of users across the globe—and sometimes actually quite a lot of people working for them. Some of the biggest tech businesses in the world that we all use had tens of people working for them at one point in time, when they had millions of users. That is the challenge, because there is an expectation for the big-tier providers to be spending an awful lot of money, when the small companies are actually directly competing with them. There is a challenge to understanding the definition of a small business and whether that is revenue-focused, employee-focused or about how many users it has—there may be other metrics.

Ian Stevenson: One of the key questions is how much staffing this will actually take. Every business in the UK that processes data is subject to GDPR from day one. Few of them have a dedicated data protection officer from day one; it is a role or responsibility that gets taken on by somebody within the organisation, or maybe somebody on the board who has some knowledge. That is facilitated by the fact that there are a really clear set of requirements there, and there are a lot of services you can buy and consume that help you deliver compliance. If we can get to a point where we have codes of practice that make very clear recommendations, then even small organisations that perhaps do not have that many staff to divert should be able to achieve some of the basic requirements of online safety by buying in the services and expertise that they need. We have seen with GDPR that many of those services are affordable to small business.

If we can get the clarity of what is required right, then the staff burden does not have to be that great, but we should all remember that the purpose of the Bill is to stop some of the egregiously bad things that happen to people as a result of harmful content, harmful behaviours and harmful contact online. Those things have a cost in the same way that implementing data privacy has a cost. To come back to Lulu’s point, it has to be proportionate to the business.

Mrs Miller

Q Adam, you said a few moments ago that companies are starting to put safety at the core of what they do, which will be welcome to us all—maybe it should have happened a lot earlier. I know you have worked a lot in that area. Regulators and company owners will have to depend on an ethical culture in their organisations if they are going to abide by the new regulations, because they cannot micromanage and regulators cannot micromanage. Will the Bill do enough to drive that ethical culture? If not, what more could it do or could the industry do? I would be really interested in everybody’s answer to this one, but I will start with Adam.

Adam Hildreth: What we are seeing from the people that are getting really good at this and that really understand it is that they are treating this as a proper risk assessment, at a very serious level, across the globe. When we are talking about tier 1s, they are global businesses. When they do it really well, they understand risk and how they are going to roll out systems, technology, processes and people in order to address that. That can take time. Yes, they understand the risk, who it is impacting and what they are going to do about it, but they still need to train people and develop processes and maybe buy or build technology to do it.

We are starting to see that work being done really well. It is done almost in the same way that you would risk assess anything else: corporate travel, health and safety in the workplace—anything. It should really become one of those pillars. All those areas I have just gone through are regulated. Once you have regulation there, it justifies why someone is doing a risk assessment, and you will get businesses and corporates going through that risk assessment process. We are seeing others that do not do the same level of risk assessment and they do not have that same buy-in.

--- Later in debate ---
The Chair

I have three Members and the Minister to get in before 5 o’clock, so I urge brief questions and answers please.

Kirsty Blackman

Q Is it technically possible—I do not need to know how—to verify the age of children who are under 16, for example?

Dr Rachel O'Connell: Yes.

Kirsty Blackman

Q So technology exists out there for that to happen.

Dr Rachel O'Connell: Yes.

Kirsty Blackman

Q Once we have the verification of those ages, do you think it would be possible or desirable to limit children’s interactions to only with other children? Is that the direction you were going in?

Dr Rachel O'Connell: I will give an example. If you go to an amusement park, kids who are below four feet, for example, cannot get on the adult rides, so the equivalent would be that they should not be on an 18-plus dating site. The service can create it at a granular level so the kids can interact with kids in the same age group or a little bit older, but they can also interact with family. You can create circles of trust among verified people.

Kirsty Blackman

For a game like Roblox, which is aimed at kids—it is a kids platform—if you had the age verification and if that worked, you could have a situation where a 13-year-old on Roblox could only interact with children who are between 12 and 14. Does the technology exist to make that work?

Dr Rachel O'Connell: You could do. Then if you were using it in esports or there was a competition, you could broaden it out. The service can set the parameters, and you can involve the parents in making decisions around what age bands their child can play with. Also, kids are really into esports and that is their future, so there are different circumstances and contexts that the technology could enable.
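As a rough sketch of the age-banding idea being described, with purely hypothetical band widths and a "circle of trust" exception for verified family members (none of this reflects any specific platform's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class VerifiedUser:
    user_id: str
    verified_age: int
    family_circle: set = field(default_factory=set)  # ids of verified family members

def may_interact(a: VerifiedUser, b: VerifiedUser, band: int = 1) -> bool:
    """Allow interaction if verified ages fall within +/- `band` years,
    or if the users are in each other's family circle (assumed rules)."""
    if b.user_id in a.family_circle or a.user_id in b.family_circle:
        return True
    return abs(a.verified_age - b.verified_age) <= band

child = VerifiedUser("alex", 13)
peer = VerifiedUser("sam", 14)
adult = VerifiedUser("jo", 34, family_circle={"alex"})

print(may_interact(child, peer))   # True: within the one-year band
print(may_interact(child, adult))  # True: family circle exception
print(may_interact(peer, adult))   # False: outside the band, no family link
```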

Kirsty Blackman

Q Finally, do you think it would be desirable for Ofcom to consider a system with more consistency in parental controls, so that parents can always ensure that their children cannot talk to anybody outside their circle? Would that be helpful?

Dr Rachel O'Connell: There is a history of parental controls, and only 36% of parents use them. Ofcom research consistently says that it is 70%, but in reality, it is lower. When using age verification, the parents are removing the ability to watch everything. It is a platform; they are providing the digital playground. In the same way, when you go on swings and slides, there is bouncy tarmac because you know the kids are going to use them. It is like creating that health and safety environment in a digital playground.

When parents receive a notification that their child wants to access something, there could be a colour-coded nutrition-style thing for social media, livestreaming and so on, and the parents could make an informed choice. It is then up to the platform to maintain that digital playground and run those kinds of detection systems to see if there are any bad actors in there. That is better than parental controls because the parent is consenting and it is the responsibility of the platform to create the safer environment. It is not the responsibility of the parent to look over the child’s shoulder 24/7 when they are online.

Kim Leadbeater

Q The age verification stuff is really interesting, so thank you to our witnesses. On violence against women and girls, clauses 150 to 155 set out three new communications offences. Do you think those offences will protect women from receiving offensive comments, trolling and threats online? What will the Bill mean for changing the way you manage those risks on your platforms?

Jared Sine: I do not know the specific provisions but I am familiar with the general concept of them. Any time you put something in law, it can either be criminalised or have enforcement behind it, and I think that helps. Ultimately, it will be up to the platforms to come up with innovative technologies or systems such as “Are You Sure?” and “Does This Bother You?” which say that although the law says x, we are going to go beyond that to find tools and systems that make it happen on our platform. Although I think it is clearly a benefit to have those types of provisions in law, it will really come down to the platforms taking those extra steps in the future. We work with our own advisory council, which includes the founder of the #MeToo movement, REIGN and others, who advise us on how to make platforms safer for those things. That is where the real bread gets buttered, so to speak.

--- Later in debate ---
Mrs Miller

Q So what needs to change in the Bill to make sure that happens? I am not clear.

Susie Hargreaves: We just want to make sure that the ability to scan in an end-to-end encrypted environment is included in the Bill in some way.

Kirsty Blackman

Q The ability to scan is there right now—we have got that—so you are just trying to make sure we are standing still, basically. Am I correct in my understanding?

Susie Hargreaves: I think with technology you can never stand still. We do not know what is coming down the line. We have to deal with the here and now, but we also need to be prepared to deal with whatever comes down the line. The answer, “Okay, we will just get people to report,” is not a good enough replacement for the ability to scan for images.

When the privacy directive was introduced in Europe and Facebook stopped scanning for a short period, we lost millions of images. What we know is that we must continue to have those safety mechanisms in place. We need to work collectively to do that, because it is not acceptable to lose millions of images of child sexual abuse and create a forum where people can safely share them without any repercussions, as Rhiannon says. One survivor we talked to in this space said that one of her images had been recirculated 70,000 times. The ability to have a hash of a unique image, go out and find those duplicates and make sure they are removed means that people are not re-victimised on a daily basis. That is essential.
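The duplicate-finding workflow described here can be sketched in a few lines. Real deployments use perceptual hashes such as PhotoDNA, which also match re-encoded or slightly altered copies; this simplified sketch uses a plain SHA-256 digest, so it only catches byte-identical files, and the hash list shown is a placeholder rather than a real IWF list.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Digest of the raw image bytes (exact matches only; production systems
    use perceptual hashing to survive resizing and re-compression)."""
    return hashlib.sha256(image_bytes).hexdigest()

# In practice a hash list of known abuse imagery would come from a body such as
# the IWF; this entry is just the SHA-256 of empty bytes, used as a placeholder.
known_hashes = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}

def should_block(uploaded_bytes: bytes) -> bool:
    """Block an upload whose hash matches the known list."""
    return image_hash(uploaded_bytes) in known_hashes

print(should_block(b""))               # True: matches the placeholder hash
print(should_block(b"holiday photo"))  # False: not on the list
```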

Kirsty Blackman

Q Thinking about how to prevent grooming behaviour, does the Bill have enough in place to protect children from conversations that they may have with adults, or from facing grooming behaviour online?

Rhiannon-Faye McDonald: There is one specific point that I would like to raise about this. I am concerned about private communications. We know that many offenders identify and target children on more open platforms, and then very quickly move them to more private platforms to continue the grooming and abuse. We were very pleased to see that private communications were brought in scope. However, there is a difficulty with the code of practice. When that is drafted, Ofcom is not going to be able to require proactive tools to be used to identify grooming. That includes things like PhotoDNA and image-based and text-based classifiers.

So although we have tools that we can use currently, which can identify conversations where grooming is happening, we are not going to be using those immediately on private platforms, on private communications where the majority of grooming is going to happen. That means there will be a delay while Ofcom establishes that there is a significant problem with grooming on a platform and then issues a notice to require those tools to be used.

Kirsty Blackman

Q You mentioned the reporting mechanisms that are in place, Susie. Yes, they are not the only tool, and should not be the only tool—many more things should be happening—but are the reporting mechanisms that will be in place once the Bill has come in and is being embedded sufficient, or do the requirements for platforms to have reporting mechanisms need to be improved as well?

Susie Hargreaves: An awful lot of work has already gone into this over the past few years. We have been working closely with Departments on the draft code of practice. We think that, as it stands, it is in pretty good shape. We need to work more closely with Ofcom as those codes are developed—us and other experts in the field. Again, it needs to be very much not too directive, in the sense that we do not want to limit people, and to be available for when technology changes in the future. It is looking in the right shape, but of course we will all be part of the consultation and of the development of those practices as they go. It requires people to scan their networks, to check for child sexual abuse and—I guess for the first time, the main thing—to report on it. It is going to be a regulated thing. In itself, that is a huge development, which we very much welcome.

Kirsty Blackman

Q I have one last question. Rhiannon, a suggestion was made earlier by Dr Rachel O’Connell about age verification and only allowing children to interact with other children whose verified age is within a certain range. Do you think that would help to prevent online grooming?

Rhiannon-Faye McDonald: It is very difficult. While I feel strongly about protecting children from encountering perpetrators, I also recognise that children need to have freedoms and the ability to use the internet in the ways that they like. I think if that was implemented and it was 100% certain that no adult could pose as a 13-year-old and therefore interact with actual 13-year-olds, that would help, but I think it is tricky.

Susie Hargreaves: One of the things we need to be clear about, particularly where we see children groomed—we are seeing younger and younger children—is that we will not ever sort this just with technology; the education piece is huge. We are now seeing children as young as three in self-generated content, and we are seeing children in bedrooms and domestic settings being tricked, coerced and encouraged into engaging in very serious sexual activities, often using pornographic language. Actually, a whole education piece needs to happen. We can put filters and different technology in place, but remember that the IWF acts after the event—by the time we see this, the crime has been committed, the image has been shared and the child has already been abused. We need to bump up the education side, because parents, carers, teachers and children themselves have to be able to understand the dangers of being online and be supported to build their resilience online. They are definitely not to be blamed for things that happen online. You can see from Rhiannon’s own story how quickly it can happen and how vulnerable children are at the moment.

Rhiannon-Faye McDonald: For those of you who don’t know, it happened very quickly to me, within the space of 24 hours, from the start of the conversation to the perpetrator coming to my bedroom and sexually assaulting me. I have heard other instances where it has happened much more quickly than that. It can escalate extremely quickly.

Just to add to Susie’s point about education, I strongly believe that education plays a huge part in this. However, we must be very careful in how we educate children, so that the focus is not on how to keep themselves safe, because that puts the responsibility on them, which in turn increases the feelings of responsibility when things do go wrong. That increased feeling of responsibility makes it less likely that they will disclose that something has happened to them, because they feel that they will be blamed. It will decrease the chance that children will tell us that something has happened.

Barbara Keeley

Q Just to follow up on a couple of things, mainly with Susie Hargreaves. You mentioned reporting mechanisms and said that reporting will be a step forward. However, the Joint Committee on the draft Bill recommended that the highest-risk services should have to report quarterly data to Ofcom on the results of their child sexual exploitation and abuse removal systems. What difference would access to that kind of data make to your work?

Susie Hargreaves: We already work with the internet industry. They currently take our services and we work closely with them on things such as engineering support. They also pay for our hotline, which is how we find child sexual abuse. However, the difference it would make is that we hope then to be able to undertake work where we are directly working with them to understand the level of their reports and data within their organisations.

At the moment, we do not receive that information from them. It is very much that we work on behalf of the public and they take our services. However, if we were suddenly able to work directly with them—have information about the scale of the issue within their own organisations and work more directly on that—then that would help to feed into our work. It is a very iterative process; we are constantly developing the technology to deal with the current threats.

It would also help us by giving us more intelligence and by allowing us to share that information, on an aggregated basis, more widely. It would certainly also help us to understand that they are definitely tackling the problem. We do believe that they are tackling the problem, because it is not in their business interests not to, but it just gives a level of accountability and transparency that does not exist at the moment.

--- Later in debate ---
Alex Davies-Jones

Q So as a result of these exemptions, the Bill as it stands could make the internet less safe than it currently is.

Kyle Taylor: The Bill as it stands could absolutely make the internet less safe than it currently is.

Kirsty Blackman

Q You have done a really good job of explaining the concerns about journalistic content. Thinking about the rest of the Bill for a moment, do you think the balance between requiring the removal of content and the prioritisation of content is right? Do you think it will be different from how things are now? Do you think there is a better way it could be done in the Bill?

Ellen Judson: The focus at the moment is too heavily on content. There is a sort of tacit equation of content removal—sometimes content deprioritisation, but primarily content removal—as the way to protect users from harm, and as the threat to freedom of expression. That is where the tension comes in with how to manage both those things at once. What we would want from a Bill that was taking more of a systems approach is thinking: where are platforms making decisions about how they are designing their services, and how they are operating their services at all levels? Content moderation policy is certainly included, but it goes back to questions of how a recommendation algorithm is designed and trained, who is involved in that process, and how human moderators are trained and supported. It is also about what functionality users are given and what behaviour is incentivised and encouraged. There is a lot of mitigation that platforms can put in place that does not talk about directly affecting user content.

I think we should have risk assessments that focus on the risks of harms to users, as opposed to the risk of users encountering harmful content. Obviously there is a relationship, but one piece of content may have very different effects when it is encountered by different users. It may cause a lot of harm to one user, whereas it may not cause a lot of harm to another. We know that when certain kinds of content are scaled and amplified, and certain kinds of behaviour are encouraged or incentivised, we see harms at a scale that the Bill is trying to tackle. That is a concern for us. We want more of a focus on some things that are mentioned in the Bill—business models, platform algorithms, platform designs and systems and processes. They often take a backseat to the issues of content identification and removal.

Kyle Taylor: I will use the algorithm as an example, because this word flies around a lot when we talk about social media. An algorithm is a calculation that is learning from people’s behaviour. If society is racist, an algorithm will be racist. If society is white, an algorithm will be white. You can train an algorithm to do different things, but you have to remember that these companies are for-profit businesses that sell ad space. The only thing they are optimising for in an algorithm is engagement.

What we can do, as Ellen said, through a system is force optimisation around certain things, or drive algorithms away from certain types of content, but again, an algorithm is user-neutral. An algorithm does not care what user is saying what; it is just “What are people clicking on?”, regardless of what it is or who said it. An approach to safety has to follow the same methodology and say, “We are user-neutral. We are focused entirely on propensity to cause harm.”

The second piece is all the mitigation measures you can take once a post is up. There has been a real binary of “Leave it up” and “Take it down”, but there is a whole range of stuff—the most common word used is “friction”—to talk about what you can do with content once it is in the system. You have to say to yourself, “Okay, we absolutely must have free speech protections that exceed the platform’s current policies, because they are not implemented equally.” At the same time, you can preserve someone’s free expression by demonetising content to reduce the incentive of the company to push that content or user through its system. That is a way of achieving both a reduction in harm and the preservation of free expression.

Kirsty Blackman

May I just ask one more question, Chair?

The Chair

Briefly, because there are two other Members and the Minister wishing to ask questions.

Kirsty Blackman

Q Thanks. On the propensity to cause harm, we heard earlier that a company might create a great new feature and put it out, but then there is a period—a lag, if you like—before they realise the harm that is being caused. Do you trust that companies would have the ability to understand in advance of doing something what harm it may cause, and adequately to assess that?

Ellen Judson: I think there are a lot of things that companies could be doing. Some of these things are in research that they probably are conducting. As we have seen from the Facebook files, companies are conducting that sort of research, but we aren’t privy to the results. I think there are a couple of things we want to see. First, we want companies to have to be more transparent about what kind of testing they have done, or, if not testing, about who they have consulted when designing these products. Are they consulting human rights experts? Are they consulting people who are affected by identity-based harm, or are they just consulting their shareholders? Even that would be a step in the right direction, and that is why it is really important.

We feel that there need to be stronger provisions in the Bill for independent researcher and civil society access to data. Companies will be able to do a certain amount themselves, and regulators will have certain powers to investigate and do their own research, but it requires the added efforts of civil society properly to hold companies to account for the effects of certain changes they have made—and also to help them in identifying what the effects of those changes to design have been. I think that is really crucial.

The Chair

We are playing “Beat the clock”. I am going to ask for brief answers and brief questions, please. I will take one question from Kim Leadbeater and one from Barbara Keeley.