Kirsty Blackman

That is absolutely the case. We are talking about cats because I chose them to illustrate the situation, but people may look at content about healthy eating, and that moves on to content that encourages them to be sick. The way the algorithms step it up is insidious; they get more and more extreme, so that the linger time is increased and people do not get bored. It is important that platforms look specifically at their habit-forming features.

Charlotte Nichols (Warrington North) (Lab)

A specific case on the platform TikTok relates to a misogynist who goes by the name of Andrew Tate, who has been banned from a number of social media platforms. However, because TikTok works by making clips shorter, which makes it more difficult for the company to identify some of this behaviour among users, young boys looking for videos of things that might interest them were very quickly shown misogynist content from Andrew Tate. Because they watched one video of him, they were then shown more and more. It is easy to see how the habit-forming behaviours built into platforms’ algorithms, which the hon. Lady identifies, can also be a means of quickly radicalising children into extreme ideologies.

The Chair

Order. I think we have the message. I have to say to all hon. Members that interventions are interventions, not speeches. If Members wish to make speeches, there is plenty of time.

--- Later in debate ---
Charlotte Nichols

It is a pleasure to serve with you in the Chair, Sir Roger. I rise in support of amendments 99, 96 and 97, as did my hon. Friend the Member for Pontypridd. I have an issue with the vagueness and ambiguity in the Bill. Ministerial direction is incredibly helpful, not only for Ofcom, but for the companies and providers that will use the Bill to make technologies available to do what we are asking them to do.

As the hon. Member for Aberdeen North said, if the Bill provided for that middle ground, that would be helpful for a number of purposes. Amendment 97 refers to livestreaming; in a number of cases around the world, people have livestreamed acts of terror, such as the shooting at the Christchurch mosque. Those offences were watched in real time, as they were perpetrated, by potentially hundreds of thousands of people. We have people on watch lists—people we are aware of. If we allowed them to use a social media platform but not the livestreaming parts, that could go some way to mitigating the risk of their livestreaming something like that. Their being on the site is perhaps less of a concern, as their general use of it could be monitored in real time. Under a risk analysis, we might be happy for people to be on a platform, but consider that the risk was too great to allow them to livestream. Having such a provision would be helpful.

My hon. Friend the Member for Luton North mentioned the onus always being on the victim. When we discuss online abuse, I really hate it when people say, “Well, just turn off your messages”, “Block them” or “Change your notification settings”, as though that were a panacea. Turning off the capacity to use direct messages is a much more effective way of addressing abuse by direct message than banning the person who sent it altogether—they might just make a new account—or than relying on the recipient of the message to take action when the platform has the capacity to take away the option of direct messaging. The adage is that sunlight is the best disinfectant. When people post in public and the post can be seen by anyone, they can be held accountable by anyone. That is less of a concern to me than what they send privately, which can be seen only by the recipient.

This group of amendments is reasonable and proportionate. They would not only give clear ministerial direction to Ofcom and the technology providers, and allow Ofcom to take the measures that we are discussing, but would pivot us away from placing the onus on the recipients of abusive behaviour, or people who might be exposed to it. Instead, the onus would be on platforms to make those risk assessments and take the middle ground, where that is a reasonable and proportionate step.

--- Later in debate ---
Paul Scully

I will come to that in a second. The hon. Member for Luton North talked about putting the onus on the victim. Any element of choice is there for adults; the children will be protected anyway, as I will outline in a second. We all agree that the primary purpose of the Bill is to be a children’s protection measure.

Ofcom will set out in codes of practice the specific steps that providers can take to protect children who are using their service, and the Government expect those to include steps relating to children’s access to high-risk features, such as livestreaming or private messaging. Clause 11(4)(d) sets out that providers may be required to take measures in the following areas:

“policies on user access to the service or to particular content present on the service, including blocking users from accessing the service or particular content”.

The other areas listed are intentionally broad categories that allow providers to take specific measures. For example, a measure in the area of blocking user access to particular content could include specific measures that restrict children’s access to parts of a service, if that is a proportionate way to stop users accessing that type of content. It can also apply to any of the features of a service that enable children to access particular content, and could therefore include children’s access to livestreaming and private messaging features. In addition, the child safety duties make it clear that providers need to use proportionate systems and processes that prevent children from encountering primary priority content that is harmful to them, and protect children in age groups at risk of harm from other content that is harmful to them.

While Ofcom will set out in codes of practice the steps that providers can take to meet these duties, we expect those steps, as we have heard, to include the use of age verification to prevent children accessing content that poses the greatest risk of harm to them. To meet that duty, providers may use measures that restrict children from accessing parts of the service. The Bill therefore allows Ofcom to require providers to take that step where it is proportionate. I hope that that satisfies the hon. Member for Aberdeen North, and gives her the direction that she asked for—that is, a more specific statement that Ofcom does indeed have the powers that she seeks.

Charlotte Nichols

The Bill states that we can expect little impact on child protection before 2027-28, given the enforcement road map and the timescale on which Ofcom plans to set it out. Does the Minister not think that in the meantime, that sort of ministerial direction would be helpful? It could make Ofcom’s job easier, and would mean that children could be protected online before 2027-28.

Paul Scully

The ministerial direction that the various platforms are receiving from the Dispatch Box, from our conversations with them and from the Bill’s progress as it goes through the House of Lords will be helpful to them. We do not expect providers to wait until the very last minute to implement the measures. They are starting to do so now, but we want them to go further, quicker.

Government amendment 4 will require providers who already have a minimum age requirement for access to their service, or parts of it, to give details of the measures that they use to restrict access in their terms of service and apply them consistently. Providers will also need to provide age-appropriate protections for children using their service. That includes protecting children from harmful content and activity on their service, as well as reviewing children’s use of higher-risk features, as I have said.

To meet the child safety risk assessment duties in clause 10, providers must assess: the risk of harm to children from functionalities that facilitate the presence or dissemination of harmful content; the level of risk from different kinds of harmful content, giving separate consideration to children in different age groups; the different ways in which the service is used, and the impact of such use on the level of risk of harm; and how the design and operation of the service may increase the risks identified.

The child safety duties in clause 11 apply across all areas of the service, including the way it is operated and used by children, as well as the content present on the service. For the reasons I have set out, I am not able to accept the amendments, but I hope that the hon. Member for Aberdeen North will take on board my assurances.

--- Later in debate ---
Charlotte Nichols

I cannot help but see the Government’s planned removal of clauses 12 and 13 as essentially wrecking amendments to the Bill. Taking those provisions out of the Bill makes it a Bill not about online safety, but about child protection. We have not had five years or so of going backwards and forwards, and taken the Bill through Committee and then unprecedentedly recommitted it to Committee, in order to fundamentally change what the Bill set out to do. The fact that, at this late stage, the Government are trying to take out these aspects of the Bill melts my head, for want of a better way of putting it.

My hon. Friend the Member for Batley and Spen was absolutely right when she talked about what clauses 12 and 13 do. In effect, they are an acknowledgement that adults are also harmed online, and have different experiences online. I strongly agree with the hon. Member for Aberdeen North about this not being the protect MPs from being bullied on Twitter Bill, because obviously the provisions go much further than that, but it is worth noting, in the hope that it is illustrative to Committee members, the very different experience that the Minister and I have in using Twitter. I say that as a woman who is LGBT and Jewish—and although I would not suggest that it should be a protected characteristic, the fact that I am ginger probably plays a part as well. He and I could do the same things on Twitter on the same day and have two completely different experiences of that platform.

The risk-assessment duties set out in clause 12, particularly in subsection (5)(d) to (f), ask platforms to consider the different ways in which different adult users might experience them. Platforms have a duty to attempt to keep certain groups of people, and categories of user, safe. When we talk about free speech, the question is: freedom of speech for whom, and at what cost? Making it easier for people to perpetuate, for example, holocaust denial on the internet—a category of speech that is lawful but awful, as it is not against the law in this country to deny that the holocaust happened—makes it much less likely that I, or other Jewish people, will want to use the platform.

--- Later in debate ---
Kirsty Blackman

The hon. Member makes a powerful point about the different ways in which people experience things. That tips over into real-life abusive interactions, and goes as far as terrorist incidents in some cases. Does she agree that protecting people’s freedom of expression and safety online also protects people in their real, day-to-day life?

Charlotte Nichols

I could not agree more. I suppose that is why this aspect of the Bill is so important, not just to me but to all those categories of user. I mentioned paragraphs (d) to (f), which would require platforms to assess exactly that risk. This is not about being offended. Personally, I have the skin of a rhino. People can say most things to me and I am not particularly bothered by it. My concern is where things that are said online are transposed into real-life harms. I will use myself as an example. Online, we can see antisemitic and conspiratorial content, covid misinformation, and covid misinformation that meets with antisemitism and conspiracies. When people decide that I, as a Jewish Member of Parliament, am personally responsible for George Soros putting a 5G chip in their arm, or whatever other nonsense they have become persuaded by on the internet, that is exactly the kind of thing that has meant people coming to my office armed with a knife. The kind of content that they were radicalised by on the internet led to their perpetrating a real-life, in-person harm. Thank God—Baruch Hashem—neither I nor my staff were in the office that day, but that could have ended very differently, because of the sorts of content that the Bill is meant to protect online users from.

Damian Collins

The hon. Lady is talking about an incredibly important issue, but the Bill covers such matters as credible threats to life, incitement to violence against an individual, and harassment and stalking—those patterns of behaviour. Those are public order offences, and they are in the Bill. I would absolutely expect companies to risk-assess for that sort of activity, and to be required by Ofcom to mitigate it. On her point about holocaust denial, first, the shield will mean that people can protect themselves from seeing stuff. The further question would be whether we create new offences in law, which can then be transposed across.

Charlotte Nichols

I accept the points that the hon. Member raised, but he is fundamentally missing the point. The categories of information and content that these people had seen and been radicalised by would not fall under the scope of public order offences or harassment. The person was not sending me harassing messages before they turned up at my office. Essentially, social media companies and other online platforms have to take measures to mitigate the risk of categories of offences that are illegal, whether or not they are in the Bill. I am talking about what clauses 12 and 13 covered, whether we call it the “legal but harmful” category or “lawful but awful”. Whatever we name those provisions, by taking out of the Bill clauses relating to the “legal but harmful” category, we are opening up an area of harm that already exists, that has a real-world impact, and that the Bill was meant to go some way towards addressing.

Removing those provisions takes out the risk assessments that need to be done. The Bill says,

“(e) the level of risk of functionalities of the service facilitating the presence or dissemination of priority content that is harmful to adults, identifying and assessing those functionalities that present higher levels of risk;

(f) the different ways in which the service is used, and the impact of such use on the level of risk of harm that might be suffered by adults;

(g) the nature, and severity, of the harm that might be suffered by adults”.

Again, the idea that we are talking about offence, and that the clauses need to be taken out to protect free speech, is fundamentally nonsense.

I have already mentioned holocaust denial, but it is also worth mentioning health-related disinformation. We have already seen real-world harms from some of the covid misinformation online. It led to people including Piers Corbyn turning up outside Parliament with a gallows, threatening to hang hon. Members for treason. Obviously, that was rightly dealt with by the police, but the kind of information and misinformation that he had been getting online and that led him to do that, which is legal but harmful, will now not be covered by the Bill.

I will also raise an issue I have heard about from a number of people dealing with cancer and conditions such as multiple sclerosis. People online try to discourage them from accessing the proper medical interventions for their illnesses, and instead encourage them to take more vitamin B or adopt a vegan diet. People with cancer have died because they were encouraged online not to access cancer treatment—they were subject to lawful but awful categories of harm.

Kirsty Blackman

I wonder whether the hon. Member saw the story online about the couple in New Zealand who refused to let their child have a life-saving operation because they could not guarantee that the blood used would not come from vaccinated people. Is the hon. Member similarly concerned that this has caused real-life harm?

Charlotte Nichols

I am aware of the case that the hon. Member mentioned. I appreciate that I am probably testing the patience of everybody in the Committee Room, but I want to be clear just how abhorrent I find it that these provisions are coming out of the Bill. I am trying to be restrained, measured and reasonably concise, but that is difficult when there are so many parts of the change that I find egregious.

My final point is on self-harm and suicide content. For men under the age of 45, suicide is the biggest killer. In the Bill, we are doing as much as we can to protect young people from that sort of content. My real concern is this: many young people are being protected by the Bill’s provisions relating to children. They are perhaps waiting for support from child and adolescent mental health services, which are massively oversubscribed. The minute they tick over into 18, fall off the CAMHS waiting list and go to the bottom of the adult mental health waiting list—they may have to wait years for treatment of various conditions—there is no requirement or duty on the social media companies and platforms to do risk assessments.