
ONLINE SAFETY BILL (Second sitting)
Public Bill Committees
Committee stage (re-committed clauses and schedules): 2nd sitting
Tuesday 13th December 2022

Debate between Damian Collins and Kim Leadbeater
Amendment Paper: Public Bill Committee Amendments as at 13 December 2022
Damian Collins

I totally appreciate the point that the hon. Lady makes, which is a different one. For gambling, the inducement to act straightaway often comes through advertising, usually in the form of free bets and other immediate incentives. People who have self-excluded should not be targeted in that way. We need to ensure that that is rigorously enforced on online platforms too.

Kim Leadbeater

It is a pleasure to serve under your chairship, Dame Angela. It is lovely to be back in a Public Bill Committee with many familiar faces—and a few new ones, including the Minister. However, after devoting many weeks earlier this year to the previous Committee, I must admit that it is with some frustration that we are back here with the Government intent on further weakening their Bill.

Throughout the passage of the Bill, I have raised a number of specific concerns, from democratic and journalistic exemptions, to age verification, recognised news publishers, advocacy bodies and media literacy. On clause 14, while I support the principles of Government amendments 15 and 16, I draw the Minister’s attention to the importance of amendment (a) to amendment 15 and amendment (a) to amendment 16. He has already said that he is sympathetic to those amendments. Let me try to convince him to turn that sympathy into action.

I will focus primarily on an issue that is extremely important to me and to many others: extremism and radicalisation. However, while I will focus on the dangers of extremism and radicalisation, be it right-wing, Islamist, incel or other, the dangers that I am about to set out—the chain of events that leads to considerable harm online—are the same for self-harm content, eating disorder content, health disinformation, climate change disinformation or any dangerous, hateful material directed at people based on their sex, sexual orientation, ethnicity, religion or other characteristics.

Such content is not just deeply offensive and often wholly inaccurate; it is dangerous and vile and serves only to spread harm, misinformation and conspiracy. To be clear, such content is not about a social media user stating how upset and angry they are about the football result, or somebody disagreeing legitimately and passionately about a political issue. It is not the normal, everyday social media content that most people see on their feeds.

This is content that is specifically, carefully and callously designed to sit just below the criminal threshold, yet can still encourage violence, self-harm or worse. It is content used by extremists of all types that lures vulnerable people in, uses social media likes and comments to create the illusion of legitimacy and popularity, and then directly targets those most likely to be susceptible, encouraging them either to commit harm or to move on to smaller but high-harm platforms that may fall outside the scope of the Bill. This is not free speech; it is content that can act as a dangerous gateway to radicalisation and extremism. The Government know how dangerous it is because their own report from His Majesty’s Prison and Probation Service last year found:

“The Internet appears to be playing an increasingly prominent role in radicalisation processes of those convicted of extremist offences in England and Wales.”

Hon. Members will understand my deep and personal interest in this matter. Since the murder of my sister, a Member of this House, by a far-right extremist six and a half years ago, I have worked hard to bring communities and people together in the face of hatred. Some of that work has included meeting former extremists and discussing how they were radicalised. Those conversations were never easy, but what became very clear to me was that such people are not born extremists. Their radicalisation starts somewhere, and it is often somewhere that appears completely innocent, such as a Facebook group about issues or problems in their community, a Twitter discussion about current affairs or the state of the country, or even a page for supporters of their football team.

One day, a comment is posted that is not illegal and is not hate speech, but that references a conspiracy or a common trope. It is an ideological remark placed there to test the water. The conversation moves on and escalates. More disturbing or even violent comments start to be made. They might be accompanied by images or videos, leading those involved down a more sinister path. Nothing yet is illegal, but clearly—I hope we would all agree—it is unacceptable.

The number of contributors reduces, but a few remain. No warnings are presented, no flags are raised and it appears like normal social media content. However, the person reading it might be lonely or vulnerable, and now feels that they have found people to listen to them. They might be depressed or unhappy and looking to blame their situation on something or someone. They might feel that nobody understands them, but these people seem to.

The discussion is then taken to a more private place, to the smaller but more harmful platforms that may fall outside the scope of the Bill, but that will now become the go-to place for spreading extremism, misinformation and other harmful content. The radicalisation continues there: harder to track, harder to monitor and harder to stop. Let us remember, however, that all of that started with those legal but harmful comments being witnessed on mainstream platforms. They were clearly unacceptable, but mainstream social media give them legitimacy. The Online Safety Bill will do nothing to stop that.

Unfortunately, that chain of events occurs far too often. It is a story told many times: somebody vulnerable is lured in by those wishing to spread their hatred, and it all plays out on major social media platforms. Hon. Members may remember the case of John, a teenager radicalised online and subsequently sentenced. His story was covered by The Guardian last year. John was feeling a sense of hopelessness, which left him susceptible to the messaging of the far right. Aged 15, he felt “written off”: he was in the bottom set at school, with zero exam expectations, and felt that his life opportunities would be dismal. The far right, however, promised him a future. John became increasingly radicalised by an online barrage of far-right disinformation. He said:

“I was relying on the far right for a job. They were saying that when they got power they would be giving jobs to people like me”.

John now says:

“Now I know the posts were all fake, but the 15-year-old me didn’t bother to fact-check.”

For some people in the room, that might seem like a totally different world. Thankfully, for most of us, it is. However, if Members take the time to look at some of that material online, they will find it extremely disturbing and alarming. It is a world that we do not understand, but we have to be aware that it exists. The truth, as we can see, is that such groups use popular online platforms to lure in young people and give them a sense of community. One white nationalist group actively targets younger recruits and recently started Call of Duty and Warcraft gaming tournaments for its supporters. Let us be clear: John was 15, but he could easily have been 18, 19 or indeed significantly older.

John was radicalised by the far right, but we know that similar methods are used by Islamist extremists. A 2020 report from New York University’s Center for Global Affairs stated:

“The age of social media has allowed ISIS to connect with a large-scale global audience that it would not be able to reach without it...Through strategic targeting, ISIS selects those who are most vulnerable and susceptible to radicalization”.

That includes those who are

“searching for meaning or purpose in their life, feeling anger and…alienated from society”.

Those most vulnerable are aged between 15 and 25.

Social media platforms allow ISIS to present its propaganda as mainstream news at little to no cost. Preventing that harm and breaking those chains of radicalisation is, however, possible, and the Bill could go much further to put the responsibility not on the user, but on the platforms. I believe that those platforms need unique regulation, because social media interaction is fundamentally different from real-life social interaction.

Social media presents content to us as if it is the only voice and viewpoint. On social media, people are far more likely to say things that they never would in person. On social media, those views spread like wildfire in a way that they would not in real life. On social media, algorithms find such content and pump it towards us, in a way that can become overwhelming and that can provide validity and reassurance where doubt might otherwise set in.

Allowing that content to remain online without warnings, or allowing it to be visible to all users unless they go searching through their settings to turn it off—which is wholly unrealistic—is a dereliction of duty and a missed opportunity to clean up the platforms and break the chains of radicalisation. As I set out, the chain of events is not unique to one form of radicalisation or hateful content. The same online algorithms that present extremist content to users also promote negative body image, eating disorders, and self-harm and suicide content.

I hope the Committee realises why I am so impassioned about the “legal but harmful” clauses, and why I am particularly upset that a few Conservative Members appear to believe that such content should remain unchecked online in the name of free speech, in full knowledge that it is exactly that content that serves as the gateway for people to self-harm and to be radicalised. That is not free speech.