ONLINE SAFETY BILL (Second sitting) Debate
I have been clear about where I set the line. [Interruption.] I have said that if something is illegal, it is illegal. The terms of service of the platforms largely cover the list that we are talking about. As my hon. Friend the Member for Folkestone and Hythe and I have both said, the terms of service of the vast majority of platforms—the big category 1 platforms—set a higher bar than was in our original Bill. The hon. Member for Luton North talked about whether we should have more evidence. I understand that the pre-legislative scrutiny committee heard evidence and came to a unanimous conclusion that the “legal but harmful” conditions should not be in the Bill.
A few moments ago, the Minister compared the online world to the real world. Does he agree that they are not the same? Sadly, the sort of thing that someone says in the pub on a Friday night to two or three of their friends is very different from someone saying something dangerously harmful online that can reach millions, even billions, of people in a very short space of time. The person who spoke in the pub might get up the following morning and regret what they said, but no harm was done. Once something is out there in the online world, very serious damage can be done very quickly.
The hon. Lady makes a good point. I talked about the offline world rather than the real world, but clearly that can happen. That is where the balance has to be struck, as we heard from my hon. Friend the Member for Don Valley. It is not black and white; it is a spectrum of greys. Any sensible person can soon see when they stray into areas that we have talked about, such as Holocaust denial and extremism, but we do not want to penalise people who are invariably just testing their freedom of expression.
It is a fine balance, but I think that we have reached the right balance between protecting freedom of expression and protecting vulnerable adults by having three layers of checks. The first is illegality. The second is enforcing the terms of service, which, for the vast majority of platforms, provide a higher bar than we had in the original Bill, so that we can see right at the beginning how they will be enforced by the platforms. If they change them and do not adhere to them, Ofcom can step in. Ofcom can step in at any point to ensure that they are being enforced. The third is a safety net.
I totally appreciate the point that the hon. Lady makes, which is a different one. For gambling, the inducement to act straightaway often comes in the form of advertising. It usually comes in the form of free bets and immediate inducements to act. People who have self-excluded should not be targeted in that way. We need to ensure that that is rigorously enforced on online platforms too.
It is a pleasure to serve under your chairship, Dame Angela. It is lovely to be back in a Public Bill Committee with many familiar faces—and a few new ones, including the Minister. However, after devoting many weeks earlier this year to the previous Committee, I must admit that it is with some frustration that we are back here with the Government intent on further weakening their Bill.
Throughout the passage of the Bill, I have raised a number of specific concerns, from democratic and journalistic exemptions, to age verification, recognised news publishers, advocacy bodies and media literacy. On clause 14, while I support the principles of Government amendments 15 and 16, I draw the Minister’s attention to the importance of amendment (a) to amendment 15 and amendment (a) to amendment 16. He has already said that he is sympathetic to those amendments. Let me try to convince him to turn that sympathy into action.
I will focus primarily on an issue that is extremely important to me and to many others: extremism and radicalisation. However, while I will focus on the dangers of extremism and radicalisation, be it right-wing, Islamist, incel or other, the dangers that I am about to set out—the chain of events that leads to considerable harm online—are the same for self-harm content, eating disorder content, health disinformation, climate change disinformation or any dangerous, hateful material directed at people based on their sex, sexual orientation, ethnicity, religion or other characteristics.
Such content is not just deeply offensive and often wholly inaccurate; it is dangerous and vile and serves only to spread harm, misinformation and conspiracy. To be clear, such content is not about a social media user stating how upset and angry they are about the football result, or somebody disagreeing legitimately and passionately about a political issue. It is not the normal, everyday social media content that most people see on their feeds.
This is content that is specifically, carefully and callously designed to sit just below the criminal threshold, yet can still encourage violence, self-harm or worse. It is content used by extremists of all types that lures vulnerable people in, uses social media likes and comments to create the illusion of legitimacy and popularity, and then directly targets those most likely to be susceptible, encouraging them either to commit harm or to move on to smaller but high-harm platforms that may fall out of the scope of the Bill. This is not free speech; it is content that can act as a dangerous gateway to radicalisation and extremism. The Government know how dangerous it is because their own report from His Majesty’s Prison and Probation Service last year found:
“The Internet appears to be playing an increasingly prominent role in radicalisation processes of those convicted of extremist offences in England and Wales.”
Hon. Members will understand my deep and personal interest in this matter. Since the murder of my sister, a Member of this House, six and a half years ago by a far-right extremist, I have worked hard to bring communities and people together in the face of hatred. Some of that work has included meeting former extremists and discussing how they were radicalised. Those conversations were never easy, but what became very clear to me was that such people are not born extremists. Their radicalisation starts somewhere, and it is often somewhere that appears to be completely innocent, such as a Facebook group about issues or problems in their community, a Twitter discussion about current affairs or the state of the country, or even a page for supporters of their football team.
One day, a comment is posted that is not illegal and is not hate speech, but that references a conspiracy or a common trope. It is an ideological remark placed there to test the water. The conversation moves on and escalates. More disturbing or even violent comments start to be made. They might be accompanied by images or videos, leading those involved down a more sinister path. Nothing yet is illegal, but clearly—I hope we would all agree—it is unacceptable.
The number of contributors reduces, but a few remain. No warnings are presented, no flags are raised and it appears like normal social media content. However, the person reading it might be lonely or vulnerable, and now feels that they have found people to listen to them. They might be depressed or unhappy and looking to blame their situation on something or someone. They might feel that nobody understands them, but these people seem to.
The discussion is then taken to a more private place, to the smaller but more harmful platforms that may fall outside the scope of the Bill, but that will now become the go-to place for spreading extremism, misinformation and other harmful content. The radicalisation continues there—harder to track, harder to monitor and harder to stop. Let us remember, however, that all of that started with those legal but harmful comments being witnessed. They were clearly unacceptable, but mainstream social media gave them legitimacy. The Online Safety Bill will do nothing to stop that.
Unfortunately, that chain of events occurs far too often. It is a story told many times, about how somebody vulnerable is lured in by those wishing to spread their hatred. It is hosted by major social media platforms. Hon. Members may remember the case of John, a teenager radicalised online and subsequently sentenced. His story was covered by The Guardian last year. John was feeling a sense of hopelessness, which left him susceptible to the messaging of the far right. Aged 15, he felt “written off”: he was in the bottom set at school, with zero exam expectations, and felt that his life opportunities would be dismal. The far right, however, promised him a future. John became increasingly radicalised by an online barrage of far-right disinformation. He said:
“I was relying on the far right for a job. They were saying that when they got power they would be giving jobs to people like me”.
John now says:
“Now I know the posts were all fake, but the 15-year-old me didn’t bother to fact-check.”
For some people in the room, that might seem like a totally different world. Thankfully, for most of us, it is. However, if Members take the time to see some of that stuff online, it is extremely disturbing and alarming. It is a world that we do not understand, but we have to be aware that it exists. The truth, as we can see, is that such groups use popular online platforms to lure in young people and give them a sense of community. One white nationalist group actively targets younger recruits and recently started Call of Duty warcraft gaming tournaments for its supporters. Let us be clear: John was 15, but he could easily have been 18, 19 or indeed significantly older.
John was radicalised by the far right, but we know that similar methods are used by Islamist extremists. A 2020 report from New York University’s Centre for Global Affairs stated:
“The age of social media has allowed ISIS to connect with a large-scale global audience that it would not be able to reach without it...Through strategic targeting, ISIS selects those who are most vulnerable and susceptible to radicalization”.
That includes those who are
“searching for meaning or purpose in their life, feeling anger and…alienated from society”.
The ages that are most vulnerable are 15 to 25.
Social media platforms allow ISIS to present its propaganda as mainstream news at little to no cost. Preventing that harm and breaking those chains of radicalisation is, however, possible, and the Bill could go much further to put the responsibility not on the user, but on the platforms. I believe that those platforms need unique regulation, because social media interaction is fundamentally different from real-life social interaction.
Social media presents content to us as if it is the only voice and viewpoint. On social media, people are far more likely to say things that they never would in person. On social media, those views spread like wildfire in a way that they would not in real life. On social media, algorithms find such content and pump it towards us, in a way that can become overwhelming and that can provide validity and reassurance where doubt might otherwise set in.
Allowing that content to remain online without warnings, or allowing it to be visible to all users unless they go searching through their settings to turn it off—which is wholly unrealistic—is a dereliction of duty and a missed opportunity to clean up the platforms and break the chains of radicalisation. As I set out, the chain of events is not unique to one form of radicalisation or hateful content. The same online algorithms that present extremist content to users also promote negative body image, eating disorders, and self-harm and suicide content.
I hope the Committee realises why I am so impassioned about “legal but harmful” clauses, and why I am particularly upset that a few Conservative Members appear to believe that such content should remain unchecked online because of free speech, with full knowledge that it is exactly that content that serves as the gateway for people to self-harm and to be radicalised. That is not free speech.
I will speak briefly in favour of amendments 102 and 103. As I mentioned a few moments ago, legal but harmful content can act as the gateway to dangerous radicalisation and extremism. Such content, hosted by mainstream social media platforms, should not be permitted unchecked online. I appreciate that for children the content will be banned, but I strongly believe that such content should be hidden by default from all adult users, as the amendments would ensure.
The chain of events that leads to radicalisation, as I spelt out, relies on groups and individuals reaching people unaware that they are being radicalised. The content is posted in otherwise innocent Facebook groups, forums or Twitter threads. Adding a toggle, hidden somewhere in users’ settings, which few people know about or use, will do nothing to stop that. It will do nothing to stop the harmful content from reaching vulnerable and susceptible users.
We, as legislators, have an obligation to prevent, at root, such harmful content from reaching and drawing in those who are vulnerable and susceptible to the misinformation and conspiracy spouted by vile groups and individuals wishing to spread their harm. The only way that we can make meaningful progress is by putting the responsibility squarely on platforms, to ensure that by default users do not come across the content in the first place.
In the previous debate, I talked about amendment 15, which brought in a lot of protections against content that encourages and promotes, or provides instruction for, self-harm, suicide or eating disorders, and against content that is abusive or incites hate on the basis of race, religion, disability, sex, gender reassignment or sexual orientation. We have also placed a duty on the largest platforms to offer adults the option to filter out unverified users if they so wish. That is a targeted approach that reflects areas where vulnerable users in particular could benefit from having greater choice and control. I come back to the fact that that is the third shield and an extra safety net. A lot of the extremes we have heard about, which have been used as debating points, as important as they are, should very much be wrapped up by the first two shields.
We have a targeted approach, but it is based on choice. It is right that adult users have a choice about what they see online and who they interact with. It is right that this choice lies in the hands of those adults. The Government mandating that these tools be on by default goes against the central aim of users being empowered to choose for themselves whether they want to reduce their engagement with some kinds of legal content.
We have been clear right from the beginning that it is not the Government’s role to say what legal content adults should or should not view online or to incentivise the removal of legal content. That is why we removed the adult legal but harmful duties in the first place. I believe we are striking the right balance between empowering adult users online and protecting freedom of expression. For that reason, I am not able to accept the amendments from the hon. Member for Pontypridd.
As we know, this clause requires providers of relevant services to publish annual transparency reports and sets out Ofcom’s powers in relation to those reports. The information set out in transparency reports is intended to help users to understand the steps that providers are taking to help keep them safe and to provide Ofcom with the information required to hold them to account.
These duties on regulated services are very welcome indeed. Labour has long held the view that mandatory transparency reporting and reporting mechanisms are vital to hold platforms to account, and to understand the true nature of how online harm is driven and perpetuated on the internet.
I will reiterate the points made in previous Committee sittings about our concerns over the regularity of these transparency reports. I note that, sadly, those provisions remain unchanged, so the reports will only have to be submitted to Ofcom annually. It is important that the Minister truly considers the rapid rate at which the online world can change and develop, so I urge him to reconsider this point and to require the reports twice a year. Labour firmly believes that increasing the frequency of the transparency reports will ensure that platforms and services keep their finger on the pulse, and are forced to be aware of and act on emergent risks. In turn, that would compel Ofcom to do the same in its role as an industry regulator.
I must also put on the record some of our concerns about subsections (12) and (13), which state that the Secretary of State of the day could amend by regulation the frequency of the transparency reporting, having consulted Ofcom first. I hope that the Minister can reassure us that this approach will not result in our ending up in a position where, perhaps because of Ofcom’s incredible workload, transparency reporting becomes even less frequent than an annual occurrence. We need to see more transparency, not less, so I really hope that he can reassure me on this particular point.
Does my hon. Friend agree that transparency should be at the heart of this Bill, and that the Government have missed an opportunity to accelerate the inclusion of a provision in the Bill, namely the requirement to give researchers and academics access to platform data? Data access must be prioritised in the Bill, and without such prioritisation the UK will fall behind the rest of Europe in safety, research and innovation. The accessibility and transparency of that data from a research perspective are really important.
I completely agree with my hon. Friend. We both made the point at length in previous sittings of the Committee about the need to ensure transparency, access to the data, and access to reporting for academics, civil society and researchers.
That also goes to the point that it is not for this Committee or this Minister—it is not in his gift—to determine something that we have all discussed in this place at length, which is the potential requirement for a standalone Committee specifically to consider online harm. Such a Committee would look at whether this legislation is actively doing what we need it to do, whether it needs to be reviewed, whether it could look at the annual reports from Ofcom to determine the length and breadth of harm on the internet, and whether or not this legislation is actually having an impact. That all goes to the heart of transparency, openness and the review that we have been talking about.
I want to go further and raise concerns about how public the reports will be, as we have touched on. The Government claim that their so-called triple shield approach will give users of platforms and services more power and knowledge to understand the harms that they may discover online. That is in direct contradiction to the Bill’s current approach, which does not provide any clarity about exactly how the transparency reports will be made available to the public. In short, we feel that the Government are missing a significant opportunity. We have heard many warnings about what can happen when platforms are able to hide behind a veil of secrecy. I need only point to the revelations of whistleblowers, including Frances Haugen, to highlight the importance of that point.
As the Bill stands, once Ofcom has issued a notice, companies will have to produce a transparency report that
“must…be published in the manner and by the date specified in the notice”.
I want to press the Minister on that and ask him to clarify the wording. We are keen for the reports to be published publicly and in an accessible way, so that users, civil society, researchers and anyone else who wants to see them can make sense of them. The information contained in the transparency reports is critical to analysing trends and harms, so I hope that the Minister will clarify those points in his response.
Does my hon. Friend agree that if the Government are to achieve their objective—which we all share—for the Bill to be world-leading legislation, we cannot rely on whistleblowers to tell us what is really going on in the online space? That is why transparency is vital. This is the perfect opportunity to provide that transparency, so that we can do some proper research into what is going on out there. We cannot rely on whistleblowers to give us such information.
My hon. Friend is absolutely right. We want the Bill to work. We have always wanted the Bill to work. We want it to achieve its aim of keeping children, adults and everyone who uses the internet safe from the harms that are perpetuated there. If there is no transparency, how will we know whether the platforms are covertly breaking the rules, hiding content and getting round the rules? That is what they do; we know it, because we have heard it from whistleblowers, but we cannot rely on whistleblowers alone to highlight exactly what happens behind the closed doors of the platforms.
We need the transparency and the reports to be made public, so that we can see whether the legislation is working. If that does not happen, although we have waited five years, we will need another piece of legislation to fix it. We know that the Bill is not perfect, and the Minister knows that—he has said so himself—but, ultimately, we need to know that it works. If it does not, we have a responsibility as legislators to put something in place that does. Transparency is the only way in which we will figure that out.
This may be the only place in the Bill where I do not necessarily agree wholeheartedly with the Labour Front Benchers. I agree with the vast majority of what was said, but I have some concerns about making mandatory the requirement for transparency reports to be public in all circumstances, because there are circumstances in which that would simply highlight loopholes, allowing people to exploit them in a way that we do not want them to do.
Specifically on the regularity of reporting and some level of transparency, given that the Minister is keen on the commercial imperative and ensuring that people are safe, we need a higher level of transparency than we currently see among the platforms. There is a very good case to be made for some of the transparency reporting to be made public, particularly for the very largest platforms to be required to make it public, or to make sections of it public.
I want to talk about the speed of change to the terms of service and about proportionality. If Ofcom could request transparency reporting only annually, imagine that it received transparency information three days before Elon Musk took over Twitter. Twitter would be a completely different place three days later, and Ofcom would be unable to ask for more transparency information for a whole year, by which point a significant amount of damage could have been done. We have seen that the terms of service can change quickly. Ofcom would not have the flexibility to ask for an updated transparency report, even if drastic changes were made to the services.
Another thing slightly concerns me about doing this annually and not allowing a bit more flexibility. Let us say that a small platform that none of us has ever heard of, such as Mastodon, shoots to prominence overnight. Let us also say that Mastodon, as a small platform, was previously regulated, and that Ofcom had made a request for transparency information shortly before Elon Musk took over Twitter and people migrated to Mastodon. Mastodon would now be facing very different issues from those it had when it had only a small number of users, given the significant number that it has now. It would have changed dramatically, yet Ofcom would not have the flexibility to seek that information. We know that platforms in the online world have sudden stellar increases in popularity overnight. Some have been bubbling along for ages with nobody using them. Not all of them are brand-new platforms that suddenly shoot to prominence. The lack of flexibility is a problem.
Lastly, I agree about researchers being able to access the transparency information provided. It is really important that we recognise that Ofcom is not the only expert. Ofcom has a huge amount of expertise, and it is massively increasing its staff numbers to cope with these issues, but the reality is that those staff are not academic researchers. They do not have the capacity to examine every issue in depth, and they are not necessarily the most prominent experts in the field of child protection, for example. That is not to take away from the expertise in Ofcom, but we could allow it to ask a regulated group of researchers to look at the information and point out any issues that may not have been spotted, particularly given the volume of transparency reports that there are likely to be.
The hon. Lady makes an important point. In terms of transparency, the question for me is, what are the Government worried about? Surely part of the Bill is about finding out what is really going on, and the only way that we will do that is by having access to the information. The more transparency, the better. The hon. Lady is right that having experts who can research what is going on is fundamental. If there is a concern around the workload for Ofcom, that is a separate issue that the Minister needs to address, but surely the more work that is done in terms of research and transparency, the better.
We have seen that just from the people in external organisations who have contacted us about the Bill. The expertise that they have brought to the table, expertise that we do not have ourselves, has significantly improved the debate and, hopefully, the Bill; even before the formal consultations took place, that input encouraged the Minister to make the Bill better. Surely that is why the pre-legislative scrutiny Committee looked at the Bill: to improve it and to get expert advice. I still think that specific access to expertise in order to analyse the transparency reports has not been covered adequately.