Online Abuse Debate
Roger Gale (Conservative - Herne Bay and Sandwich)
(2 years, 9 months ago)
Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not comprise part of the official record
I remind Members to observe social distancing and to wear masks as appropriate, please.
Before I call the hon. Member for Newcastle upon Tyne North to move the motion, I wish to make a short statement about the sub judice measures. There are issues pertinent to the debate that may be relevant to specific cases. I remind Members that, under the terms of the House’s sub judice resolution, no reference should be made to legal proceedings that are currently live before UK courts. For clarification, that applies to coroners’ courts as well as to law courts.
I beg to move,
That this House has considered e-petitions 272087 and 575833, relating to online abuse.
Thank you for chairing this incredibly important and timely debate, Sir Roger. The way that we view social media in this country has changed dramatically in the 12 years that I have been in Parliament. In the early days, people saw it as a force for good, whether they were activists using Twitter and Facebook to organise when the Arab spring began in 2011 or just groups of like-minded people sharing photos of their cats and clips from their favourite video games. Many people took the anonymity it offered to be an unqualified positive, allowing people to share deeply personal things and be themselves in a way that, perhaps, they felt they could not be in the real world. All that potential is still there.
Social media has been an invaluable tool in keeping us connected with friends and family during these incredibly challenging two years, but the dark side of social media has also become depressingly familiar to us all. We now worry about what exactly these giant corporations have been doing with our personal information. We read research describing echo chambers that fuel political polarisation and we see the unfolding mental health impact, particularly on young girls, of the heavily edited celebrity images that always seem to be just one or two swipes away.
As the Putin regime’s disastrous invasion of Ukraine proceeds, Ukrainians do not just face the Russian troops who have entered their country. Moscow has, predictably, stepped up its misinformation campaigns on social media, with the intention of sowing confusion and undermining Ukrainian morale. Meanwhile, the ease of creating, editing and sharing videos that appear to document events has left many uncertain what to believe or where to turn for reliable information. The Ukrainian people are seeing their worst nightmare unfold in a tragic, bloody war that they did not want. Before I make my main comments, I am sure I speak for us all here in saying that our thoughts, our solidarity and our resolve are with Ukraine today. I know that many Members wanted to be in this important debate, but events and issues in the main Chamber have rather taken over.
What prompted the Petitions Committee inquiry into online abuse and the report we are debating is the growing concern expressed by petitioners about the abuse that people receive on social media platforms and the painfully slow progress in making online spaces less toxic. The scale of public concern is shown by the popularity of the e-petitions that our Committee has received on this subject in recent years, particularly the petitions created by television personalities Bobby Norris and Katie Price.
Bobby, who is sitting in the Public Gallery, is a powerful advocate and has started two petitions on this issue. The first called for the online homophobia of which he has been a target to be made a specific criminal offence. The second, which prompted our inquiry, was created in September 2019 and called on the Government to require online trolls to be blocked from accessing social media platforms via their IP addresses. It received over 133,000 signatures before closing early due to the 2019 general election.
Members will also be aware that Katie Price has spoken movingly about the vile abuse to which her son Harvey has been subjected. She and her mum Amy told the Committee that platforms fail to take down racist and anti-disability abuse aimed at Harvey and continue to respond poorly to complaints and reports about abusive posts. Katie’s petition was created in March 2021 and called on the Government to require social media users to link their accounts to a verified form of ID, so that abusive users can be more easily identified and prosecuted. The petition said:
“We have experienced the worst kind of abuse towards my disabled son and want to make sure that no one can hide behind their crime.”
It received almost 700,000 signatures by the time it closed in September 2021, with more than 500,000 coming in the weeks after the deplorable racist abuse aimed at England footballers last summer.
The inquiry that we have just concluded took place in the context of the Government’s draft Online Safety Bill. There is not time for me to talk through the Bill in detail today, but I know it will be the subject of intense scrutiny and debate in the coming months, and it is expected to impose new legal requirements on social media and other online platforms, including any platform that enables UK users to post content such as comments, images or videos, or to talk to others via messaging or forums online. I look forward to hearing the Government’s comments on that, although I appreciate that we await the publication of the Bill.
Online abuse is not something that just affects people in the public eye; it is something that most of us have at least witnessed, if not been subjected to ourselves. Ofcom’s pilot online harms survey in 2020-21 found that over a four-week period, 13% of respondents had experienced trolling, 10% offensive or upsetting language, and 9% hate speech or threats of violence. It is not an unfortunate side-effect of social media that victims can just shrug off. Although the abuse takes place online, we know that it can have a significant and devastating impact on the lives of victims offline. Glitch, which we spoke to as part of our inquiry, collated testimonies of online abuse as part of its report, “The Ripple Effect: Covid-19 and the Epidemic of Online Abuse”. One woman said:
“I shared a post on the rise in domestic abuse during lockdown and received dozens of misogynistic messages telling me I deserved to be abused, calling me a liar and a feminazi. My scariest experience, however, was when I shared a photograph of my young son and me. It was picked up by someone I assume to be on the far right, who retweeted it. Subsequently, throughout the day I received dozens of racist messages and threats directed at my son, at his father, and at me. It was terrifying.”
Sadly, distressing accounts of fear, isolation, difficulty sleeping, anxiety and depression are alarmingly familiar for people who are targeted for online abuse and harassment. However, the abuse is not directed equally, and the online world does not stand apart from real-world inequalities. Our inquiry found that women, disabled people, those from lesbian, gay, bisexual and transgender communities, and people from ethnic minority backgrounds are not only disproportionately targeted for abuse; often it is their very identities that are attacked. International research conducted by the Pew Research Center found that 70% of lesbian, gay and bisexual adults have encountered online harassment, compared with about 40% of straight adults.
We heard not only that incidents of antisemitic abuse have increased, but that Jewish women are disproportionately singled out for abuse. Similarly, although women are generally subjected to more online bullying than men are, ethnicity further influences a woman’s vulnerability. Amnesty International’s research suggests that black women are around 84% more likely than white women to be abused online. In this way, online abuse can reflect and amplify the inequalities that exist offline. It also reinforces marginalisation, discouraging the participation of such communities in online spaces. Demos, which we spoke to as part of our inquiry, catalogued the effect of misogynistic abuse on women’s mental health as part of its 2021 report, “Silence, Woman”. Many women quoted in the report talked of wanting to stop their social media presence altogether and leave activities that they otherwise enjoy. One said:
“At the moment, it makes me want to quit everything I do online.”
Another said:
“I can’t even look at social media because I’m so scared that I’ll see more sexism. It’s really affecting my mental health.”
It is essential that any measures to tackle online abuse also recognise and respond to inequalities in the volume and severity of that abuse. Therefore, our report makes several recommendations to Government. First, we recommend that a statutory duty be placed on the Government to consult with civil society organisations representing communities most affected by online harassment. These organisations best understand the needs of victims, and such consultation will ensure that legislation is truly effective in tackling online harms. Their involvement is an important counterbalance to the lobbying efforts of social media companies.
Secondly, we believe that the draft Online Safety Bill should align with protections already established in the Equality Act 2010 and hate crime laws, and should include abuse based on protected characteristics as priority harmful content. It should list hate crime and violence against women and girls offences as specific relevant offences within the scope of the Bill’s illegal content safety duties and specify the offences covered under those headings.
Finally, platforms should be required in their risk assessments to consider the differential risks faced by certain groups of users. Those requirements should be made explicit in the risk assessment duties set out in the draft Online Safety Bill. The evidence is clear: if someone is female, from an ethnic minority or from the LGBT community, they are much more likely to be abused online. Any legislation that assumes online abuse affects everybody equally, separate from real-world inequalities, does not address the problem. For the draft Online Safety Bill to be effective, it must require platforms to assess the vulnerability of certain communities online and tackle the unequal character of online abuse.
The related issues of online anonymity and identification of users also emerged as important and controversial issues, not only in our inquiry and the petitions that prompted it, but in the wider public and policy discussion about online abuse. The evidence we heard on the role of anonymity in facilitating abuse was mixed. Danny Stone of the Antisemitism Policy Trust, with whom I have worked closely as chair of the all-party parliamentary group against antisemitism, told us that the ability to post anonymously enables abusive behaviour and pointed to research demonstrating disinhibition effects from anonymity that can lead to increased hateful behaviour. Danny cited a figure suggesting that 40% of online antisemitic incidents over the course of a month originated from anonymous accounts. Nancy Kelley from Stonewall and Stephen Kinsella from Clean Up The Internet also argued that anonymity should be seen as a factor that increases the risk of users posting abuse and other harmful content.
However, other witnesses took different views, arguing that evidence of a causal link between anonymity and abusive behaviour is unclear. Chara Bakalis from Oxford Brookes University argued that
“focusing so much on anonymity and trying to make people say who they are online”
risks misconstruing the problem as a question of individual behaviour, rather than the overall toxicity of online spaces. We also heard that the ability to post anonymously can be important for vulnerable users. Ruth Smeeth from Index on Censorship told us how valuable it is for victims of domestic abuse to be able to share their stories without fear of being identified, and Ellen Judson from Demos warned that there is no way to reduce anonymity in a way that only hurts abusers.
Tackling the abuse itself, whether or not it comes from anonymous users, should therefore be the focus of efforts to resolve this problem. Allowing users to post anonymously always entails a risk. We recommend that online platforms should be required to specifically evaluate the links between anonymity and abusive content on their platforms, in order to consider what steps should be taken in response to it.
A related question is whether users should be required to identify themselves if they want to use social media, as a way of preventing online abuse. On Friday, the Government announced that the draft Online Safety Bill would require the largest social media companies to allow users to verify their identities on a voluntary basis, and users will therefore have the option to block other users who choose not to verify their identity. This is a positive step forward, giving users control over who they interact with and what they see online.
However, that would not be a silver bullet and should not be presented as such. It is an extra layer of protection, but it should not be the main focus for tackling online abuse. It absolutely does not absolve social media companies of their responsibility to make online spaces less toxic, which must be our focus, nor is it without risks. The Committee heard counter-arguments about users having to choose to use the option to block unverified users, which could mean that domestic abuse victims and other vulnerable users might be less likely to want to verify themselves, and therefore their voices will not be heard by other users. When Ofcom draws up its guidance, it must therefore offer ways to verify identity that are both as robust and as inclusive as possible.
Bobby Norris’s petition argues that it is “far too easy” for social media users who have been banned to simply create a new account and continue the abuse. Katie Price and her mum Amy also raised the issue of banned users who seemingly have no problem returning and behaving in the same appalling way. The major social media platforms told us that they already have rules against previously banned users returning, as well as the tools and data to identify users and prevent them from starting new accounts. However, the evidence that we heard does not support that.
Our inquiry found that preventing the return of abusive banned users is not a priority for social media companies, and some users are taking advantage of the lax enforcement of bans to continue abusing their victims. That is a significant failing, and platforms must be held accountable for it. Robust measures must be put in place to require social media platforms to demonstrate that they can identify previously banned users when they try to create new accounts and must discourage—or, even better, prevent—such accounts from posting abusive content.
Where a platform’s rules prohibit users from returning to the platform, they should be able to show that they are adequately enforcing those rules. The regulations must have teeth, so we also recommend that Ofcom should have the power to issue fines or take other enforcement action if a platform cannot demonstrate that.
We also took evidence from the Law Commission, which has recommended the creation of two new offences covering online communications. The proposed introduction of a harm-based offence would criminalise communications
“likely to cause harm to a likely audience”,
with harm defined as
“psychological harm, amounting at least to serious distress”.
An additional new offence covering threatening communications would criminalise communications that convey
“a threat of serious harm”,
such as grievous bodily harm or rape.
We also heard that if the proposed new offences were introduced, some abusive content may be treated as
“a more serious offence with a more serious penalty”
than if it were prosecuted under existing law. The Committee believes that is a positive step forward that would better capture the context-dependent nature of some online abuse. A photograph of someone’s front door, for example, may be entirely innocent in some contexts, but can take on quite sinister connotations in others, where it quite clearly implies a threat to a person’s safety.
The Government should also monitor how effectively any new communications offences, particularly the Law Commission’s proposed harm-based offence, protect people and provide redress for victims of online abuse, while also protecting freedom of expression online. The Government should publish an initial review of the workings and impact of any new communications offences within the first two years after they come into force. We have to make sure we take this opportunity to get this right and review it within two years to make sure it is as effective as it can be.
The Law Commission also recently concluded a review of hate crime law. It acknowledges two points highlighted in the Petitions Committee’s 2019 report: the unequal treatment of protected characteristics in hate crime law, and the failure to classify abuse of disabled people as a hate crime in cases where the offence may have been motivated by a sense that disabled people are easy targets, rather than being clearly motivated by hostility to disabled people.
The commission recommended extending existing aggravated hate crime offences to cover all characteristics currently protected under hate crime law, and reforming the motivation test for an offence to be treated as a hate crime, proposing an alternative test of motivation on the grounds of “hostility or prejudice”. The Government have stated that hate crime offences will be listed in the draft Online Safety Bill as priority illegal content. That means that the legislation will require platforms to take steps to proactively prevent users from encountering hate crime content.
There is some confusion, however, as we do not yet know if this will be limited to the existing stand-alone racially and religiously aggravated and stirring up hatred offences, or if the intention is to require platforms to proactively prevent users from encountering, for example, communications that involve hostility based on a protected characteristic such as disability. When the Minister responds, will he tell us what the Government expect the practical impact to be on how platforms are required to deal with, for example, the abuse of disabled people online?
Our inquiry heard again and again that changes to the law on online abuse risk becoming irrelevant, when we lack the capacity to even enforce the law as it stands. The uncomfortable truth is that, despite the dedication of our officers, police resources have been diminished to the point where even relatively simple crimes in the offline world go unsolved more often than not, according to Home Office statistics. Meanwhile, the proportion of reported crimes leading to successful prosecutions has reached an all-time low.
It is not surprising that we found such scepticism about the state’s capacity to enforce a law against criminal online abuse, which, in many cases, will be complex and time-consuming to investigate. Ruth Smeeth gave the following evidence to the Committee:
“When I got my first death threat in 2014, at that point the police did not have access to Facebook. It was banned…Although they can now see it, they do not have the resources available to help them prosecute. Whether the legislation is amended or not, it is so incredibly important that the criminal justice system can do its work. To do that, they need resources.”
While we believe the Law Commission’s proposals are eminently sensible, we are deeply concerned that the inadequate resourcing of our police and criminal justice system is the real elephant in the room. It could prevent us from dealing with the most serious forms of online abuse, such as death threats, the sending of indecent images and illegal hate speech.
I suspect that the Treasury is unlikely to look favourably on this resourcing issue any time soon, but the Committee would be neglecting its duty if we failed to draw attention to it. Resources in the police and criminal justice system have to be an essential part of the conversation on tackling online harms. If the Government are serious about tackling the most serious forms of online abuse, they must ensure that our police and courts have the resources to enforce the laws against it.
Although we talk a lot about Twitter, Facebook and TikTok in these discussions, abusive content hosted on smaller platforms also plays a significant role in encouraging prejudicial attitudes and real-world harm. Some of these platforms have become safe havens for some of the most troubling material available online, including Holocaust denial, terrorist propaganda films and covid-19 disinformation. From an internet browser today, anyone can easily access videos that show graphic footage of real-world violence and allege the attacks are part of a Jewish plot, or find an entire channel dedicated to the idea that black people are a biological weapon designed to wipe out western civilisation—I could go on. Danny Stone of the Antisemitism Policy Trust told the Committee:
“It is not just the Twitters and Facebooks of this world; there are a range of harms occurring across a range of different platforms. It is sinister, we have a problem and, at the moment, it is completely unregulated. Something needs to be done.”
We have heard no evidence to suggest that the negative effects of abuse on people’s wellbeing or freedom of expression are any less serious because the abuse comes from a smaller platform. Failure to address such content would risk significantly undermining the impact of the legislation. The duties set out in the draft Online Safety Bill relating to content that is “legal but harmful” to adults must apply to a wide range of platforms to ensure that abusive content is removed from the online sphere, not merely shifted from the larger platforms to darker corners of the internet.
The Committee therefore recommends that the draft Online Safety Bill require smaller platforms to take steps to protect users from content that is legal but harmful to adults, with a particular focus on ensuring that such platforms cannot be used to host content that has the potential to encourage hate or prejudice towards individuals or communities. They do not get a free pass just because they are smaller platforms.
The Minister has previously indicated that the Government have considered amending the conditions for classing a platform as category 1, so that it covers platforms with either a high number of users or posing a high risk of harm to users, rather than both conditions having to be met, as is the case in the draft Bill. We would welcome an update on whether the Government are minded to take that forward.
Legislators have a way of making the debate around online safety sound incredibly complicated and inaccessible. However, the fundamental issue is simple: too many people are exploiting online platforms to abuse others, and not enough has been done to protect the victims and create online spaces where people are free to express themselves in a constructive way. In the offline world, there are rules on acceptable behaviour and how we treat other people. We invest huge amounts of time and energy into ensuring that those rules are followed as much as possible. The same simply cannot be said of the digital sphere.
The online world has changed dramatically in such a short time. Our laws and regulations have not kept up. They have allowed a culture to develop where abuse has become normalised. It was deeply troubling to hear in every single one of the Committee’s school engagement sessions that pupils believe that abuse is just a normal part of the online experience. Is that really what we want our children to grow up believing? We can do so much better than that.
Social media companies make so much money. It is not too much to ask that they invest some of that in ensuring that their platforms are safe, and that people cannot inflict enormous harm on others without consequences. Of course, there will always be some abuse and inappropriate behaviour online, and nobody expects any Government to prevent it all, just as no home security system could stop every clever and determined burglar, but we can certainly do a lot better.
The Committee welcomes the opportunity provided by the draft Online Safety Bill, and our report sets out several ways the Government can improve the legislation. Ministers must recognise the disproportionate way that women, ethnic minorities, people with disabilities and LGBT people are targeted, so that nobody feels they cannot express themselves or engage with others online. We need to hold the platforms accountable if they fail to prevent banned users from rejoining, and we must ensure our police have the resources they need to tackle the most dangerous forms of online abuse. We look forward to the Government addressing our recommendations when their formal response to our report is published, and to the Minister’s response today.
Social media offers such fantastic opportunities to connect with others and is a real source of positivity and enjoyment for so many people. If we get the Bill right, we will be taking the first step towards bringing some much-needed light to the dark side of social media and amplifying the benefits of the unprecedented connectivity of the world we live in. Our report and today’s debate are important steps in bringing to Parliament the concerns of hundreds of thousands of members of the public who want a safer and more equal online world. We will continue to hold Ministers to account on behalf of the petitioners, so that the draft Online Safety Bill makes its way through Parliament and achieves what we know petitioners want.