Online Abuse Debate

Thursday 7th July 2016

Commons Chamber
Liz McInnes (Heywood and Middleton) (Lab):

I, too, thank the right hon. Member for Basingstoke (Mrs Miller) for initiating the debate. I also thank the Backbench Business Committee. I think it very important for us to raise these issues. I have been shocked by some of the examples that have been given today, but I am afraid I am going to add to them.

Online abuse is not a technological problem; it is a social problem that happens to be powered by technology. I will not deny that social media can be a force for good, disseminating information and allowing people to share jokes or simply keep in contact with friends and relatives. As has already been pointed out, we, as MPs, are encouraged to be as accessible as possible—to be out there with websites and our Facebook and Twitter pages, staying connected to our constituents and keeping them as well informed as possible—but more and more, especially in the case of female MPs, our “out-thereness” makes us a target for online abuse. Indeed, most prominent women in any field will have stories of vile comments posted to or about them, usually from anonymous sources. When it is allowed to rampage unchecked and unmoderated, social media is much more accurately titled “unsocial media”.

There is, of course, the “free speech” argument, which unfortunately appears to many people to be the divine right to say whatever is on one’s mind without any regard for the consequences. With free speech, however, comes the responsibility to deal with the consequences of one’s words. What concerns me, particularly in the case of Twitter and Facebook, is the apparent lack of a coherent policy on what constitutes “online abuse”. Let me give a few examples.

Twitter policy states:

“We do not tolerate behaviour that crosses the line into abuse, including behaviour that harasses, intimidates, or uses fear to silence another user’s voice.”

With that in mind, when I received a threat on Twitter during the referendum debate—

“We’ll see what you say when an immigrant rapes you or one of your kids”—

I reported it to Twitter, using its online pro forma. Surely this racist, violent and targeted abuse crossed the line into behaviour that harasses and intimidates, which Twitter policy claims to be against. But no; the response that I received from Twitter was

“it’s not currently violating the Twitter rules”.

The killers of Lee Rigby, who was from Middleton in my constituency, posted explicitly on Facebook what they were planning, yet that was never picked up and investigated. I recently reported a vile and misogynistic comment made about another female MP on Facebook. It read—and I quote selectively—

“She looks like”

an effing

“mutant and should be burnt at the stake”.

That comment, with its foul language and its violent categorisation of women as “witches” who need to be disposed of, received the following comment from Facebook:

“We’ve reviewed the comment you reported for promoting graphic violence and found that it doesn’t violate our community standards.”

The reply continued:

“Please let us know if you see anything else that concerns you. We want to keep Facebook safe and welcoming for everyone.”

Well, if that is Facebook’s idea of a safe and welcoming environment, I would not like to see what it considers to be a no-go area.

Seriously—and I am being 100% serious—the responsible thing for Twitter and Facebook to do is to use algorithms to identify hate speech. Words such as “Islamophobe”, “murder” and “rape” could then be picked up, and the accounts in question could be investigated. It is totally irresponsible of social media platforms to allow unchecked and unregulated discourse. That would not happen in any other walk of life.
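[The kind of automated flagging described above can be illustrated with a minimal sketch. This is an editorial illustration only, not any platform's actual system: the term list, function name and behaviour are hypothetical, and real moderation would require context-aware classifiers rather than a bare word filter. The point it demonstrates is the proposal in the speech—matching posts against a watch-list so that flagged accounts can be investigated by a human, not automatically removed.]

```python
# Illustrative sketch only: a naive keyword filter of the kind proposed
# in the speech. The term list and function name are hypothetical; real
# platforms would need context-aware models, not a bare word match.
import re

# Hypothetical watch-list (examples drawn from the speech itself).
FLAGGED_TERMS = {"murder", "rape", "burnt at the stake"}

def flag_for_review(post: str) -> bool:
    """Return True if the post contains any flagged term, meaning the
    account should be queued for human investigation (not auto-removed)."""
    text = post.lower()
    # \b word boundaries avoid matching flagged terms inside other words.
    return any(re.search(r"\b" + re.escape(term) + r"\b", text)
               for term in FLAGGED_TERMS)
```

Under this sketch, a post echoing the comment quoted earlier would be queued for review, while ordinary discussion would pass untouched; the human investigation step is what distinguishes flagging from censorship.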

Twitter and Facebook appear to rely solely on user reports of abuse and hate speech. They place the responsibility entirely on the user, and even then the pro-forma reporting procedure is often too simplistic to allow the actual problems and concerns to be accurately conveyed. Yes, the police can be notified, but we are all aware of the diminution in police numbers that has taken place under this Government and the previous coalition. I call on the Government to make funds available for training, and to increase police numbers in order to deal with online abuse. I was interested by the suggestion made earlier that social media platforms should be asked to pay a levy to fund those measures.

I have concentrated on abuse directed at female politicians, although I accept that online abuse takes many other forms and that many other groups are targeted, because this does seem to be a gender issue. Abuse is directed more towards female politicians than towards our male counterparts, and studies have shown that, in the United Kingdom, 82% of the abuse that is recorded comes from male sources. Social networks could take a strong and meaningful stance against harassment simply by applying the standards that we already apply in our public and professional lives. Wishing rape or other violence on women, or using derogatory slurs, would be unacceptable in most workplaces or communities, and those who engaged in such vitriol would be reprimanded or asked to leave. Why should that not be the response in our online lives?

Let us never forget that words carry weight, and that language has a consequence. Once it has been said, it cannot be unsaid. Whether it be uttered face to face or typed from behind a social media avatar, there is no hiding from meaning, and we should confront now the ever-spreading plague of misogyny, abuse and threats online.