It is a pleasure to serve under your chairmanship, Mr Dobbin.
On 19 December 2012, the Director of Public Prosecutions, Keir Starmer, issued interim guidelines on prosecuting cases involving communications sent via social media. It was a welcome move in the right direction and I hope that Parliament and the judiciary will study internet abuse more closely and begin, as I have been urging Ministers to do for some time, to distinguish between the different degrees of online abuse.
As I explained in my Adjournment debate in September last year, trolling first came to my attention following the tragic death of the Liverpool teenager Georgia Varley in October 2011. Since then, it has become apparent that there is no clear-cut definition of trolling. Too often, it is confused with cyber-bullying, cyber-stalking or even child grooming. Trolling is something very different. I would characterise it as something said online that carries online consequences and poses no offline, real-world risk to the individual in receipt of the message. Trolls demonstrate immoral and unethical behaviour and, quite often, as in the case of Georgia, they derive entertainment from an individual’s personal grief. In essence, the victims of trolls suffer psychological, not physical, abuse.
It is a growing problem in British society and one that Parliament and the legal process have been slow to recognise. I want to focus on concerns regarding the advice given to prosecutors, which suggests that messages of a grossly offensive, indecent, obscene or false nature do not meet the public interest test and are therefore unlikely to lead to prosecutions. Indeed, Crown Prosecution Service rules state:
“Just because the content expressed in the communication is in bad taste, controversial or unpopular, and may cause offence to individuals or a specific community, this is not in itself sufficient reason to engage the criminal law”.
That should concentrate the minds of parliamentarians. There are rightly concerns that the guidelines fail to articulate exactly what trolling is or to identify what kind of people are commonly its victims; instead, the guidelines attempt to issue a one-size-fits-all solution that is not contextually bound and takes no account of the personal relationships between the sender, the recipient and/or the subject matter.
The guidelines set out by the DPP that most relate to that issue are those referencing section 127 of the Communications Act 2003. The interim guidelines attempt to make a clear distinction between the different degrees of abuse sent via social media, and instruct prosecutors accordingly. For example, because of the seriousness of the potential offence threatened in a message as outlined in paragraphs 12(1), 12(2) and 12(3), such misdemeanours would be prosecuted robustly using, it has to be said, mainly legislation designed for offline offences.
We are advised that offences deemed to have been committed in accordance with paragraph 12(4), which are likely to be grossly offensive, indecent, obscene, menacing or false, are unlikely to lead to a prosecution despite the distress, hurt and needless anxiety that such contraventions can cause. That is where the guidelines have failed adequately to address the growing problem. Indeed, the directive highlights one of the major problems for prosecutors, because, in accordance with the guidelines, something said online is not punishable in law in the same way as something said offline. In essence, the guidelines fail to address the increasing grey area of trolling: the difficulty of proving what a troll intends and what a victim interprets the troll’s intention as being. That, coupled with the ease with which anonymity is afforded to social media users, has led to deliberately manipulative and deceptive behaviours with which prosecutors have not been able to get to grips. Put differently, there is a fundamental failure to grasp the intention of trolls: namely, it is their sole purpose on the internet grossly to offend with obscene messages. The DPP’s justification for what I perceive to be leniency is that it is not in the public interest to prosecute such people, which is germane to my critique of the guidelines because, in my opinion, they misinterpret the public interest.
In describing the dilemma, I am aware of a quote from an illustrious former Prime Minister and Merseyside MP, Harold Wilson. He said:
“I doubt whether any Member could provide a legal definition of what he means by the public interest, capable of covering changing national conditions and of being applied to all…cases”.—[Official Report, 22 April 1948; Vol. 449, c. 2035-2036.]
I do not profess to be legally trained, so I would no doubt fail the Wilson test, but as the use of social media increases exponentially, exposure to such gratuitous activity increases in proportion. It stands to reason, therefore, that the public’s propensity to want to see such crimes dealt with by the criminal justice system will have increased accordingly.
Paragraph 39 of the guidelines specifically addresses the question of public interest. The guidelines advise that if a suspect has taken swift action to remove the communication or has expressed genuine remorse, he or she should not face prosecution. I broadly welcome that clarification, but swift removal does not necessarily mean someone has not already been grossly offended. The internet allows individuals to build an audience of tens, hundreds or even thousands within a very short space of time. Under the guidelines, an individual troll could post a series of grossly offensive messages that are seen by many people, but simply removing the posts within a short space of time—and “swift removal” is not defined in the guidelines—makes it hard for action to be taken against that troll.
Similarly, deletion does not take away the possible psychological impact on someone who has already seen the message. Deleting a message from the internet does not delete it from someone’s mind. Additionally, the guidelines advise that if swift and effective action has been taken by others, such as a service provider, to remove the communication in question, or otherwise to block access to it, prosecution is not in the public interest. Surely that would depend on the particular type and frequency of such transgressions. The guidelines are a “get out of jail free” card that offers virtually no deterrent whatsoever. To all intents and purposes, prosecution can be avoided because of the discretion of others, which is something we should not endorse.
I congratulate the hon. Gentleman on securing this important debate. I have two points. First, he talks about people retweeting a message on Twitter. Does he agree that, whether someone is the first or the fifty-thousandth person to retweet a message, there should be equal liability? Otherwise some people would not be prosecuted because they retweeted later than others. Does he also agree that it is good to have a review of the guidelines? We need to make the public aware of how defamation laws apply to social media, otherwise people will say, “Well, I did not know.” The message has to go out to the country: “If you commit a crime or breach the defamation law, you will have to face the consequences.” We need new guidance, but, equally, people must be aware of the existing guidelines.
The hon. Gentleman will know that, in law, ignorance is no excuse. So someone could be prosecuted for defamation if they transgress the guidelines. On the first point, I believe that anyone who engages in social media should be aware of the social consequences of posting such tweets or Facebook statuses, as my assistant, who is a regular Facebook user, tells me they are called.
The guidelines advise that if a communication is not intended for a wide audience, nor is that the obvious consequence of sending the communication, the offender should not face prosecution, particularly where the intended audience did not include the victim or target of the communication in question. That is weak and, with respect, misunderstands social media. In the case of an RIP memorial page on Facebook, for example, a troll’s message on a status is not directed solely at the person who authored the status but is also directed at other people who have commented on the status and all those who have visual access to it. In the case of Georgia Varley, more than 4,500 people had liked her page and were therefore able to see a whole host of comments, unfortunately including those posted by trolls. That calls into question how the DPP uses the term “wide audience.” Does a prosecutor have to investigate the computer literacy of a suspect to determine whether they knew the exact size of the audience in receipt of their post? Additionally, the subject of an RIP memorial page on Facebook would, of course, be deceased. The intended victim of the troll, therefore, is not necessarily the deceased person but the reader of the message.
The guidelines also advise that if the content of a communication does not obviously go beyond what could conceivably be tolerable or acceptable in an open and diverse society that upholds and respects freedom of expression, no prosecution is necessary. Of course I agree with that, but I also believe that greater consideration must be given to enforcing the law when grossly offensive comments have been made. There should be some online equivalent to offences committed offline. Only two people in England have been successfully prosecuted and jailed for sending messages considered to be grossly offensive, indecent, obscene or menacing. Is that really an effective deterrent to the people who are sitting at their computers right now, contemplating sending a disgusting message that might cause gross offence?
I understand that questions have been raised about a person’s right to freedom of speech offline versus their right to freedom of expression online, and I accept that it is about proportionality. The reality, however, is that anyone who knows anything about trolling will say that the problem is that too much grossly offensive material exists, and that it would be far too resource-intensive for the criminal justice system to investigate each and every case.
I agree with paragraph 29 of the guidelines, which suggests that any parliamentary proposal would have to ensure that it did not have
“a chilling effect on free speech”.
We must take into consideration the European Court of Human Rights case law, which protects an individual’s right to speech that is offensive, shocking or disturbing. There is still a debate to be had about whether free speech even applies to the sending of communications via social media, or whether it is classed as freedom of expression, which is not an absolute right.
We are talking about vile, insulting and unacceptable behaviour, such as the comments that I have seen posted on RIP memorial pages on Facebook and that were revealed in a BBC “Panorama” documentary. We are not talking about someone’s legitimate right to express themselves freely. There is a world of difference between a fair comment and a wilful denigration without validity that aims simply to cause as much hurt and offence as possible. If we try to protect trolls’ freedom to offend grossly, we are essentially defending the indefensible.
The guidelines clearly give preference to physical abuse or the risk of physical abuse over psychological abuse. When I met the Crown Prosecution Service, its view was that, given the complexity of online abuse, the police are not afforded enough time to compile the evidence necessary to take a case to court. Often, a maximum of six months is not long enough to gather sufficient proof of the alleged offence for a successful prosecution. In any such investigation, the police must routinely combat fake accounts, fake identities, fake e-mail addresses and issues with mobile communications, such as pay-as-you-go devices. Deception makes it difficult for officers to know where to start when looking for a troll hiding behind the anonymity of a computer.
However, that should not prevent us from trying to rectify the problem and eradicate the grey area that I have described. In fact, as I have said, I believe that granting the police and the CPS additional time to gather evidence for court cases would allow them to obtain evidence that meets the test of what is grossly offensive and even expose patterns of behaviour in some individuals that could lead to criminal prosecution.
As well as additional time to compile evidence, the police need innovative approaches to assist them. For example, the university of Central Lancashire is in the early stages of looking at ways to identify trolls through written word patterns. Dr Claire Hardaker, a lecturer in linguistics and English language at the university, said:
“Everyone has a unique way of writing, of putting certain words together, which is subconscious. Many teenagers say they are able to identify who sent a text to them just by the style of writing and word habits or the way the words are written. Someone might be pretending to be someone else, but by analysing the way they write online, we can determine a probable age, gender, even a probable region from which they come.”
Such creative approaches could be invaluable in convicting trolls. However, it is also fair to say that any such innovation will count for nothing until the DPP can adequately provide prosecutors with a definition of trolling that is distinct from cyber-bullying, cyber-stalking or grooming and under which it can be robustly prosecuted where appropriate.
One conclusion that I reached early in my investigations into trolling is that a multi-agency approach is needed to tackle the problem effectively. As I have said before, I am of the firm belief that the way to deter individuals from sending grossly offensive comments on social media is to change the culture of online users. That in turn requires a clear lead from the judicial system. That does not necessarily mean changing the law, but it does mean changing the application of the law, which the guidelines fail to do. The final part of Harold Wilson’s quotation in the Commons is:
“In the last resort this House is, and must be, the authority which decides whether or not any particular practice is in the public interest.”—[Official Report, 22 April 1948; Vol. 449, c. 2037.]
He was, of course, right.
I conclude by thanking hon. Members for taking part and by asking the Solicitor-General the following questions. How does he define internet trolling? Does he agree with the DPP’s assessment that messages sent via social media that are grossly offensive, indecent, obscene or false are unlikely to warrant prosecution because it is not in the public interest to do so? Does he agree with the approach set out in paragraph 12 for the initial assessment of offences that may have been committed using social media? What steps is he taking to work with prosecutors to find new ways to identify trolls, such as the scheme devised by the university of Central Lancashire? Finally, will he consider my request to increase the period of time that the police have to collect their evidence on trolls before a case must be brought before the courts?