Online Safety Bill Debate
Full Debate: Baroness Buscombe
Baroness Buscombe (Conservative - Life peer), debates with the Department for Digital, Culture, Media & Sport
(1 year, 7 months ago)
Lords Chamber

My Lords, before speaking to my Amendment 137, I want to put a marker down to say that I strongly support Amendment 135 in the name of my noble friend Lord Moylan. I will not repeat anything that he said but I agree with absolutely every word.
Amendment 137 is in my name and that of my noble and learned friend Lord Garnier and the noble Lord, Lord Moore of Etchingham. This amendment is one of five which I have tabled to meet a core purpose of the Bill. In the words of my noble friend the Minister in response to Amendment 1, it is
“to protect users of all ages from being exposed to illegal content”—[Official Report, 19/4/23; col. 724.]
—in short, to ensure that what is illegal offline is illegal online.
If accepted, this small group of amendments would, I strongly believe, make a really important difference to millions of people’s lives—people who are not necessarily listed in Clause 12. I therefore ask the Committee to allow me to briefly demonstrate the need for these amendments through the prism of millions of people and their families working and living in rural areas. They are often quite isolated and working alone in remote communities, and are increasingly at risk of or are already suffering awful online abuse and harassment. This abuse often goes way beyond suffering; it destroys businesses and a way of life.
I find it extraordinary that the Bill appears to say nothing about livelihoods. It is all about focusing on feelings, which of course are important—and the most important focus is children—but people’s businesses and livelihoods are being destroyed through online abuse.
Research carried out by the Countryside Alliance has revealed a deeply disturbing trend online that appears to be disproportionately affecting people who live in rural areas and who are involved in rural pursuits. Beyond direct abuse, a far more insidious tactic that activists have adopted involves targeting businesses involved in activities of which they disapprove, such as livestock farming or hosting shoots. They post fake reviews on platforms including Tripadvisor and Google Maps, and their aim is to damage the victim, their business and their reputation by, to put it colloquially, trashing their business and thereby putting off potential customers. This is what some call trolling.
Let me be clear that I absolutely defend, to my core, the right to freedom of expression and speech, and indeed the right to offend. Just upsetting someone is way below the bar for the Bill, or any legislation. I am deeply concerned about the hate crime—or non-crime—issue we debated yesterday; in fact, I put off reading the debate because I so disagree with this nonsense from the College of Policing.
Writing a negative review directly based on a negative experience is entirely acceptable in my book, albeit unpleasant for the business targeted. My amendments seek to address something far more heinous and wrong, which, to date, can only be addressed as libel and, therefore, through the civil courts. Colleagues in both your Lordships’ House and in another place shared with me tremendously upsetting examples from their constituents and in their neighbourhoods of how anonymous activists are ruining the lives of hard-working people who love this country and are going the extra mile to defend our culture, historic ways of life and freedoms.
Fortunately, through the Bill, the Government are taking an important step by introducing a criminal offence of false communications. With the leave of the Committee, I will briefly cite and explain the other amendments in order to make sense of Amendment 137. One of the challenges of the offence of false communications is the need to recognise that so much of the harm that underpins the whole reason why the Bill is necessary is the consequence of allowing anonymity. It is so easy to destroy and debilitate others by remaining anonymous and using false communications. Why be anonymous if you have any spine at all to stand up for what you believe? It is not possible offline—when writing a letter to a newspaper, for example—so why is it acceptable online? The usual tech business excuse of protecting individuals in rogue states is no longer acceptable, given the level of harm that anonymity causes here at home.
Therefore, my Amendment 106 seeks to address the appalling effect of harm, of whatever nature, arising from false or threatening communications committed by unverified or anonymous users—this is what we refer to as trolling. Amendments 266 and 267, in my name and those of my noble and learned friend Lord Garnier and my noble friend Lord Leicester, would widen the scope of this new and welcome offence of false communications to include financial harm, and harm to the subject of the false message arising from its communication to third parties.
The Bill will have failed unless we act beyond feelings and harm to the person and include loss of livelihood. As I said, I am amazed that it is not front and centre of the Bill after safety for our children. Amendment 268, also supported by my noble and learned friend, would bring within the scope of the communications offences the instigation of such offences by others—for example, Twitter storms, which can involve inciting others to make threats without doing so directly. Currently, we are unsure whether encouraging others to spread false information—for example, by posting fake reviews of businesses for ideologically motivated reasons—would become an offence under the Bill. We believe that it should, and my Amendment 268 would address this issue.
I turn briefly to the specifics of my Amendment 137. Schedule 7 lists a set of “priority offences” that social media platforms must act to prevent, and they must remove messages giving rise to certain offences. However, the list does not include the new communications offences created elsewhere in Part 10. We believe that this is a glaring anomaly. If there is a reason why the new communications offences are not listed, it is important that we understand why. I hope that my noble friend the Minister can explain.
The practical effect of Amendment 137 would be to include the communications offences introduced in the Bill and communications giving rise to them within the definition of “relevant offence” and “priority illegal content” for the purposes of Clause 53(4) and (7) and otherwise.
I ask the Committee to have a level of imagination here because I have been asked to read the speech of the noble Viscount, Lord Colville—
Online Safety Bill Debate
(1 year, 6 months ago)
Lords Chamber

My Lords, the noble Baroness, Lady Kidron, said words to the effect that perhaps we should begin by having particular regard for certain vulnerabilities, but we are dealing with primary legislation and this really concerns me. Lists such as that in Clause 12 are really dangerous. It is not a great way to write law. We could be living with this law for a long time.
I took the Communications Act 2003 through for Her Majesty’s Opposition, and we were doing our absolute best to future-proof the legislation. There was no mention of the internet in that piece of legislation. With great respect to the noble Lord, Lord McNally, with whom I sparred in those days, it was not that Act that introduced Ofcom but a separate Act. The internet was not even mentioned until the late Earl of Northesk introduced an amendment containing the word “internet” in relation to the investigatory powers legislation.
The reality is that we already had Facebook, and tremendous damage being done through it to people such as my daughter. Noble Lords will remember that in the early days it was Oxford, Cambridge, Yale and Harvard; that is how it all began. It was an amazing thing, and we could not foresee what would happen but there was a real attempt to future-proof. If you start having lists such as in Clause 12, you cannot just add on or change. Cultural mores change. This list, which looks great in 2023, might look really odd in about 2027. Different groups will have emerged and say, “Well, what about me, what about me?”.
I entirely agree with the noble Baroness, Lady Fox. Who will be the decider of what is right, what is rude or what is abusive? I have real concerns with this. The Government have had several years to get this right. I say that with great respect to my noble friend the Minister, but we will have to think about these issues a little further. The design of the technology around all this is what we should be imposing on the tech companies. I was on the Communications and Digital Committee in 2020 when that was a key plank of our report, following the inquiry that we carried out and prior to the Joint Committee, then looking at this issue of “legal but harmful”, et cetera. I am glad that was dropped because—I know that I should not say this—when I asked a civil servant what was meant by “harmful”, he said, “Well, it might upset people”.
It is a very subjective thing. This is difficult for the Government. We must do all we can to support the Government in trying to find the right solutions, but I am sorry to say that I am a lawyer—a barrister—and I worry. We are trying to make things right but, remember, once it is there in an Act, it is there. People will use that as a tool. In 2002, at New Scotland Yard, I was introduced to an incredible website about 65 ways to become a good paedophile. Where does that fit in Clause 12? I have not quite worked that out. Is it sex? What is it? We have to be really careful. I would prefer having no list and making it more general, relying on the system to allow us to opt in.
I support my noble friend Lady Morgan’s amendment on this, which would make it easier for people to say, “Well, that’s fine”, but would not exclude people. What happens if you do not fit within Clause 12? Do you then just have to suck it up? That is not a very House of Lords expression, but I am sure that noble Lords will relate to it.
We have to go with care. I will say a little more on the next group of amendments, on anonymity. It is really hard, but what the Government are proposing is not quite there yet.
That seemed to be provoked by me saying that we must look after the vulnerable, but I am suggesting that we use UK law and the rights that are already established. Is that not better than having a small list of individual items?
My Lords, I support the noble Baroness, Lady Buscombe, on the built-in obsolescence of any list. It would very soon be out of date.
I support the amendments tabled by the noble Lord, Lord Clement-Jones, and by the noble Baroness, Lady Morgan of Cotes. They effectively seek a similar aim. Like the noble Baroness, Lady Fraser, I tend towards those tabled by the noble Lord, Lord Clement-Jones, because they seem clearer and more inclusive, but I understand that they are trying for the same thing. I also register the support for this aim of my noble friend Lady Campbell of Surbiton, who cannot be here but who, I suspect, is listening in. She was very keen that her support for this aim was recorded.
The issue of “on by default” inevitably came up at Second Reading. Then and in subsequent discussions, the Minister reiterated that a “default on” approach to user empowerment tools would negatively impact people’s use of these services. Speaking at your Lordships’ Communications and Digital Committee, on which I sat at the time, Minister Scully went further, saying that the strongest option, of having the settings off in the first instance,
“would be an automatic shield against people’s ability to explore what they want to explore on the internet”.
According to the Government’s own list, this was arguing for the ability to explore content that abuses, targets or incites hatred against people with protected characteristics, including race and disability. I struggle to understand why protecting this right takes precedence over ensuring that groups of people with protected characteristics are, well, protected. That is our responsibility. And it does take precedence, because switching controls one way is not exactly the same as switching them the other way. It is easy to think so, but the noble Baroness, Lady Parminter, explained very clearly that it is not the same. It is undoubtedly easier for someone in good health and without mental or physical disabilities to switch controls off than it is for those with disabilities or vulnerabilities to switch them on. That is self-evident.
It cannot be right that those most at risk of being targeted online, including some disabled people—not all, as we have heard—and those with other protected characteristics, will have the onus on them to switch on the tools to prevent them seeing and experiencing harm. There is a real risk that those who are meant to benefit from user empowerment tools, those groups at higher risk of online harm, including people with a learning disability, will not be able to access the tools because the duties allow category 1 services to design their own user empowerment tools. This means that we are likely to see as many versions of user empowerment tools as there are category 1 services to which this duty applies.
Given what we know about the nature of addiction and self-harm, which has already been very eloquently explained, it surely cannot be the intention of the Bill that those people who are in crisis and vulnerable to eating disorders or self-harm, for example, will be required to seek and activate a set of tools to turn off the very material that feeds their addiction or encourages their appetite for self-harm.
The approach in the Bill does little to prevent people spiralling down this rabbit hole towards ever more harmful content. Indeed, instead it requires people to know that they are approaching a crisis point, and to have sufficient levels of resilience and rationality to locate the switch and turn on the tools that will protect them. That is not how the irrational or distressed mind works.
So all the evidence that we have about the existence of harm arising from mental states, which has been so eloquently set out in introducing the amendments—I refer again to my noble friend Lady Parminter, because that is such powerful evidence—tips the balance, I believe, in favour of setting the tools to be on by default. I very much hope the Minister will listen and heed the arguments we have heard set out by noble Lords across the Committee, and come back with some of his own amendments on Report.
My Lords, I am going to endeavour to be relatively brief. I rise to move Amendment 38 and to speak to Amendments 39, 139 and 140 in this group, which are in my name. All are supported by my noble friend Lord Vaizey of Didcot, to whom I am grateful.
Amendments 38 and 39 relate to Clause 12. They remove subsections (6) and (7) from the Bill; that is, the duty to filter out non-verified users. Noble Lords will understand that this is different from the debate we have just had, which was about content. This is about users and verification of the users, rather than the harm or otherwise of the content. I am sure I did not need to say that, but perhaps it helps to clarify my own thinking to do so. Amendments 139 and 140 are essentially consequential but make it clear that my amendments do not prohibit category 1 services from offering this facility. They make it a choice, not a duty.
I want to make one point only in relation to these amendments. It has been well said elsewhere that this is a Twitter-shaped Bill, but it is trying to apply itself to a much broader part of the internet than Twitter, or things like it. In particular, community-led services such as Wikipedia, to which I have made reference before, operate on a totally different basis. The Bill seeks to create a facility whereby members of the public like you and me can, first, say that we want the provider to offer a facility for verifying those who might use their service and, secondly, say that we want to see material only from those verified accounts.

However, the contributors to Wikipedia are not verified, because Wikipedia has no system to verify them, and it would therefore be impossible for Wikipedia, as a category 1 service, to comply with this condition on its current model, which is a non-commercial, non-profit one, as noble Lords know from previous comments. It would not be able to operate this clause. It would have to require every contributing editor to Wikipedia to be verified, which would be extremely onerous, or it would have to make verification optional, which would be difficult and would lead to the bizarre conclusion that you could open an article on Wikipedia and find that some of its words or sentences were blocked, and you could not read them because those amendments to the article had been made by someone who had not been verified. Of course, putting a system in place to allow that absurd outcome would itself be an impossible burden on Wikipedia.
My complaint—as always, in a sense—about the Bill is that it misfires. Every time you touch it, it misfires in some way because it has not been properly thought through. It is perhaps trying to do too much across too broad a front, when it is clear that the concern of the Committee is much narrower than trying to bowdlerise Wikipedia articles. That is not the objective of anybody here, but it is what the Bill is tending to do.
I will conclude by saying—I invite my noble friend to comment on this if he wishes; I think he will have to comment on it at some stage—that in reply to an earlier Committee debate, I heard him say somewhat tentatively that he did not think that Wikipedia would qualify as a category 1 service. I am not an advocate for Wikipedia; I am just a user. But we need to know what the Government’s view is on the question of Wikipedia and services like it. Wikipedia is the only community-led service, I think, of such a scale that it would potentially qualify as category 1 because of its size and reach.
If the Minister’s view is that Wikipedia would not qualify as a category 1 service—in which case, my amendments are irrelevant because it would not be caught by this clause—then he needs to say so. More than that, he needs to say on what basis it would not qualify as a category 1 service. Would it be on the face of the Bill? If not, would it be in the directions given by the Secretary of State to the regulator? Would it be a question of the regulator deciding whether it was a category 1 service? Obviously, if you are trying to run an operation such as Wikipedia with a future, you need to know which of those things it is. Do you have legal security against being determined as a category 1 provider or is it merely at the whim—that is not the right word; the decision—of the regulator in circumstances that may legitimately change? The regulator may have a good or bad reason for changing that determination later. You cannot run a business not knowing these things.
I put it to noble Lords that this clause needs very careful thinking through. If it is to apply to community-led services such as Wikipedia, it is an absurdity. If it is not to apply to them because what I think I heard my noble friend say pertains and they are not, in his view, a category 1 service, why are they not a category 1 service? What security do they have in knowing either way? I beg to move.
My Lords, I will speak to Amendment 106 in my name and the names of my noble and learned friend Lord Garnier and the noble Lord, Lord Moore of Etchingham. This is one of five amendments focused on the need to address the issue of activist-motivated online bullying and harassment and thereby better safeguard the mental health and general well-being of potential victims.
Schedule 4, which defines Ofcom’s objectives in setting out codes of practice for regulated user-to-user services, should be extended to require the regulator to consider the protection of individuals from communications offences committed by anonymous users. The Government clearly recognise that there is a threat of abuse from anonymous accounts and have taken steps in the Bill to address that, but we are concerned that their approach is insufficient and may be counterproductive.
I will explain. The Government’s approach is to require large social media platforms to make provision for users to have their identity verified, and to have the option of turning off the ability to see content shared by accounts whose owners have not done this. However, all this would mean is that people could not see abuse being levelled at them. It would not stop the abuse happening. Crucially, it would not stop other people seeing it, or the damage to his or her reputation or business that the victim may suffer as a result. If I am a victim of online bullying and harassment, I do not want to see it, but I do not want it to be happening at all. The only means I have of stopping it is to report it to the platform and then hope that it takes the right action. Worse still, if I have turned off the ability to see content posted by unverified—that is, anonymous—accounts, I will not be able to complain to the platform as I will not have seen it. It is only when my business goes bust or I am shunned in the street that I realise that something is wrong.
The approach of the Bill seems to be that, for the innocent victim—who may, for example, breed livestock for consumption—it is up to that breeder to be proactive in correcting harm already done by someone who does not approve of eating meat. This makes a nonsense of the law. This is not how we make laws in this country—until now, it seems. Practically speaking, the worst that is likely to happen is that the platform might ban the abuser’s account. However, if their victims have had no opportunity to read the abuse or report it, even that fairly low-impact sanction could not be levelled against them. In short, the Bill’s current approach, I am sorry to say, would increase the sense of impunity, not lessen it.
One could argue that, if a potential abuser believes that their victim will not read their abuse, they will not bother issuing it. Unfortunately, this misunderstands the psyche of the online troll. Many of them are content to howl into the void, satisfied that other people who have not turned on the option to filter out content from unverified accounts will still be able to read it. The troll’s objective of harming the victim may be partially fulfilled as a result.
There is also the question of how much uptake there will be of the option to verify one’s identity, and numerous questions about the factors that this will depend on. Will it be attractive? Will there be a cost? How quick and efficient will the process be? Will platforms have the capacity to implement it at scale? Will it have to be done separately for every platform?
If uptake of verification is low, most people simply will not use the option to filter out content from unverified accounts, even if it means that they remain more susceptible to abuse, since they would be cutting themselves off from most other users. Clearly, that is not an option for anyone using social media for any promotional purpose. Even those who use it for purely social reasons will find that they have friends who do not want to be verified. Fundamentally, people use social media because other people use it. Carving oneself off from most of them defeats the purpose of the exercise.
It is not clear what specific measures the Bill could take to address the issue. Conceivably, it could simply ban online platforms from maintaining user accounts whose owners have not had their identities verified. However, this would be truly draconian and most likely lead to major platforms exiting the UK market, as the noble Baroness, Lady Fox, has rightly argued in respect of other possible measures. It would also be unenforceable, since users could simply turn on a VPN, pretend to be from some other country where the rules do not apply and register an account as though they were in that country.
There are numerous underlying issues that the Bill recognises as problems but does not attempt to prescribe solutions for. Its general approach is to delegate responsibility to Ofcom to frame its codes of practice for operators to follow in order to tackle these problems effectively. Specifically, it sets out a list of objectives that Ofcom, in drawing up its codes of practice, will be expected to meet. The protection of users from abuse, specifically by unverified or anonymous users, would seem to be an ideal candidate for inclusion in that list of objectives. If required to do so, Ofcom could study the issue closely and develop more effective solutions over time.
I was pleased to see, in last week’s Telegraph, an article that gave an all too common example: a chef running a pub in Cornwall has suffered what amounts to vicious online abuse from a vegan who obviously does not approve of the menu, and who is damaging the business’s reputation and putting the chef’s livelihood at risk. This is just one tiny example, if I can put it that way, of the many thousands that are happening all the time. Some 584 readers left comments, and just about everyone wrote in support of the need to do something to support that chef and tackle this vicious abuse.
I return to a point I made in a previous debate: livelihoods, which we are deeply concerned about, are at stake here. I am talking not about big business but about individuals and small and family businesses that are suffering—beyond abuse—loss of livelihood, financial harm and/or reputational damage to business, and the knock-on effects of that.
Online Safety Bill Debate
(1 year, 6 months ago)
Lords Chamber

My Lords, before we continue this debate, I want to understand why we have changed the system so that we break part way through a group of amendments. I am sorry, but I think this is very poor. It is definitely a retrograde step. Why are we doing it? I have never experienced this before. I have sat here and waited for the amendment I have just spoken to. We have now had a break; it has broken the momentum of that group. It was even worse last week, because we broke for several days half way through the debate on an amendment. This is unheard of in my memory of 25 years in this House. Can my noble friend the Minister explain who made this decision, and how this has changed?
I have not had as long in your Lordships’ House, but this is not unprecedented, in my experience. These decisions are taken by the usual channels; I will certainly feed that back through my noble friend. One of the difficulties, of course, is that because there are no speaking limits on legislation and we do not know how many people want to speak on each amendment, the length of each group can be variable, so I think this is for the easier arrangement of dinner-break business. Also, for the dietary planning of those of us who speak on every group, it is useful to have some certainty, but I do appreciate my noble friend’s point.
Okay; I thank my noble friend for his response. However, I would just say that we never would have broken like that, before 7.30 pm. I will leave it at that, but I will have a word with the usual channels.
My Lords, I rise to speak to Amendments 141 and 303 in the name of the noble Lord, Lord Stevenson. Before I do, I mention in passing how delighted I was to see Amendment 40, which carries the names of the Minister and the noble Lord, Lord Stevenson—may there be many more like that.
I am concerned that without Amendments 141 and 303, the concept of “verified” is not really something that the law can take seriously. I want to ask the Minister two rather technical questions. First, how confident can the Government and Ofcom be that with the current wording, Ofcom could form an assessment of whether Twitter’s current “verified by blue” system satisfies the duty in terms of robustness? If it does not, does Ofcom have the power to send it back to the drawing board? I am sure noble Lords understand why I raise this: we have recently seen “verified by blue” ticks successfully bought by accounts impersonating Martin Lewis, US Senators and Putin propagandists. My concern is that in the absence of a definition of verification in the Bill such as the one proposed in Amendments 141 and 303, where in the current wording does Ofcom have the authority to say that “verified by blue” does not satisfy the user verification duty?
I am sorry to interrupt the noble Lord, but I would like to ask him whether, when the Joint Committee was having its deliberations, it ever considered, in addition to people’s feelings and hurt, their livelihoods.
Of course. I think we looked at it in the round and thought that stripping away anonymity could in many circumstances be detrimental to those, for instance, working in hostile regimes or regimes where human rights were under risk. We considered a whole range of things, and the whole question about whether you should allow anonymity is subject to those kinds of human rights considerations.
I take the noble Baroness’s point about business, but you have to weigh up these issues, and we came around the other side.
Does the noble Lord not think that many people watching and listening to this will be thinking, “So people in far-off regimes are far more important than I am—I who live, work and strive in this country”? That is an issue that I think was lacking through the whole process and the several years that this Bill has been discussed. Beyond being hurt, people are losing their livelihoods.
I entirely understand what the noble Baroness is saying, and I know that she feels particularly strongly about these issues given her experiences. The whole Bill is about trying to weigh up different aspects—we are on day 5 now, and this has been very much the tenor of what we are trying to talk about in terms of balance.
Online Safety Bill Debate
(1 year, 5 months ago)
Lords Chamber

My Lords, I shall speak to Amendments 266 and 267, to which my noble and learned friend Lord Garnier, my noble friend Lord Leicester and the noble Lord, Lord Clement-Jones, have added their names. They are the final two amendments from a group that was also supported by the noble Lord, Lord Moore of Etchingham, and the noble Baroness, Lady Mallalieu.
The purpose of this Bill is to make the internet a safer place. The new offence of false communications is just one of the important means it seeks to use with the objective of making it an offence to harm people by telling lies online—and this is welcome. It is right that the Bill should focus on preventing harms to individuals. One of the most important guarantors that a person can have of good health and well-being is their freedom to pursue their livelihood unimpeded by illegitimate hostile action. Attacks on people’s livelihoods have the potential to wreak unimaginable harm on their mental and physical health, but these attacks are also among the easiest to perpetrate through the internet. My amendments seek to prevent such harms by protecting people who run, or work for, businesses that have been targeted with malicious fake reviews posted to online platforms, such as Google Maps or TripAdvisor. These platforms already fall within scope of this Bill in hosting user-generated content.
By referencing fake reviews, I am not referring to legitimate criticism, fair comment or even remarks about extraneous matters such as the owners’ pastimes or opinions, provided that the reviewer is honest about the nature of their relationship with the business. If someone wants to write a review of a business which they admit they have never patronised, and criticise it based on such factors, this would not be illegal, but it would very likely breach the platform’s terms of service and be removed. Review platforms are not the proper venue for such discussions; their role is to let people share opinions about a business’s products and services, but policing that is up to them.
The malicious fake reviews that I am referring to are those that are fundamentally dishonest. People with grudges to bear know that the platforms they use to attack their victims will remove any reviews that are clearly based on malice rather than a subjective assessment of quality. That is why they have come to adopt more insidious tactics. Without mentioning the real reason for their hostility towards a business and/or its staff, they purport to be customers who have had bad experiences. Of course, in almost every case, the reviewer has never so much as gone near the business. The review is therefore founded on lies.
This is not merely an abstract concern. Real people are being really harmed. Noble Lords will know that in earlier debates I used the prism of rural communities to amplify the objective of my amendments. Only yesterday, during Oral Questions in your Lordships’ House, there was an overwhelming collective consensus that we need to do more to protect the livelihoods of those working so hard in rural communities. My simple amendments would make a massive difference to their well-being.
The Countryside Alliance recently conducted a survey that found innumerable instances of ideologically motivated fake reviews targeted at rural businesses; these were often carried out by animal rights extremists and targeted businesses and their employees who sometimes participated in activities to which they objected, such as hosting shoots or serving meat. In April this year, the Telegraph reported on one case of a chef running a rural pub whose business was attacked with fake reviews by a vegan extremist who had verifiably never visited the pub, based initially on the man’s objection to him having posted on social media a picture of a roast chicken. The chef said these actions were making him fear for his livelihood as his business fought to recover from the pandemic. He is supporting my amendments.
Amendment 266 would therefore simply add the word “financial” to “physical” and “psychological” in the Bill’s definition of the types of harm that a message would need to cause for it to amount to an offence. This amendment is not an attempt to make the Bill into something it was not designed to be. It is merely an attempt to protect the physical and mental health of workers whose businesses are at risk of attack through malicious fake reviews. It may be that the victim of such an attack could argue that a fake review has caused them physical or psychological harm, as required under the Bill as currently drafted—indeed, it would likely do so. The reason for adding financial harm is to circumvent the need for victims to make that argument to the police, the police to the Crown Prosecution Service and then the prosecutors in front of the jury.
That links to Amendment 267, which would enlarge the definition of the parties who may be harmed by a message for it to amount to an offence. Under the Bill, a message must harm its intended, or reasonably foreseeable, recipient; however, it is vital to understand that a person need not receive a message to be harmed by it. In the case of fake reviews, the victim is harmed because the false information has been seen by others; he or she is not an intended recipient. The amendment would therefore include harms to the person or organisation to which the information—or, in reality, disinformation—contained within it relates.
My principal objective in bringing these amendments is not to create a stick with which to beat those who wish harm to others through malicious fake reviews; rather—call me old-fashioned—it is about deterrence. It is to deter this conduct by making it clear that it is not acceptable and would, if necessary, be pursued by police and through the courts under criminal law. It is about seeing to it that malicious fake reviews are not written and their harm is not caused.
I am aware that the Government have responded to constituents who have contacted their MPs in support of these amendments to say that they intend to act through the Competition and Markets Authority against businesses that pay third parties to write fake disparaging reviews of their competitors. I must stress to my noble friend the Minister, with respect, that this response misunderstands the issue. While there is a problem with businesses fraudulently reviewing their competitors to gain commercial advantage—and it is welcome that the Government plan to act on it—I am concerned with extreme activists and other people with ideological or personal axes to grind. These people are not engaged in any relevant business and are not seeking to promote a competitor by comparison. It is hard to see how any action by the Competition and Markets Authority could offer an effective remedy. The CMA exists to regulate businesses, not individual cranks. Further, this is not a matter of consumer law.
If the Government wish to propose some alternative means of addressing this issue besides my amendments, I and those who have added their names—and those who are supporters beyond your Lordships’ House—would be pleased to engage with Ministers between now and Report. In that regard though, I gently urge the Government to start any conversation from a position of understanding—really understanding—what the problem is. I fully appreciate that the purpose of this Bill is to protect individuals, and that is the key point of my amendments. My focus is upon those running and working in small businesses who are easy targets of this form of bullying and abuse. It is entirely in keeping with the spirit and purpose of the Bill to protect them.
Finally, I must be clear that the prism of what happens in our rural areas translates directly to everything urban across the UK. A practical difference is that people working in remote areas are often very isolated and find this intrusion into their life and livelihood so hard to cope with. We live in a pretty unpleasant world that is diminishing our love of life—that is why this Bill is so necessary.
My Lords, I wish to add to what my noble friend Lady Buscombe has just said, but I can do so a little more briefly, not least because she has made all the points that need to be made.
I would disagree with her on only one point, which is that she said—I am not sure that she wanted to be called old-fashioned, but she certainly wanted to have it explained to us—that the purpose of our amendment was to deter people from making malicious posts to the detriment of businesses and so forth. I think it is about more than deterrence, if I may say so. It is about fairness and justice.
It is very natural for a civilised, humane person to want to protect those who cannot protect themselves because of the anonymity of the perpetrator of the act. Over the last nearly 50 years, I have practised at the media Bar, including in cases based on the tort of malicious falsehood, trade libel or slander of goods. Essentially, my noble friend and I are trying to bring into the criminal law the torts that I have advised on and appeared in cases involving, so that the seriousness of the damage caused by the people who do these anonymous things can be visited by the weight of the state as the impartial prosecutor.