Lords Chamber

My Lords, my noble friend Lord Stevenson, who tabled this amendment, unfortunately cannot be with us today as he is off somewhere drinking sherry, I hope.
This is an important set of amendments about researchers’ access to data. As I have previously said to the Committee, we need to ensure that Ofcom has the opportunity to be as trusted as possible in doing its job, so that we can give it as much flexibility as we can, and so that it can deal with a rapidly changing environment. As I have also said on more than one occasion, in my mind, that trust is built by the independence of Ofcom from Secretary of State powers; the ongoing and post-legislative scrutiny of Parliament, which is not something that we can deal with in this Bill; and, finally, transparency—and this group of amendments goes to that very important issue.
The lead amendment in this group, Amendment 230 in my noble friend Lord Stevenson’s name, seeks to accelerate the process relating to Ofcom’s report on researchers’ access to information. Instead of simply requiring a report within two years of Clause 146 being brought into force, this amendment would require an interim report within three months, with a final report to follow two years later. Although it is the lead amendment in the group, I do not think it is the most significant because, in the end, it does not do much about the fundamental problem that we want to deal with in this group, which is the need to do better than just having a report. We need to ensure that there really is access by independent researchers.
Amendments 233 and 234 are, I think, of more significance. These proposed new clauses would assist independent researchers in accessing information and data from providers of regulated services. Amendment 233 would allow Ofcom itself to appoint researchers to undertake a variety of research. Amendment 234 would require Ofcom to issue a code of practice on researchers’ access to data; again, this is important so that the practical and legal difficulties for both researchers and service providers can be overcome through negotiation and consultation by Ofcom. Amendment 233A from the noble Lord, Lord Allan, which I am sure he will speak to in a moment, is helpful in clarifying that no data protection breach would be incurred by allowing the research access.
In many ways, there is not a huge amount more to say. When Melanie Dawes, the head of Ofcom, appeared before the Joint Committee on 1 November 2021—all that time ago—she said that
“tightening up the requirement to work with external researchers would be a good thing in the Bill”.
It is therefore a disappointment that, when the Bill was finally published after the Joint Committee’s consideration of the draft, there was not something more significant and more weighty than just a report. That is what we are trying to address, particularly now that we see, as an example, that Twitter is charging more than £30,000 a month for researchers’ access. That is a substantial rate for researchers to pay in order to do their work in respect of that platform. Others are restricting or obscuring some of the information that people want to be able to see.
This is a vital set of measures if this Bill is to be effective. These amendments go a long way towards where we want to get to on this; for the reasons I have set out around ensuring that there is transparency, they are vital. We know from the work of Frances Haugen that the platforms themselves are doing this research. We need that out in the open, we need Ofcom to be able to see it through independent researchers and we need others to be able to see it so that Parliament and others can continue to hold these platforms to account. Given that the Minister is in such a positive mood, I look forward to his positive response.
My Lords, I must advise the Committee that if Amendment 230 is agreed to then I cannot call Amendment 231 because of pre-emption.
My Lords, we are reaching the end of our Committee debates, but I am pleased that we have some time to explore these important questions raised by the noble Lord, Lord Knight of Weymouth.
I have an academic friend who studies the internet. When asked to produce definitive answers about how the internet is impacting on politics, he politely suggests that it may be a little too soon to say, as the community is still trying to understand the full impact of television on politics. We are rightly impatient for more immediate answers to questions around how the services regulated by this Bill affect people. For that to happen, we need research to be carried out.
A significant amount of research is already being done within the companies themselves—both more formal research, often done in partnership with academics, and more quick-fix commercial analyses where the companies do their own studies of the data. These studies sometimes see the light of day through publication or quite often through leaks; as the noble Lord, Lord Knight, has referred to, it is not uncommon for employees to decide to put research into the public domain. However, I suggest that this is a very uneven and suboptimal way for us to get to grips with the impact of these services. The public interest lies in there being a much more rigorous and independent body of research work, which, rightly, these amendments collectively seek to promote.
The key issues that we need to address head-on, if we are actively to promote more research, lie within the data protection area. That has motivated my Amendment 233A—I will explain the logic of it shortly—and is the reason why I strongly support Amendment 234.
A certain amount of research can be done without any access to personal data, bringing together aggregated statistics of what is happening on platforms, but the reality is that many of the most interesting research questions inevitably bring us into areas where data protection must be considered. For example, looking at how certain forms of content might radicalise people will involve looking at what individual users are producing and consuming and the relationships between them. There is no way of avoiding that for most of the interesting questions around the harms we are looking at. If you want to know whether exposure to content A or content B led to a harm, there is no way to do that research without looking at the individual and the specifics.
There is a broad literature on how anonymisation and pseudonymisation techniques can be used to try to make those datasets a little safer. However, even if the data can be made safe from a technical point of view, that still leaves us with significant ethical questions about carrying out research on people who would not necessarily consent to it and may well disagree with the motivation behind the sorts of questions we may ask. We may want to see how misinformation affects people and steers them in a bad direction; that is our judgment, but the judgment of the people who use those services and consume that information may well be that they are entirely happy and there is no way on earth that they would consent to be studied by us for something that they perceive to be against their interests.
Those are real ethical questions that have to be asked by any researcher looking at this area. That is what we are trying to get to in the amendments—whether we can create an environment with that balance of equity between the individual, who would normally be required to give consent to any use of their data, and the public interest. We may determine that, for example, understanding vaccine misinformation is sufficiently important that we will override that individual’s normal right to choose whether to participate in the research programme.
My Amendment 233A is to Amendment 233, which rightly says that Ofcom may be in a position to say that, for example, vaccine misinformation is in the overriding public interest and we need research into it. If it decides to do that and the platforms transfer data to those independent researchers, because we have said in the amendment that they must, the last thing we want is for the platforms to feel that, if there is any problem further down the track, there will be comeback on them. That would be against the principle of natural justice, given that they have been instructed to hand the data over, and could also act as a barrier.
My Lords, Amendments 233 and 234 from the noble Lord, Lord Knight of Weymouth, were well motivated, so I will be brief. I just have a couple of queries.
First, we need to consider what the criteria are for who is considered worthy of the privileged status of receiving Ofcom approval as a researcher. We are discussing researchers as though they are totally reliable and trustworthy. We might even think that if they are academic researchers, they are bound to be. However, there was an interesting example earlier this week of confirmation bias leading to mistakes when King’s College had to issue a correction to its survey data that was used in the BBC’s “Mariana in Conspiracyland”. King’s College admitted that it had wildly overestimated the numbers of those reading the conspiracy newspaper The Light, and wildly overestimated the numbers of those attending what it dubbed conspiracy demonstrations. By the way, BBC Verify has so far failed to verify the mistake it repeated. I give this example not as a glib point but because we cannot just say that because researchers are accredited elsewhere they should just be allowed in. I also think that the requirement to give the researchers
“all such assistance as they may reasonably require to carry out their research”
sounds like a potentially very time-consuming and expensive effort.
The noble Lord, Lord Allan of Hallam, raised points around “can’t” or “won’t”, and whether this means researchers “must” or “should”, and who decides whether it is ethical that they “should” in all instances. There are ethical questions here that have been raised. Questions of privacy are not trivial. Studying individuals as specimens of “badthink” or “wrongthink” might appear in this Committee to be in the public interest but without the consent of people it can be quite damaging. We have to decide which questions fulfil the public interest so sufficiently that consent could be overridden in that way.
I do not think this is a slam-dunk, though it looks like a sensible point. I do not doubt that all of us want more research, and good research, and data we can use in arguments, whatever side we are on, but it does not mean we should just nod something through without at least pausing.
My Lords, I declare an interest as a trustee of the International Centre for the Study of Radicalisation at the War Studies department of King’s College London. That is somewhere that conducts research using data of the kind addressed in this group, so I have a particular interest in it.
We know from the kind of debates that the noble Lord, Lord Knight, referred to that it is widely accepted that independent researchers benefit hugely from access to relevant information from service providers to research online safety matters. That is why my Amendment 234, supported by the noble Lords, Lord Clement-Jones and Lord Knight, aims to introduce an unavoidable mandatory duty for regulated platforms to give access to that data to approved researchers.
As the noble Lord, Lord Knight, said, there are three ways in which this would be done. First, the timeframe for Ofcom’s report would be accelerated; secondly, proposed new Clause 147 would allow Ofcom to appoint the researchers; and, thirdly, proposed new Clause 148 would require Ofcom to write a code of practice on data access, setting up the fundamental principles for data access—a code which, by the way, should answer some of the concerns quite reasonably voiced by the noble Baroness, Lady Fox.
The internet is absolutely the most influential environment in our society today, but it is a complete black box, and we have practically no idea what is going on in some of the most important parts of it. That has a terrible impact on our ability to devise sensible policies and mitigate harm. Instead, we have a situation where the internet companies decide who accesses data, how much of it and for what purposes.
In answer to his point, I can tell the noble Lord, Lord Allan, who they give the data to—they give it to advertisers. I do not know if anyone has bought advertising on the internet, but it is quite a chilling experience. You can find out a hell of a lot about quite small groups of people if you are prepared to pay for the privilege of trying to reach them with one of your adverts: you can find out what they are doing in their bedrooms, what their mode of transport is to get to work, how old they are, how many children they have and so on. There is almost no limit to what you can find out about people if you are an advertiser and you are prepared to pay.
In fact, only the companies themselves can see the full picture of what goes on on the internet. That puts society and government at a massive disadvantage and makes policy-making virtually impossible. Noble Lords should be in no doubt that these companies deliberately withhold valuable information to protect their commercial interests. They obfuscate and confuse policymakers, and they protect their reputations from criticism about the harms they cause by withholding data. One notable outcome of that strategy is that it has taken years for us to be here today debating the Online Safety Bill, precisely because policy-making around the internet has been so difficult and challenging.
A few years ago, we were making some progress on this issue. I used to work with the Institute for Strategic Dialogue using CrowdTangle, a Facebook product. It made a big impact. We were working on a project on extremism, and having access to CrowdTangle revolutionised our understanding of how the networks of extremists that were emerging in British politics were coming together. However, since then, platforms have gone backwards a long way and narrowed their data-sharing. The noble Lord, Lord Knight, mentioned that CrowdTangle has essentially been closed down, and Twitter has basically stopped providing its free API for researchers—it charges for some access but even that is quite heavily restricted. These retrograde steps have severely hampered our ability to gather the most basic data from otherwise respectable and generally law-abiding companies. It has left us totally blind to what is happening on the rest of the internet—the bit beyond the nice bit; the Wild West bit.
Civil society plays a critical role in identifying harmful content and bad behaviour. Organisations such as the NSPCC, the CCDH, the ISD—which I mentioned—the Antisemitism Policy Trust and King’s College London, with which I have a connection, prove that their work can make a really big difference.
It is not as though other parts of our economy or society have the same approach. In fact, in most parts of our world there is a mixture of public, regulator and expert access to what is going on. Retailers, for instance, publish what is sold in our shops. Mobile phone companies, hospitals, banks, financial markets, the broadcast media—they all give access, both to the public and to their regulators, to a huge amount of data about what is going on. Once again, internet companies are claiming exceptional treatment—that has been a theme of debates on the Online Safety Bill—as if what happens online should, for some reason, be different from what happens in the rest of the world. That attitude is damaging the interests of our country, and it needs to be reversed. Does anyone think that the FSA, the Bank of England or the MHRA would accept this state of affairs in their regulated market? They absolutely would not.
Greater access to and availability of data and information about systems and processes would hugely improve our understanding of the online environment and thereby protect the innovation, progress and prosperity of the sector. We should not have to wait for Ofcom to be able to identify new issues and then appoint experts to look at them closely; there should be a broader effort to be in touch with what is going on with the internet. It is the nature of regulation that Ofcom will heavily rely on researchers and civil society to help enforce the Online Safety Bill, but this can be achieved only if researchers have sufficient access to data.
As the noble Lord, Lord Allan, pointed out, legislators elsewhere are making progress. The EU’s Digital Services Act gives a broad range of researchers access to data, including civil society and non-profit organisations dedicated to public interest research. The DSA sets out a framework for vetting and access procedures in detail, as the noble Baroness, Lady Fox, rightly pointed out, creating an explicit role for new independent supervisory authorities and digital services co-ordinators to manage that process.
Under Clause 146, Ofcom must produce a report exploring such access within two years of that section of the Bill coming into effect. That is too long. There is no obligation on the part of the regulator or service providers to take this further. No arguments have been put forward for this extended timeframe or relative uncertainty. In contrast, the arguments to speed up the process are extremely persuasive, and I invite my noble friend the Minister to address those.
My Lords, I too want to support this group of amendments, particularly Amendment 234, and will make just a couple of brief points.
First, one of the important qualities of the online safety regime is transparency, and this really speaks to that point. It is beyond clear that we are going to need all hands on deck, and again, this speaks to that need. I passionately agree with the noble Baroness, Lady Fox, on this issue and ask, when does an independent researcher stop being independent? I have met quite a lot on my journey who suddenly find ways of contributing to the digital world other than their independent research. However, the route described here offers all the opportunities to put those balancing pieces in place.
Secondly, I am very much aware of the fear of the academics in our universities. I know that a number of them wrote to the Secretary of State last week saying that they were concerned that they would be left behind their colleagues in Europe. We do not want to put up barriers for academics in the UK. We want the UK to be at the forefront of governance of the digital world; this amendment speaks to that, and I see no reason for the Government to reject it.
Finally, I want to emphasise the importance of research. Revealing Reality did research for 5Rights called Pathways, in which it built avatars for real children and revealed the recommendation loops in action. We could see how children were being offered self-harm, suicide, extreme diets and livestream porn within moments of them arriving online. Frances Haugen has already been mentioned. She categorically proved what we have been asserting for years, namely that Instagram impacts negatively on teenage girls. As we put this regime in place, it is not adequate to rely on organisations that are willing to work in the grey areas of legality to get their research or on whistleblowers—on individual acts of courage—to make the world aware.
One of the conversations I remember happened nearly five years ago, when the then Secretary of State asked me what the most important thing about the Bill was. I said, “To bring a radical idea of transparency to the sector”. This amendment goes some way to doing just that.
My Lords, I, too, support Amendments 233 and 234, and Amendment 233A, from the noble Lord, Lord Allan. As the noble Baroness, Lady Kidron, said, it has been made clear in the past 10 days of Committee that there is a role for every part of society to play to make sure that we see the benefits of the digital world but also mitigate the potential harms. The role that researchers and academics can play in helping us understand how the digital world operates is critical—and that is going to get ever more so as we enter a world of large language models and AI. Access to data in order to understand how digital systems and processes work will become even more important—next week, not just in 10 years’ time.
My noble friend Lord Bethell quite rightly pointed out the parallels with other regulators, such as the MHRA and the Bank of England. A number of people are now comparing the way in which the MHRA and other medical regulators regulate the development of drugs with how we ought to think about the emergence of regulation for AI. This is a very good read-across: we need to set the rules of the road for researchers and ensure, as the noble Baroness, Lady Kidron, said—nailing it, as usual—that we have the most transparent system possible, enabling people to conduct their research in the light, not in the grey zone.
My Lords, as the noble Baroness, Lady Kidron, said, clearly, transparency is absolutely one of the crucial elements of the Bill. Indeed, it was another important aspect of the Joint Committee’s report. Like the noble Lord, Lord Knight—a fellow traveller on the committee—and many other noble Lords, I much prefer the reach of Amendments 233 and 234, tabled by the noble Lord, Lord Bethell, to Amendment 230, the lead amendment in this group.
We strongly support amendments that aim to introduce a duty for regulated platforms to enable access by approved independent researchers to information and data from regulated services, under certain conditions. Of course, there are arguments for speeding up the process under Clause 146, but this is really important because companies themselves currently decide who accesses data, how much of it and for what purposes. Only the companies can see the full picture, and the effect of this is that it has taken years to build a really solid case for this Online Safety Bill. Without a greater level of insight, enabling quality research and harm analysis, policy-making and regulatory innovation will not move forward.
I was very much taken by what the noble Baroness, Lady Harding, had to say about the future in terms of the speeding up of technological developments in AI, which inevitably will make the opening up of data, and research into it, of greater and greater importance. Of course, I also take extremely seriously my noble friend’s points about the need for data protection. We are very cognisant of the lessons of Cambridge Analytica, as he mentioned.
It is always worth reading the columns of the noble Lord, Lord Hague. He highlighted this issue last December, in the Times. He said:
“Social media companies should be required to make anonymised data available to third-party researchers to study the effect of their policies. Crucially, the algorithms that determine what you see—the news you are fed, the videos you are shown, the people you meet on a website—should not only be revealed to regulators but the choices made in crafting them should then be open to public scrutiny and debate”.
Those were very wise words. The status quo leaves transparency in the hands of big tech companies with a vested interest in opacity. The noble Lord, Lord Knight, mentioned Twitter announcing in February that it would cease allowing free research access to its application programming interface. It is on a whim that a billionaire owner can decide to deny access to researchers.
I much prefer Amendment 233, which would enable Ofcom to appoint an approved independent researcher. The Ofcom code of practice proposed in Amendment 234 would be issued for researchers and platforms, setting out the procedures for enabling access to data. I take the point made by the noble Baroness, Lady Fox, about who should be an independent accredited researcher, but I hope that that is exactly the kind of thing that a code of practice would deal with.
Just as a little contrast, Article 40 of the EU’s Digital Services Act gives access to data to a broad range of researchers—this has been mentioned previously—including civil society and non-profit organisations dedicated to public interest research. The DSA sets out in detail the framework for vetting and access procedures, creating an explicit role for new independent supervisory authorities. This is an example that we could easily follow.
The noble Lord, Lord Bethell, mentioned the whole question of skilled persons. Like him, I do not believe that this measure is adequate as a substitute for what is contained in Amendments 233 and 234. It will be a useful tool for Ofcom to access external expertise on a case-by-case basis but it will not provide for what might be described as a wider ecosystem of inspection and analysis.
The noble Lord also mentioned the fact that internet companies should not regard themselves as an exception. Independent scrutiny is a cornerstone of the pharmaceutical, car, oil, gas and finance industries. They are open to scrutiny from research; we should expect that for social media as well. Independent researchers are already given access in many other circumstances.
The case for these amendments has been made extremely well. I very much hope to see the Government, with the much more open approach that they are demonstrating today, accept the value of these amendments.
My Lords, the Government are supportive of improving data sharing and encouraging greater collaboration between companies and researchers, subject to the appropriate safeguards. However, the data that companies hold about users can, of course, be sensitive; as such, mandating access to data that are not publicly available would be a complex matter, as noble Lords noted in their contributions. The issue must be fully thought through to ensure that the risks have been considered appropriately. I am grateful for the consideration that the Committee has given this matter.
It is because of this complexity that we have given Ofcom the task of undertaking a report on researchers’ access to information. Ofcom will conduct an in-depth assessment of how researchers can currently access data. To the point raised by the noble Lord, Lord Knight, and my noble friend Lord Bethell, let me provide reassurance that Ofcom will assess the impact of platforms’ policies that restrict access to data in this report, including where companies charge for such access. The report will also cover the challenges that constrain access to data and how such challenges might be addressed. These insights will provide an evidence base for any guidance that Ofcom may issue to help improve data access for researchers in a safe and secure way.
Amendments 230 and 231 seek to require Ofcom to publish a report into researchers’ access to data more rapidly than within the currently proposed two years. I share noble Lords’ desire to develop the evidence base on this issue swiftly, but care must be taken to balance Ofcom’s timelines to ensure that it can deliver its key priorities in establishing the core parts of the regulatory framework that the Bill will bring in; for example, the illegal content and child safety duties. Implementing these duties must be the immediate priority for Ofcom to ensure that the Bill meets its objective of protecting people from harm. It is crucial that we do not divert attention away from these areas and that we allow Ofcom to carry out this work as soon as is practicable.
Further to this, considering the complex matter of researchers’ access to data will involve consultation with interested parties, such as the Information Commissioner’s Office, the Centre for Data Ethics and Innovation, UK Research and Innovation, representatives of regulated services and others—including some of those parties mentioned by noble Lords today—as set out in Clause 146(3). This is an extremely important issue that we need to get right. Ofcom must be given adequate time to consult as it sees necessary and undertake the appropriate research.
Before the Minister succeeds in disappointing us, can he clarify something for us? Once Ofcom has published the report, it has the power to issue guidance. What requirement is there for platforms to abide by that guidance? We want there to be some teeth at the end of all this. There is a concern that a report will be issued, followed by some guidance, but that nothing much else will happen.
It is guidance rather than direction, but it will be done openly and transparently. Users will be able to see the guidance which Ofcom has issued, to see whether companies have responded to it as they see fit and, through the rest of the framework of the Bill, be empowered to make their decisions about their experiences online. This being done openly and transparently, and informed by Ofcom’s research, will mean that everyone is better informed.
We are sympathetic to the amendment. It is complex, and this has been a useful debate—
I wonder whether the Minister has an answer to the academic community, who now see their European colleagues getting ahead through being able to access data through other legislation in other parts of the world. Also, we have a lot of faith in Ofcom, but it seems a mistake to let it be the only arbiter of what needs to be seen.
We are very aware that we are not the only jurisdiction looking at the important issues the Bill addresses. The Government and, I am sure, academic researchers will observe the implementation of the European Union’s Digital Services Act with interest, including the provisions about researchers’ access. We will carefully consider any implications of our own online safety regime. As noble Lords know, the Secretary of State will be required to undertake a review of the framework between two and five years after the Bill comes into force. We expect that to include an assessment of how the Bill’s existing transparency provisions facilitate researcher access.
I do not expect the Minister to have an answer to this today, but it will be useful to get this on the record as it is quite important. Can he let us know the Government’s thinking on the other piece of the equation? We are getting the platforms to disclose the data, and an important regulatory element is the research organisations that receive it. In the EU, that is being addressed with a code of conduct, which is a mechanism enabled by the general data protection regulation that has been approved by the European Data Protection Board and creates this legal framework. I am not aware of equivalent work having been done in the UK, but that is an essential element. We do not want to find that we have the teeth to persuade the companies to disclose the data, but not the other piece we need—probably overseen by the Information Commissioner’s Office rather than Ofcom—which is a mechanism for approving researchers to receive and then use the data.
We are watching with interest what is happening in other jurisdictions. If I can furnish the Committee with any information in the area the noble Lord mentions, I will certainly follow up in writing.
I have a question, in that case, in respect of the jurisdictions. Why should we have weaker powers for our regulator than others?
I do not think that we do. We are doing things differently. Of course, Ofcom will be looking at all these matters in its report, and I am sure that Parliament will have an ongoing interest in them. As jurisdictions around the world continue to grapple with these issues, I am sure that your Lordships’ House and Parliament more broadly will want to take note of those developments.
But surely, there is no backstop power. There is the review but there is no backstop which would come into effect on an Ofcom recommendation, is there?
We will know once Ofcom has completed its research and examination of these complex issues; we would not want to pre-judge its conclusions.
With that, if there are no further questions, I invite the noble Lord to withdraw his amendment.
My Lords, this was a short but important debate with some interesting exchanges at the end. The noble Baroness, Lady Harding, mentioned the rapidly changing environment generated by generative AI. That points to the need for independent, ecosystem-level research that is wider than we fear we might get as things stand, and certainly wider than the skilled persons’ reports we are already legislating for. The noble Lord, Lord Bethell, referred to the access that advertisers already have to insight. It seems a shame that we run the risk, as the noble Baroness, Lady Kidron, pointed out, of researchers in other jurisdictions having more privileged access than researchers in this country, and of our therefore becoming dependent on those researchers and on whistleblowers to give us that wider view. We could proceed with a report and guidance as set out in the Bill but add in some reserved powers in order to take action if the report suggests that Ofcom might need and want that. The Minister may want to reflect on that, having listened to the debate. On that basis, I am happy to beg leave to withdraw the amendment.
My Lords, I am pleased to speak to Amendments 242, 243 and 245, which have been tabled in the name of my noble friend Lord Stevenson. The intention of this group is to probe what we consider to be an interesting if somewhat niche area, and I hope the Minister will take it in that spirit.
To give the Committee some idea of the background to this group, when Ofcom was originally set up and was mainly dealing with mobile and fixed telephony cartels, it had a somewhat torrid time, if I can describe it that way. Just about every decision it took was challenged in the courts on the so-called merits of the respective cases and on its powers, as the companies taking it to court had many resources they could call upon. That very much held up Ofcom’s progress and, of course, incurred major costs.
Prior to the Digital Economy Act, the worst of the experiences of this period were over, but Ofcom managed to persuade the Government that challenges made by companies in scope of Ofcom would in future be based on judicial review, rather than on merits. In other words, the test was whether Ofcom had acted within its powers and had not acted irrationally. An area of concern to a number of companies is who can challenge the regulator, even if it is acting within its powers, if it gets it wrong in the eyes of said companies. Perhaps the Minister will reflect on that.
This group of amendments is intended to provide better protections for service providers, their users and the wider public, alongside processes that should mean fewer delays and greater efficiency. The Competition Act 1998 permits appeals of Ofcom’s decisions to be made additionally on account of an error of fact, an error of law or an error of the exercise of its discretion.
My Lords, I congratulate the noble Baroness on having elucidated this arcane set of amendments. Unfortunately, though, it makes me deeply suspicious when I see what the amendments seem to do. I am not entirely clear about whether we are returning to some kind of merits-based appeal. If so, since the main litigators are going to be the social media companies, it will operate for their benefit to reopen every single thing that they possibly can on the basis of the original evidence that was taken into account by Ofcom, as opposed to doing it on a JR basis. It makes me feel quite uncomfortable if it is for their benefit, because I suspect it is not going to be for the ordinary user who has been disadvantaged by a social media company. I hope our brand spanking new independent complaints system—which the Minister will no doubt assure us is well on the way—will deal with that, but this strikes me as going a little too far.
My Lords, I enter the fray with some trepidation. In a briefing, Carnegie, which we all love and respect, and which has been fantastic in the background in Committee days, shared some concerns. As I interpret its concerns, when Ofcom was created in 2003 its decisions could be appealed on their merits, as the noble Lord has just suggested, to the Competition Appeal Tribunal, and I believe that this was seen as a balancing measure against an untested regime. What followed was that the broad basis on which appeal was allowed led to Ofcom defending 10 appeals per year, which really frustrated its ability as a regulator to take timely decisions. It turned out that the appeals against Ofcom made up more than 80% of the workload of the Competition Appeal Tribunal, whose work was supposed to cover a whole gamut of matters. When there was a consultation in the fringes of the DEA, it was decided to restrict appeal to judicial review and appeal on process. I just want to make sure that we are not opening up a huge and unnecessary delaying tactic.
I thank all those who have spoken, and I very much appreciate the spirit in which the amendments were tabled. They propose changes to the standard of appeal, the standing to appeal and the appeals process itself. The Government are concerned that enabling a review of the full merits of cases, as proposed by Amendments 243 and 245, could prove burdensome for the courts and the regulator, since a full-merits approach, as we have been hearing, has been used by regulated services in other regulatory regimes to delay intervention, undermining the effectiveness of the enforcement process. With deep-pocketed services in scope, allowing for a full-merits review could incentivise speculative appeals, both undermining the integrity of the system and slowing the regulatory process.
While the Government are fully committed to making sure that the regulator is properly held to account, we feel that there is not a compelling case for replacing the decisions of an expert and well-resourced regulator with those of a tribunal. Ofcom will be better placed to undertake the complex analysis, including technical analysis, that informs regulatory decisions.
Amendment 245 would also limit standing and leave to appeal only to providers and those determined eligible entities to make super-complaints under Clause 150. This would significantly narrow the eligibility requirements for appeals. For appeals against Ofcom notices we assess that the broader, well-established standard in civil law of sufficient interest is more appropriate. Super-complaints fulfil a very different function from appeals. Unlike appeals, which will allow regulated services to challenge decisions of the regulator, super-complaints will allow organisations to advocate for users, including vulnerable groups and children, to ensure that systemic issues affecting UK users are brought to Ofcom’s attention. Given the entirely distinct purposes of these functions, it would be inappropriate to impose the eligibility requirements for super-complaints on the appeals system.
I am also concerned about the further proposal in Amendment 245 to allow the tribunal to replace Ofcom’s decision with its own. Currently, the Upper Tribunal is able to dismiss an appeal or quash Ofcom’s decision. Quashed decisions must be remitted to Ofcom for reconsideration, and the tribunal may give directions that it considers appropriate. Amendment 245 proposes instead allowing the Upper Tribunal to
“impose or revoke, or vary the amount of, a penalty … give such directions or take such other steps as OFCOM could itself have given or taken, or … make any other decision which OFCOM could itself have made”.
The concern is that this risks undermining Ofcom’s independence and discretion in applying its powers and issuing sanctions, and challenging the regulator’s credibility and authority. It may also further incentivise well-resourced providers to appeal opportunistically, with a view to securing a more favourable outcome at a tribunal.
On that basis, I fear that the amendments tabled by the noble Lord would compromise the fundamental features of the current appeals provisions, without any significant benefits, and risk introducing a range of inadvertent consequences. We are confident that the Upper Tribunal’s judicial review process, currently set out in the Bill, provides a proportionate, effective means of appeal that avoids unnecessary expense and delays, while ensuring that the regulator’s decisions can be thoroughly scrutinised. It is for these reasons that I hope the noble Baroness will withdraw the amendment.
My Lords, I am grateful to the Minister. I will take that as a no—but a very well-considered no, for which I thank him. I say to the noble Lord, Lord Clement-Jones, that we certainly would not wish to make him feel uncomfortable at any time. I am grateful to him and the noble Baroness, Lady Kidron, for their contributions. As I said at the outset, this amendment was intended to probe the issue, which I feel we have done. I certainly would not want to open a can of worms—online, judicial or otherwise. Nor would I wish, as the Minister suggested, to undermine the work, efficiency and effectiveness of Ofcom. I am glad to have had the opportunity to present these amendments. I am grateful for the consideration of the Committee and the Minister, and with that I beg leave to withdraw.
My Lords, even by the standards of this Bill, this is a pretty diverse group of amendments. I am leading the line with an amendment that does not necessarily fit with much of the rest of the group, except for Amendment 266, which the noble Baroness, Lady Buscombe, will be speaking to. I look forward to hearing her speak.
This amendment is designed to probe the creation of a new offence of identity theft in Clause 160. As I argued in my evidence to the consultation on digital identity and attributes in 2021, a new offence of identity theft is required. Under the Fraud Act 2006, the Identity Documents Act 2010, the Forgery and Counterfeiting Act 1981, the Computer Misuse Act 1990 and the Data Protection Act 2018 there are currently the offences of fraud using a false identity, document theft, forging an identity, unauthorised computer access and data protection offences respectively, but no specific crime of digital identity theft.
My Lords, I shall speak to Amendments 266 and 267, to which my noble and learned friend Lord Garnier, my noble friend Lord Leicester and the noble Lord, Lord Clement-Jones, have added their names. They are the final two amendments from a group of amendments that were also supported by the noble Lord, Lord Moore of Etchingham, and the noble Baroness, Lady Mallalieu.
The purpose of this Bill is to make the internet a safer place. The new offence of false communications is just one of the important means it deploys, making it an offence to harm people by telling lies online—and this is welcome. It is right that the Bill should focus on preventing harms to individuals. One of the most important guarantors that a person can have of good health and well-being is their freedom to pursue their livelihood unimpeded by illegitimate hostile action. Attacks on people’s livelihoods have the potential to wreak unimaginable harm on their mental and physical health, but these attacks are also among the easiest to perpetrate through the internet. My amendments seek to prevent such harms by protecting people who run, or work for, businesses that have been targeted with malicious fake reviews posted to online platforms, such as Google Maps or TripAdvisor. These platforms already fall within the scope of this Bill as hosts of user-generated content.
By referencing fake reviews, I am not referring to legitimate criticism, fair comment or even remarks about extraneous matters such as the owners’ pastimes or opinions, provided that the reviewer is honest about the nature of their relationship with the business. If someone wants to write a review of a business which they admit they have never patronised, and criticise it based on such factors, this would not be illegal, but it would very likely breach the platform’s terms of service and be removed. Review platforms are not the proper venue for such discussions; their role is to let people share opinions about a business’s products and services, but policing that is up to them.
The malicious fake reviews that I am referring to are those that are fundamentally dishonest. People with grudges to bear know that the platforms they use to attack their victims will remove any reviews that are clearly based on malice rather than a subjective assessment of quality. That is why they have come to adopt more insidious tactics. Without mentioning the real reason for their hostility towards a business and/or its staff, they purport to be customers who have had bad experiences. Of course, in almost every case, the reviewer has never so much as gone near the business. The review is therefore founded on lies.
This is not merely an abstract concern. Real people are being really harmed. Noble Lords will know that in earlier debates I used the prism of rural communities to amplify the objective of my amendments. Only yesterday, during Oral Questions in your Lordships’ House, there was an overwhelming collective consensus that we need to do more to protect the livelihoods of those working so hard in rural communities. My simple amendments would make a massive difference to their well-being.
The Countryside Alliance recently conducted a survey that found innumerable instances of ideologically motivated fake reviews targeted at rural businesses; these were often carried out by animal rights extremists and targeted businesses and their employees who sometimes participated in activities to which they objected, such as hosting shoots or serving meat. In April this year, the Telegraph reported on one case of a chef running a rural pub whose business was attacked with fake reviews by a vegan extremist who had verifiably never visited the pub, based initially on the man’s objection to him having posted on social media a picture of a roast chicken. The chef said these actions were making him fear for his livelihood as his business fought to recover from the pandemic. He is supporting my amendments.
Amendment 266 would therefore simply add the word “financial” to “physical” and “psychological” in the Bill’s definition of the types of harm that a message would need to cause for it to amount to an offence. This amendment is not an attempt to make the Bill into something it was not designed to be. It is merely an attempt to protect the physical and mental health of workers whose businesses are at risk of attack through malicious fake reviews. It may be that the victim of such an attack could argue that a fake review has caused them physical or psychological harm, as required under the Bill as currently drafted—indeed, it would likely do so. The reason for adding financial harm is to circumvent the need for victims to make that argument to the police, the police to the Crown Prosecution Service and then the prosecutors in front of the jury.
That links to Amendment 267, which would enlarge the definition of parties who may be harmed by a message for it to amount to an offence. Under the Bill, a message must harm its intended, or reasonably foreseeable, recipient; however, it is vital to understand that a person need not receive the message to be harmed by it. In the case of fake reviews, the victim is harmed because the false information has been seen by others; he or she is not an intended recipient. The amendment would therefore include harms to the person or organisation to which the information—or, in reality, disinformation—contained within it relates.
My principal objective in bringing these amendments is not to create a stick with which to beat those who wish harm to others through malicious fake reviews; rather—call me old-fashioned—it is about deterrence. It is to deter this conduct by making it clear that it is not acceptable and would, if necessary, be pursued by police and through the courts under criminal law. It is about seeing to it that malicious fake reviews are not written and their harm is not caused.
I am aware that the Government have responded to constituents who have contacted their MPs in support of these amendments to say that they intend to act through the Competition and Markets Authority against businesses that pay third parties to write fake disparaging reviews of their competitors. I must stress to my noble friend the Minister, with respect, that this response misunderstands the issue. While there is a problem with businesses fraudulently reviewing their competitors to gain commercial advantage—and it is welcome that the Government plan to act on it—I am concerned with extreme activists and other people with ideological or personal axes to grind. These people are not engaged in any relevant business and are not seeking to promote a competitor by comparison. It is hard to see how any action by the Competition and Markets Authority could offer an effective remedy. The CMA exists to regulate businesses, not individual cranks. Further, this is not a matter of consumer law.
If the Government wish to propose some alternative means of addressing this issue besides my amendments, I and those who have added their names—and those who are supporters beyond your Lordships’ House—would be pleased to engage with Ministers between now and Report. In that regard though, I gently urge the Government to start any conversation from a position of understanding—really understanding—what the problem is. I fully appreciate that the purpose of this Bill is to protect individuals, and that is the key point of my amendments. My focus is upon those running and working in small businesses who are easy targets of this form of bullying and abuse. It is entirely in keeping with the spirit and purpose of the Bill to protect them.
Finally, I must be clear that the prism of what happens in our rural areas translates directly to everything urban across the UK. A practical difference is that people working in remote areas are often very isolated and find this intrusion into their life and livelihood so hard to cope with. We live in a pretty unpleasant world that is diminishing our love of life—that is why this Bill is so necessary.
My Lords, I wish to add to what my noble friend Lady Buscombe has just said, but I can do so a little more briefly, not least because she has made all the points that need to be made.
I would disagree with her on only one point, which is that she said—I am not sure that she wanted to be called old-fashioned, but she certainly wanted to have it explained to us—that the purpose of our amendment was to deter people from making malicious posts to the detriment of businesses and so forth. I think it is about more than deterrence, if I may say so. It is about fairness and justice.
It is very natural for a civilised, humane person to want to protect those who cannot protect themselves because of the anonymity of the perpetrator of the act. Over the last nearly 50 years, I have practised at the media Bar, including in cases based on the tort of malicious falsehood, trade libel or slander of goods. Essentially, my noble friend and I are trying to bring into the criminal law the torts that I have advised on and appeared in cases involving, so that the seriousness of the damage caused by the people who do these anonymous things can be visited by the weight of the state as the impartial prosecutor.
My Lords, as the noble Lord, Lord Clement-Jones, said, this is a very broad group, so I hope noble Lords will forgive me if I do not comment on every amendment in it. However, I have a great deal of sympathy for the case put forward by my noble friend Lady Buscombe and my noble and learned friend Lord Garnier. The addition of the word “financial” to Clause 160 is not only merited on the case made but is a practical and feasible thing to do in a way that the current inclusion of the phrase “non-trivial psychological” is not. After all, a financial loss can be measured and we know how it stands. I will also say that I have a great deal of sympathy with what the noble Lord, Lord Clement-Jones, said about his amendment. In so far as I understand them—I appreciate that they have not yet been spoken to—I am also sympathetic to the amendments in the names of the noble Baroness, Lady Kennedy of The Shaws, and the noble Lord, Lord Allan of Hallam.
I turn to my Amendment 265, which removes the word “psychological” from this clause. We have debated this already, in relation to other amendments, so I am going to be fairly brief about it. Probably through an oversight of mine, this amendment has wandered into the wrong group. I am going to say simply that it is still a very, very good idea and I hope that my noble friend, when he comes to reflect on your Lordships’ Committee as a whole, will take that into account and respond appropriately. Instead, I am going to focus my remarks on the two notices I have given about whether Clauses 160 and 161 should stand part of the Bill; Clause 161 is merely consequential on Clause 160, so the meat is whether Clause 160 should stand part of the Bill.
I was a curious child, and when I was learning the Ten Commandments—I am sorry to see the right reverend Prelate has left because I hoped to impress him with this—I was very curious as to why they were all sins, but some of them were crimes and others were not. I could not quite work out why this was; murder is a crime but lying is not a crime—and I am not sure that at that stage I understood what adultery was. In fact, lying can be a crime, of course, if you undertake deception with intent to defraud, and if you impersonate a policeman, you are lying and committing a crime, as I understand it—there are better-qualified noble Lords than me to comment on that. However, lying in general has never been a crime, until we get to this Bill, because for the first time this Bill makes lying in general—that is, the making of statements you know to be false—a crime. Admittedly, it is a crime dependent on the mode of transmission: it has to be online. It will not be a crime if I simply tell a lie to my noble and learned friend Lord Garnier, for example, but if I do it online, any form of statement which is not true, and I know not to be true, becomes a criminal act. This is really unprecedented and has a potentially chilling effect on free speech. It certainly seems to be right that, in your Lordships’ Committee, the Government should be called to explain what they think they are doing, because this is a very portentous matter.
The Bill states that a person commits the false communications offence if they send a message that they know to be false, if they intend the message to cause a degree of harm of a non-trivial psychological or physical character, and if they have no reasonable excuse for sending the message. Free speech requires that one should be allowed to make false statements, so this needs to be justified. The wording of the offence raises substantial practical issues. How is a court meant to judge what a person knows to be false? How is a court meant to judge, uncontroversially, what a person knew to be false at the time they said it? I say again: what is non-trivial psychological harm and what constitutes an excuse? None of these things is actually defined; please do not tell me they are going to be defined by Ofcom—I would not like to hear that. This can lead to astonishing inconsistency in the courts and the misapplication of criminal penalties against people who are expressing views as they might well be entitled to do.
Then there is the question of the audience, because the likely audience is not just the person to whom the false statement is directed but could be anybody who subsequently encounters the message. How on earth is one going to have any control over how that message travels through the byways and highways of the online world and be able to say that one had some sense of who it was going to reach and what non-trivial psychological harm it might cause when it reached them?
We are talking about this as if this criminal matter is going to be dealt with by the courts. What makes this whole clause even more disturbing is that in the vast majority of cases, these offences will never reach the courts, because there is going to be, inevitably, an interaction with the illegal content duties in the Bill. By definition, these statements will be illegal content, and the platforms have obligations under the Bill to remove and take down illegal content when they become aware of it. So, the platform is going to have to make some sort of decision about not only the truth of the statement but whether the person knows what the statement is, that the statement is false and what their intention is. Under the existing definition of illegal content, they will be required to remove anything they reasonably believe is likely to be false and to prevent it spreading further, because the consequences of it, in terms of the harm it might do, are incalculable by them at that point.
We are placing a huge power of censorship—and mandating it—on to the platforms, which is one of the things that some of us in this Committee have been very keen to resist. Just exploring those few points, I think my noble friend really has to explain what he thinks this clause is doing, how it is operable and what its consequences are going to be for free speech and censorship. As it stands, it seems to me unworkable and dangerous.
Does my noble friend agree with me that our courts are constantly looking into the state of mind of individuals to see whether they are lying? They look at what they have said, what they have done and what they know. They can draw an inference based on the evidence in front of them about whether the person is dishonest. This is the daily bread and butter of court. I appreciate the points he is making but, if I may say so, he needs to dial back slightly his apoplexy. Underlying this is a case to be made in justice to protect the innocent.
I did not say that it would be impossible for a court to do this; I said it was likely to lead to high levels of inconsistency. We are dealing with what is likely to be very specialist cases. You can imagine this in the context of people feeling non-trivially psychologically harmed by statements about gender, climate, veganism, and so forth. These are the things where you see this happening. The idea that there is going to be consistency across the courts in dealing with these issues is, I think, very unlikely. It will indeed have a chilling effect on people being able to express views that may be controversial but are still valid in an open society.
My Lords, I want to reflect on the comments that the noble Lord, Lord Moylan, has just put to us. I also have two amendments in the group; they are amendments to the government amendment, and I am looking to the Minister to indicate whether it is helpful for me to explain the rationale of my amendments now or to wait until he has introduced his. I will do them collectively.
First, the point the noble Lord, Lord Moylan, raised is really important. We have reached the end of our consideration of the Bill; we have spent a lot of time on a lot of different issues, but we have not spent very much time on these new criminal offences, and there may be other Members of your Lordships’ House who were also present when we discussed the Communications Act back in 2003, when I was a Member at the other end. At that point, we approved something called Section 127, which we were told was essentially a rollover of the dirty phone call legislation we had had previously, which had been in telecoms legislation for ever to prevent that deep-breathing phone call thing.
My Lords, I have two amendments in this grouping. I am afraid that I did not have time to get others to put their names to them, but I hope that they will find some support in this Committee.
For almost the whole of 2021, I chaired an inquiry in Scotland into misogyny. It was about the fact that many complaints were being made to the devolved Government in Scotland about women’s experiences not just of online harassment but of the disinhibition that the internet and social media have given people to be abusive online now also visiting the public square. Many people described the ways in which they are publicly harassed. I know that concerns people in this House too.
When I came to the Bill, I was concerned about something that became part of the evidence we heard. It is no different down here from in Scotland. As we know, many women—I say women, but men receive harassment online too—are sent really vicious, vile things. We all know of parliamentarians and journalists who have received them, their lives made a misery by threats to rape and kill and people saying, “Go and kill yourself”. There are also threats of disfigurement—“Somebody should take that smile off your face”—and suggestions that an acid attack be carried out on someone.
In hearing that evidence, it was interesting that some of the forms of threat are not direct in the way that criminal law normally works; they are indirect. They are not saying, “I’m going to come and rape you”. Sometimes they say that, but a lot of the time they say, “Somebody should rape you”; “You should be raped”; “You deserve to be raped”; “You should be dead”; “Somebody should take you out”; “You should be disfigured”; “Somebody should take that smile off your face, and a bit of acid will do it”. They are not saying, “I’m going to come and do it”, in which case the police go round and, if the person is identifiable, make an arrest—as happened with Joanna Cherry, the Scottish MP, for example, who had a direct threat of rape, and the person was ultimately charged under the Communications Act.
Our review of the kinds of threat taking place showed that it was increasingly this indirect form of threat, which has a hugely chilling effect on women. It creates fear and life changes, because women think that some follower of this person might come and do what is suggested and throw acid at them as they are coming out of their house, and they start rearranging their lives because of it—because they live in constant anxiety about it. It was shocking to hear the extent to which this is going on.
In the course of the past year, we have all become much more familiar with Andrew Tate. What happens with these things is that, because of the nature of social media and people having big followings, you get the pile-on: an expression with which I was not that familiar in the past but now understand only too well. The pile-on is where, algorithmically, many different commentaries are brought together and suddenly the recipient receives not just one, or five, but thousands of negative and nasty threats and comments. Of course, as a public person in Parliament, or a councillor, you are expected to open up your social media, because that is how people will get in touch with you or comment on the things you are doing, but then you receive thousands of these things. This affects journalists, Members of Parliament, councillors and the leaders of campaigns. For example, it was interesting to hear that people involved in the Covid matters received threats. It affects both men and women, but the sexual nature of the threats to women is horrifying.
The Andrew Tate thing is interesting because only yesterday I saw in the newspapers that part of the charging in Romania concerns the way in which, because of his enormous following and his encouragement of violence towards women, he is being charged, among many other offences directly concerning violence towards and the rape of women, for his incitement of these behaviours in many of his young male followers. In the report of the inquiry that I conducted, there are a number of recommendations around offences of that sort.
To specifically deal with this business of online threats, my amendments seek to address their indirect nature—not the ones that say, “I’m going to do it”, but the encouragement to others to do it or to create the fear that it will happen—and to look at how the criminal law addresses that.
My Lords, I will address my remarks to government Amendment 268AZA and its consequential amendments. I rather hope that we will get some reassurance from the Minister on these amendments, about which I wrote to him just before the debate. I hope that that was helpful; it was meant to be constructive. I also had a helpful discussion with the noble Lord, Lord Allan.
As has already been said, the real question relates to the threshold and the point at which this measure will kick in. I am glad that the Government have recognised the importance of the dangers of encouraging or assisting serious self-harm. I am also grateful for the way in which they have defined it in the amendment, relating it to grievous bodily harm and severe injury. The amendment says that this also
“includes successive acts of self-harm which cumulatively reach that threshold”.
That is important; it means, rather than just one act, a series of them.
However, I have a question about subsection (10), which states that:
“A provider of an internet service by means of which a communication is sent, transmitted or published is not to be regarded as a person who sends, transmits or publishes it”.
We know from bereaved parents that algorithms have been set up which relay this ghastly, horrible and inciteful material that encourages and instructs. That is completely different from those organisations that are trying to provide support.
I am grateful to Samaritans for all its help with my Private Member’s Bill, and for the briefing that it provided in relation to this amendment. As it points out, over 5,500 people in England and Wales took their own lives in 2021 and self-harm is
“a strong risk factor for future suicide”.
Interestingly, two-thirds of those taking part in a Samaritans research project said that
“online forums and advice were helpful to them”.
It is important that there is clarity around providing support and not encouraging and goading people into activity which makes their self-harming worse and drags them down to eventually ending their own lives. Three-quarters of people who took part in that Samaritans research said that they had
“harmed themselves more severely after viewing self-harm content online”.
It is difficult to know exactly where this offence sits and whether it is sufficiently narrowly drawn.
I am grateful to the Minister for arranging for me to meet the Bill team to discuss this amendment. When I asked how it was going to work, I was somewhat concerned because, as far as I understand it, the mechanism is based on the Suicide Act, as amended, which talks about the offence of encouraging or assisting suicide. The problem as I see it is that, as far as I am aware, there has not been a string of prosecutions following the suicide of many young people. We have met their families and they have been absolutely clear about how their dead child or sibling—whether a child or a young adult—was goaded, pushed and prompted. I recently had experience of a similar situation outside, which fortunately did not result in a death.
The noble Lord, Lord Allan, has already addressed some of the issues around this, and I would not want the amendment not to be there because we must address this problem. However, if we are to have an offence here, with a threshold that the Government have tried to define, we must understand why, if assisting and encouraging suicide on the internet is already a criminal offence, nothing has happened and there have been no prosecutions.
Why is subsection (10) in there? It seems to negate the whole problem of forwarding on through dangerous algorithms content which is harmful. We know that a lot of the people who are mounting this are not in the UK, and therefore will be difficult to catch. It is the onward forwarding through algorithms that increases the volume of messaging to the vulnerable person and drives them further into the downward spiral that they find themselves in—which is perhaps why they originally went to the internet.
I look forward to hearing the Government’s response, and to hearing how this will work.
My Lords, this group relates to communications offences. I will speak in support of Amendment 265, tabled by the noble Lord, Lord Moylan, and in support of his opposition to Clause 160 standing part of the Bill. I also have concerns about Amendments 267AA and 267AB, in the name of the noble Baroness, Lady Kennedy. Having heard her explanation, perhaps she can come back and give clarification regarding some of my concerns.
On Clause 160 and the false communications offence, unlike the noble Lord, Lord Moylan, I want to focus on psychological harm and the challenge this poses for freedom of expression. I know we have debated it before but, in the context of the criminal law, it matters in a different way. It is worth us dwelling on at least some aspects of this.
The offence refers to what is described as causing
“non-trivial psychological or physical harm to a likely audience”.
As I understand it—maybe I want some clarity here—it is not necessary for the person sending the message to have intended to cause harm, yet there is a maximum sentence of 51 weeks in prison, a fine, or both. We need to have the context of a huge cultural shift when we consider the nature of the harm we are talking about.
J.S. Mill’s harm principle has now been expanded, as previously discussed, to include traumatic harm caused by words. Speakers are regularly no-platformed for ideas that we are told cause psychological harm, at universities and more broadly as part of the whole cancel culture discussion. Over the last decade, harm and safety have come no longer to refer just to physical safety; the physical and the psychological have been conflated. Historically, we understood the distinction between physical threats and violence as distinct from speech, however aggressive or incendiary that speech was; we did not say that speech was the same as or interchangeable with bullets or knives or violence—and now we do. I want us to at least pause here.
What counts as psychological harm is not a settled question. The worry is that we have an inability to ascertain objectively what psychological harm has occurred. This will inevitably lead to endless interpretation controversies and/or subjective claims-making, at least some of which could be in bad faith. There is no median with respect to how humans view or experience controversial content. There are wildly divergent sensibilities about what is psychologically harmful. The social media lawyer Graham Smith made a really good point when he said that speech is not a physical risk,
“a tripping hazard … a projecting nail … that will foreseeably cause injury … Speech is nuanced, subjectively perceived and capable of being reacted to in as many different ways as there are people.”
That is true.
We have seen an example of the potential disputes over what creates psychological harm in a case in the public realm over the past week. The former Culture Secretary, Nadine Dorries, who indeed oversaw much of this Bill in the other place, had her bullying claims against the SNP’s John Nicolson MP overturned by the standards watchdog. Her complaints had previously been upheld by the standards commissioner. John Nicolson tweeted, liked and retweeted offensive and disparaging material about Ms Dorries 168 times over 24 hours—which, as they say, is a bit OTT. He “liked” tweets describing Ms Dorries as grotesque, a “vacuous goon” and much worse. It was no doubt very unpleasant for her and certainly a personalised pile-on—the kind of thing the noble Baroness, Lady Kennedy, just talked about—and Ms Dorries would say it was psychologically harmful. But her complaint was overturned by new evidence, and the bullying claim was turned down. What was this evidence? Ms Dorries herself was a frequent and aggressive tweeter. So somebody is the recipient of something they say causes them psychological harm, and it has now been said that this does not matter because they are the kind of person who causes psychological harm to other people. My concern about turning this into a criminal offence is that the courts will be full of those kinds of arguments, which I do not think we want.
My Lords, as the noble Lord, Lord Clement-Jones, said in introducing this group some time ago, it is very diverse. I shall comment on two aspects of the amendments in this group. I entirely associate myself with the remarks of the noble Lord, Lord Allan, who really nailed the problems with Amendment 266, and I very much support the amendments in the name of the noble Baroness, Lady Kennedy of The Shaws; I would have signed them if I had caught up with them.
The noble Baroness, Lady Fox, talked about causing alarm and distress. I can draw on my own experience here, thinking about when someone randomly starts to post you pictures of crossbows. I think about what used to happen when I was a journalist in Bangkok, when various people used to get hand grenades posted into their letterbox. That was not actively dangerous—the pin was not pulled; it was still held down—but it was clearly a threat, and the same thing happens on social media.
This is something of which I have long experience. In 2005, when I was the founder of the feminist blog Carnival of Feminists, I saw the kinds of messages that the noble Baronesses have referred to, which in the days before social media used to be posted as comments on people’s blogs. You can still find the blog out there—it ran from 2005 to 2009—but many of its links to other blogs will be dead because they were often run by young women, often young women of colour, who were driven to pull down their blogs and sometimes were driven off the internet entirely by threatening, fearsome messages of the type that the noble Baroness, Lady Kennedy, referred to. We can argue about the drafting here—I will not have any opinion on that in detail—but something that addresses that issue is really important.
Secondly, we have not yet heard the Government’s introduction to Amendment 268AZA, but the noble Lord, Lord Clement-Jones, provided us with the information that it is an amendment to create the offence of encouraging or assisting self-harm. I express support for the general tenor of that, but I want to make one specific point: so far as I can see, the amendment does not have any defence or carve-out for harm-reduction messages, which may be necessary.
To set the context here, figures from the Royal College of Psychiatrists say that about one in 10 young people self-harm at some stage in their youth, and the RCP says those figures are probably an underestimate because they are based only on cases that medical professionals actually see, so the real number is probably significantly higher. An article in the Journal of Psychiatric and Mental Health Nursing from 2018 entitled “Self-cutting and harm reduction” is focused on in-patient settings, but the arguments in it are important in setting the general tone. It says that
“harm reduction in all its guises starts from the premise that the end goal”—
that is, to end self-harm entirely—
“is neither necessarily nor inevitably abstinence”,
which cannot be the solution for some people. Rather,
“the extinction of some particular form of behaviour may not be realistic for, or even desired by, the individual”.
So you may find messages that say, “If you are going to cut yourself, use a clean blade. If you do cut yourself, look after the wound afterwards”, but there is a risk that those kinds of well-intentioned, well-meaning and indeed expert messages could be caught by the amendment. I googled self-harm and harm reduction, and the websites that came up included Self Injury Support, which provides expert advice; a number of mental health trusts and healthcare trusts; and, indeed, the royal college’s own website.
The noble Lord, Lord Allan of Hallam, was trying to address this issue with Amendment 268AZC, which would allow the DPP to authorise prosecutions, but it seems to me that a better approach would be to have in the government amendment a statement saying, “We acknowledge that there will be cases where people talk about self-harm in ways that seek to minimise harm rather than simply stopping it, and they are not meant to be caught by this amendment”.
My Lords, as the noble Baroness, Lady Bennett, said, it seems a very long time since we heard the introduction from the noble Lord, Lord Clement-Jones, but it was useful in setting this helpful and well-informed debate on its way. I am sure the whole Committee is keen to hear the Minister introducing the government amendments, even at this very late stage in the debate.
I would like to make reference to a few points. I was completely captivated by the noble Lord, Lord Moylan, who invoked the 10 commandments. I say to him that one can go to no higher order, which I am sure will support the amendments that he and his colleagues have put forward.
I will refer first to the amendments tabled by my noble friend Lady Kennedy. At a minimum, they are interesting because they try to broaden the scope of the current offences. I believe they also try to anticipate the extent of the impact of the government amendments, which in my view would be improved by my noble friend’s amendments. As my noble friend said, so many of the threats that are experienced online by, and directed towards, women and girls are indirect. They are about encouraging others: saying “Somebody should do something terrible to you” is extremely common. I feel that here is an opportunity to address that in the Bill, and if we do not, we will have missed a major aspect. I hope that the Minister will take account of that and be positive. We can all be relaxed about whether the amendments need to be made, but the intent is there.
That part of the debate made a strong case to build on the debate we had on an earlier day in Committee about violence against women and girls, which was led by the noble Baroness, Lady Morgan, and supported by noble Baronesses and noble Lords from all sides of the House. We called upon the Minister then to ensure that the Bill explicitly includes the necessary amendments to make it refer to violence against women and girls because, for all the reasons that my noble friend Lady Kennedy has explained, it is considerably greater for them than for others. Without wishing to dismiss the fact that everybody receives levels of abuse, we have to be realistic here: I believe that my noble friend’s amendments are extremely helpful there.
This is a bit in anticipation of what the Minister will say—I am sure he will forgive me if he already has the answers. The noble Lords, Lord Clement-Jones and Lord Allan, referred particularly to the coalition of some 130 individuals and organisations which have expressed their concerns. I want to highlight those concerns as well, because they speak to some important points. The groups in that coalition include the largest self-harm charity, Self Injury Support, along with numerous smaller self-harm support organisations and, of course, the mental health charity Mind. Their voice is therefore considerable.
To emphasise what has already been outlined, the concern with the current amendments is that they are somewhat broad and equivalent to an offence of glamorising self-harm, which was rejected by the Law Commission in its consultation on the offence. That followed concern from the Magistrates’ Association and the Association of Police and Crime Commissioners that the offence would be ambiguous in application and complex to prosecute. It also risks criminalising people in distress, something that none of us want to see.
In addition, the broadness of the offence risks criminalising peer support and harm reduction resources, by defining them as capable of “encouraging or assisting” when they are in fact intended to help people who self-harm. This was raised by the noble Baroness, Lady Finlay, today and in respect of her Private Member’s Bill, which we debated very recently in this Chamber, and I am sure that it would not be the Minister’s intention.
I would like to emphasise another point that has been made. The offence may also criminalise content posted by people who are in distress and sharing their own experiences of self-harm—the noble Baroness, Lady Finlay, referred to this—by, for example, posting pictures of wounds. We do not want to subject vulnerable people to litigation, so let us not have an offence which ends up harming the very people it aims to protect. I shall be listening closely to the Minister.
My Lords, this has been a broad and mixed group of amendments. I will be moving the amendments in my name, which are part of it. These introduce the new offence of encouraging or assisting serious self-harm and make technical changes to the communications offences. If there can be a statement covering the group and the debate we have had, which I agree has been well informed and useful, it is that this Bill will modernise criminal law for communications online and offline. The new offences will criminalise the most damaging communications while protecting freedom of expression.
Amendments 264A, 266 and 267, tabled by the noble Lord, Lord Clement-Jones, and my noble friend Lady Buscombe, would expand the scope of the false communications offence to add identity theft and financial harm to third parties. I am very grateful to them for raising these issues, and in particular to my noble friend Lady Buscombe for raising the importance of financial harm from fake reviews. This will be addressed through the Digital Markets, Competition and Consumers Bill, which was recently introduced to Parliament. That Bill proposes new powers to address fake and misleading reviews. This will provide greater legal clarity to businesses and consumers. Where fake reviews are posted, it will allow the regulator to take action quickly. The noble Baroness is right to point out the specific scenarios about which she has concern. I hope she will look at that Bill and return to this issue in that context if she feels it does not address her points to her satisfaction.
Identity theft is dealt with by the Fraud Act 2006, which captures those using false identities for their own benefit. It also covers people selling or using stolen personal information, such as banking information and national insurance numbers. Adding identity theft to the communications offences here would duplicate existing law and expand the scope of the offences too broadly. Identity theft, as the noble Lord, Lord Clement-Jones, noted, is better covered by targeted offences rather than communications offences designed to protect victims from psychological and physical harm. The Fraud Act is more targeted and therefore more appropriate for tackling these issues. If we were to add identity theft to Clause 160, we would risk creating confusion for the courts when interpreting the law in these areas—so I hope the noble Lord will be inclined to side with clarity and simplicity.
Amendment 265, tabled by my noble friend Lord Moylan, gives me a second chance to consider his concerns about Clause 160. The Government believe that the clause is necessary and that the threshold of harm strikes the right balance, robustly protecting victims of false communications while maintaining people’s freedom of expression. Removing “psychological” harm from Clause 160 would make the offence too narrow and risk excluding communications that can have a lasting and serious effect on people’s mental well-being.
But psychological harm is only one aspect of Clause 160; all elements of the offence must be met. This includes a person sending a knowingly false message with an intention to cause non-trivial harm, and without reasonable excuse. It has also been tested extensively as part of the Law Commission’s report Modernising Communications Offences, when determining what the threshold of harm should be for this offence. It thus sets a high bar for prosecution, whereby a person cannot be prosecuted solely on the basis of a message causing psychological harm.
The noble Lord, Lord Allan, rightly recalled Section 127 of the Communications Act and the importance of probing issues such as this. I am glad he mentioned the Twitter joke trial—a good friend of mine acted as junior counsel in that case, so I remember it well. I shall spare the blushes of the noble Baroness, Lady Merron, in recalling who the Director of Public Prosecutions was at the time. But it is important that we look at these issues, and I am happy to speak further with my noble friend Lord Moylan and the noble Baroness, Lady Fox, about this and their broader concerns about freedom of expression between now and Report, if they would welcome that.
My noble friend Lord Moylan said that it would be unusual, or novel, to criminalise lying. The offence of fraud by false representation already makes it an offence dishonestly to make a false representation—to breach the ninth commandment—with the intention of making a gain or causing someone else a loss. So, as my noble and learned friend Lord Garnier pointed out, there is a precedent for lies with malicious and harmful intent being criminalised.
Amendments 267AA, 267AB and 268, tabled by my noble friend Lady Buscombe and the noble Baroness, Lady Kennedy of The Shaws, take the opposite approach to those I have just discussed, as they significantly lower and expand the threshold of harm in the false and threatening communications offences. The first of these would specify that a threatening communications offence is committed even if someone encountering the message did not fear that the sender specifically would carry out the threat. I am grateful to the noble Baroness for her correspondence on this issue, informed by her work in Scotland. The test here is not whether a message makes a direct threat but whether it conveys a threat—which can certainly cover indirect or implied threats.
I reassure the noble Baroness and other noble Lords that Clause 162 already captures threats of “death or serious harm”, including rape and disfigurement, as well as messages that convey a threat of serious harm, including rape and death threats, or threats of serious injury amounting to grievous bodily harm. If a sender has the relevant intention or recklessness, the message will meet the required threshold. But I was grateful to see my right honourable friend Edward Argar watching our debates earlier, in his capacity as Justice Minister. I mentioned the matter to him and will ensure that his officials have the opportunity to speak to officials in Scotland to look at the work being done with regard to Scots law, and to follow the points that the noble Baroness, Lady Bennett, made about pictures—
I am grateful to the Minister. I was not imagining that the formulations that I played with fulfilled all of the requirements. Of course, as a practising lawyer, I am anxious that we do not diminish standards. I thank the noble Baroness, Lady Fox, for raising concerns about freedom of speech, but this is not about telling people that they are unattractive or ugly, which is hurtful enough to many women and can have very deleterious effects on their self-confidence and willingness to be public figures. Actually, I put the bar reasonably high in describing the acts that I was talking about: threats that somebody would kill, rape, bugger or disfigure you, or do whatever to you. That was the shocking thing: the evidence showed that it was often at that high level. It is happening not just to well-known public figures, who can become somewhat inured to this because they can find a way to deal with it; it is happening to schoolgirls and young women in universities, who get these pile-ons as well. We should reckon with the fact that it is happening on a much wider basis than many people understand.
Yes, we will ensure that, in looking at this in the context of Scots law, we have the opportunity to see what is being done there and that we are satisfied that all the scenarios are covered. In relation to the noble Baroness’s Amendment 268, the intentional encouragement or assistance of a criminal offence is already captured under Sections 44 to 46 of the Serious Crime Act 2007, so I hope that that satisfies her that that element is covered—but we will certainly look at all of this.
I turn to government Amendment 268AZA, which introduces the new serious self-harm offence, and Amendments 268AZB and 268AZC, tabled by the noble Lords, Lord Allan and Lord Clement-Jones. The Government recognise that there is a gap in the law in relation to the encouragement of non-fatal self-harm. The new offence will apply to anyone carrying out an act which intends to, and is capable of, encouraging or assisting another person seriously to self-harm by means of verbal or electronic communications, publications or correspondence.
I say to the noble Baroness, Lady Finlay of Llandaff, that the new clause inserted by Amendment 268AZA makes clear that when a person sends or publishes a communication, that is an offence, and that when a person forwards on another person’s communication, that will be an offence too. The new offence will capture only the most serious behaviour and avoid criminalising vulnerable people who share their experiences of self-harm. The preparation of these clauses was informed by extensive consultation with interested groups and campaign bodies. The new offence includes two key elements that constrain the offence to the most culpable offending; namely, that a person’s act must be intended to encourage or assist the serious self-harm of another person and that serious self-harm should amount to grievous bodily harm. If a person does not intend to encourage or assist serious self-harm, as will likely be the case with recovery and supportive material, no offence will be committed. The Law Commission looked at this issue carefully, following evidence from the Samaritans and others, and the implementation will be informed by an ongoing consultation as well.
I am sorry to interrupt the Minister, but the Law Commission recommended that the DPP’s consent should be required. The case that the Minister has made on previous occasions in some of the consultations that he has had with us is that this offence that the Government have proposed is different from the Law Commission one, and that is why they have not included the DPP’s consent. I am rather baffled by that, because the Law Commission was talking about a high threshold in the first place, and the Minister is talking about a high threshold of intent. Even if he cannot do so now, it would be extremely helpful to tie that down. As the noble Baroness and my noble friend said, 130 organisations are really concerned about the impact of this.
The Law Commission recommended that the consent, but not the personal consent, of the Director of Public Prosecutions should be required. We believe, however, that, because the offence already has tight parameters due to the requirement for an intention to cause serious self-harm amounting to grievous bodily harm, as I have just outlined, an additional safeguard of obtaining the personal consent of the Director of Public Prosecutions is not necessary. We would expect the usual prosecutorial discretion and guidance to provide sufficient safeguards against inappropriate prosecutions in this area. As I say, we will continue to engage with those groups that have helped to inform the drafting of these clauses as they are implemented to make sure that that assessment is indeed borne out.
I completely accept that, yes, by requiring the regulated services to prevent access to this kind of content, we will make a significant difference, but it is still the case that there will be—we know there will be, because they exist today—these individual websites, blogs or whatever you want to call them which are not regulated user-to-user services and which are promoting self-harm content. It would be really helpful to know what the Government think should happen to a service such as that, given that it is outside the regulation; it may be persistently breaking the law but be outside our jurisdiction.
I will follow up in writing on that point.
Before I conclude, I will mention briefly the further government amendments in my name, which make technical and consequential amendments to ensure that the communications offences, including the self-harm offence, have the appropriate territorial extent. They also set out the respective penalties for the communications offences in Northern Ireland, alongside a minor adjustment to the epilepsy trolling offence, to ensure that its description is more accurate.
I hope that noble Lords will agree that the new criminal laws that we will make through this Bill are a marked improvement on the status quo. I hope that they will continue to support the government amendments. I express my gratitude to the Law Commission and to all noble Lords—
Just before the Minister sits down—I assume that he has finished his brief on the self-harm amendments; I have been waiting—I have two questions relating to what he said. First, if I heard him right, he said that the person forwarding on is also committing an offence. Does that also apply to those who set up algorithms that disseminate, as opposed to one individual forwarding on to another individual? Those are two very different scenarios. We can see how one individual forwarding to another could be quite targeted and malicious, and we can see how disseminating through an algorithm could have very widespread harms across a lot of people in a lot of different groups—all types of groups—but I am not clear from what he said that that has been caught in his wording.
Secondly—I will ask both questions while I can—I asked the Minister previously why there have been no prosecutions under the Suicide Act. I understood from officials that this amendment creating an offence was to reflect the Suicide Act and that suicide was not included in the Bill because it was already covered as an offence by the Suicide Act. Yet there have been no prosecutions and we have had deaths, so I do not quite understand why I have not had an answer to that.
I will have to write on the second point to try to set that out in further detail. On the question of algorithms, the brief answer is no, algorithms would not be covered in the way a person forwarding on a communication is covered unless the algorithm has been developed with the intention of causing serious self-harm; it is the intention that is part of the test. If somebody creates an algorithm intending people to self-harm, that could be captured, but if it is an algorithm generally passing it on without that specific intention, it may not be. I am happy to write to the noble Baroness further on this, because it is a good question but quite a technical one.
It needs to be addressed, because these very small websites already alluded to are providing some extremely nasty stuff. They are not providing support to people and helping decrease the amount of harm to those self-harming but seem to be enjoying the spectacle of it. We need to differentiate and make sure that we do not inadvertently let one group get away with disseminating very harmful material simply because it has a small website somewhere else. I hope that will be included in the Minister’s letter; I do not expect him to reply now.
Some of us are slightly disappointed that my noble friend did not respond to my point on the interaction of Clause 160 with the illegal content duty. Essentially, what appears to be creating a criminal offence could simply be a channel for hyperactive censorship on the part of the platforms to prevent the criminal offence taking place. He has not explained that interaction. He may say that there is no interaction and that we would not expect the platforms to take any action against offences under Clause 160, or that we expect a large amount of action, but nothing was said.
If my noble friend will forgive me, I had better refresh my memory of what he said—it was some time ago—and follow up in writing.
My Lords, I will be extremely brief. There is much to chew on in the Minister’s speech and this was a very useful debate. Some of us will be happier than others; the noble Baroness, Lady Buscombe, will no doubt look forward to the digital markets Bill and I will just have to keep pressing the Minister on the Data Protection and Digital Information Bill.
There is a fundamental misunderstanding about digital identity theft. It will not necessarily always be fraud that is demonstrated—the very theft of the identity is designed to be the crime, and it is not covered by the Fraud Act 2006. I am delighted that the Minister has agreed to talk further with the noble Baroness, Lady Kennedy, because that is a really important area. I am not sure that my noble friend will be that happy with the response, but he will no doubt follow up with the Minister on his amendments.
The Minister made a very clear statement on the substantive aspect of the group, the new crime of encouraging self-harm, but further clarification is still needed. We will look very carefully at what he said in relation to what the Law Commission recommended, because it is really important that we get this right. I know that the Minister will talk further with the noble Baroness, Lady Finlay, who is very well versed in this area. In the meantime, I beg leave to withdraw my amendment.
My Lords, I now have to go through a mass of amendments that are not to be the subject of debate today as they have been debated previously. I will proceed as swiftly as I can.
Land is in sight! I call Amendment 286ZA.
Amendment 286ZA
My Lords, that was a bravura performance by the noble Lord, Lord Lexden. We thank him. To those listening in the Public Gallery, I should say that we debated most of those; it was not quite as on the nod as it looked.
Amendment 286ZA, in the name of my noble friend Lord Stevenson, seeks to address a critical issue in our digital landscape: the labelling of AI-generated content on social media platforms.
As we navigate the ever-evolving world of technology, it is crucial that we uphold transparency, safeguarding the principles of honesty and accountability. Social media has become an integral part of our lives, shaping public discourse, disseminating information and influencing public opinion. However, the rise of AI-powered algorithms and tools has given rise to a new challenge: an increasing amount of content generated by artificial intelligence without explicit disclosure.
We live in an age where AI is capable of creating incredibly realistic text, images and even videos that can be virtually indistinguishable from those generated by humans. While this advancement holds immense potential, it also raises concerns regarding authenticity, trust and the ethical implications of AI-generated content. The proposed amendment seeks to address this concern by advocating for a simple but powerful solution—labelling AI-generated content as such. By clearly distinguishing human-generated content from AI-generated content, we empower individuals to make informed decisions about the information they consume, promoting transparency and reducing the potential for misinformation or manipulation.
Labelling AI-generated content serves several crucial purposes. First and foremost, it allows individuals to differentiate between information created by humans and that generated by algorithms. In an era where misinformation and deepfakes pose a significant threat to public trust, such labelling becomes a vital tool to protect and promote digital literacy.
Secondly, it enables users to better understand the potential biases and limitations of AI-generated content. AI algorithms are trained on vast datasets, and without labelling, individuals might unknowingly attribute undue credibility to AI-generated information, assuming it to be wholly objective and reliable. Labelling, however, helps users to recognise the context and provides an opportunity for critical evaluation.
Furthermore, labelling AI-generated content encourages responsible behaviour from the platforms themselves. It incentivises social media companies to develop and implement AI technologies with integrity and transparency, ensuring that users are aware of the presence and influence of AI in their online experiences.
Some may argue that labelling AI-generated content is an unnecessary burden or that it could stifle innovation. However, the intention behind this amendment is not to impede progress but to foster a healthier digital ecosystem built on trust, integrity and informed decision-making. By promoting transparency, we can strike a balance that allows innovation to flourish while safeguarding the interests of individuals and society as a whole.
In conclusion, the amendment to label AI-generated content on social media platforms represents a crucial step forward in addressing the challenges of the digital age. By embracing transparency and empowering individuals, we can foster a more informed and discerning society. Let us lead by example and advocate for a digital landscape that values accountability, integrity and the rights of individuals. I urge your Lordships to support this amendment as we strive to build a future where technology works hand-in-hand with humanity for the betterment of all.
In the spirit of the amendment, I must flag that my entire speaking note was generated by AI, as the noble Lord, Lord Allan, from his expression, had clearly guessed. In using this tool, I do so not to belittle the amendment but to illustrate that these tools are already infiltrating everyday life and can supercharge misinformation. We need to do something to help internet users trust what they read.
Does the noble Lord agree that the fact that we did not notice his speech was generated by AI somewhat damages his argument?
The fact that I labelled it as being AI-generated helped your Lordships to understand, and the transparency eases the debate. I beg to move.
My Lords, I thank the noble Lord, Lord Knight, for laying out the amendment and recognise that there was a very thoughtful debate on the subject of machine-generated content on Amendment 125 in my name on a previous day of Committee.
I appreciate that the concept of labelling or watermarking machine-generated material is central to recent EU legislation, but I am equally aware that there is more than one school of thought on the efficacy of that approach among AI experts. On the one hand, as the noble Lord, Lord Knight, beautifully set out—with the help of his artificial friend—there are those who believe that visibly marking the division of real and altered material is a clue for the public to look more carefully at what they are seeing and that labelling it might provide an opportunity for both creators and digital companies to give greater weight to “human-created material”. For example, it could be that the new BBC Verify brand is given greater validity by the public, or that Google’s search results promote it above material labelled as machine-generated as a more authentic source. There are others who feel that the scale of machine-generated material will be so vast that this labelling will be impossible or that labelling will downgrade the value of very important machine-generated material in the public imagination, when in the very near future it is likely that most human activity will be a blend of generated material and human interaction.
I spent the first part of this week locked in a room with others at the Institute for Ethics in AI in Oxford debating some of these issues. While this is a very live discussion, one thing is clear: if we are to learn from history, we must act now before all is certain, and we should act with pragmatism and a level of humility. It may be that either or both sets of experts are correct.
Industry has clearly indicated that there is an AI arms race, and many companies are launching services that they do not understand the implications of. This is not my view but one told to me by a company leader, who said that the speed of distribution was so great that the testing was confined to whether deploying large language models crashed the platforms; there was no testing for safety.
The noble Lord, Lord Stevenson, says in his explanatory statement that this is a probing amendment. I therefore ask the Minister whether we might meet before Report and look once again at the gaps that might be covered by some combination of Amendment 125 and the amendment in front of us, to make certain that the Bill adequately reflects the concerns raised by the enforcement community and reflects the advice of those who best understand the latest iterations of the digital world.
The Communications Act 2003 made a horrible mistake in not incorporating digital within it; let us not do the same here. Adding explicit safety duties to AI and machine learning would not slow down innovation but would ensure that innovation is not short-sighted and dangerous for humanity. It is a small amendment for what may turn out to be an unimaginably important purpose.
My Lords, it is a pleasure to follow the noble Baroness, Lady Kidron. I will try to keep my remarks brief.
It is extremely helpful that we have the opportunity to talk about this labelling question. I see it more as a kind of aperitif for our later discussion of AI regulation writ large. Given that it is literally aperitif hour, I shall just offer a small snifter as to why I think there may be some challenges around labelling—again, perhaps that is not a surprise to the noble Baroness.
When we make rules, as a general matter we tend to assume that people are going to read them and respond in a rationalist, conformist way. In reality, particularly in the internet space, we often see that there is a mixed environment and there will be three groups. There are the people who will look at the rules and respond in that rational way to them; a large group of people will just ignore them—they will simply be unaware and not at all focused on the rules; and another group will look for opportunities to subvert them and use them to their own advantage. I want to comment particularly on that last group by reference to cutlery and call centres, two historic examples of where rules have been subverted.
On the cutlery example, I am a Sheffielder, and “Made in Sheffield” used to mean that you had made the entire knife in Sheffield. Then we had this long period when we went from knives being made in Sheffield to bringing them to Sheffield and silver-plating them, to eventually just sharpening them and putting them in boxes. That is relevant in the context of AI. Increasingly, if there is an advantage to be gained by appearing to be human, people will look at what kind of finishing you need, so: “The content may have been generated by AI but the button to post it was pushed by a human, therefore we do not think it is AI because we looked at it and posted it”. On the speech of the noble Lord, Lord Knight, does the fact that my noble friend intervened on him and the noble Lord had to use some of his own words now mean that his speech in Hansard would not have to be labelled “AI-generated” because we have now departed from it? Therefore, there is that question of individuals who will want something to appear human-made even if it was largely AI-generated, and whether they will find the “Made in Sheffield” way of bypassing it.
Interestingly, we may see the phenomenon flipping the other way, and this is where my call centres come in. If people go to a popular search engine and type in “SpinVox”, they will see the story of a tech company that promised to transcribe voicemails into written text. This was a wonderful use of technology, and it was valued on the basis that it had developed that fantastic technology. However, it turned out—or at least there were claims, which I can repeat here under privilege—that it was using call centres in low-cost, low-wage environments to type those messages out. Therefore, again, we may see, curiously, some people seeing an advantage to presenting content as AI-generated when it is actually made by humans. That is just to flag that up—as I say, it is a much bigger debate that we are going to have. It is really important that we are having it, and labelling has a role to play. However, as we think about it, I urge that we remember those communities of people who will look at whatever rules we come up with and say, “Aha! Where can I get advantage?”, either by claiming that something is human when it is generated by AI or claiming that it is generated by AI if it suits them when it was actually produced by humans.
My Lords, it is a pleasure to follow the noble Lord, Lord Allan. He reminded me of significant reports of the huge amount of exploitation in the digital sector that has come from identification of photos. A great deal of that is human labour, even though it is often claimed to have been done through machine intelligence.
In speaking to this late but important amendment, I thank the noble Lords, Lord Stevenson and Lord Knight, for giving us the chance to do so, because, as every speaker has said, this is really important. I should declare my position as a former newspaper editor. I distinctly recall teasing a sports journalist in the early 1990s when it was reported that journalists were going to be replaced by computer technology. I said that the sports journalists would be the first to go because they just wrote to a formula anyway. I apologise to sports journalists everywhere.
The serious point behind that is that a lot of extreme claims are now being made about so-called artificial intelligence. I declare myself an artificial-intelligence sceptic. What we have now—so-called generative AI—is essentially big data. To quote the science fiction writer Ted Chiang, what we have is applied statistics. Generative AI relies on looking at what already exists, and it cannot produce anything original. In many respects, it is a giant plagiarism machine. There are huge issues, beyond the scope of the Bill, around intellectual property and the fact that it is not generating anything original.
None the less, it is generating what people in the sector like to describe as hallucinations, which might otherwise be described as errors, falsehoods or lies. This is where quotes are made up; ideas are presented which, at first glance, look as though they make sense but fall apart under examination; and data is actively invented. There is one rather famous case where a lawyer got himself into a great deal of trouble by producing a whole lot of entirely false cases that a bot generated for him. We need to be really careful, and this amendment shows us a way forward in attempting to deal with some of the issues we are facing.
To pick up the points made by the noble Lord, Lord Allan, about the real-world impacts, I was at an event in Parliament this week entitled “The Worker Experience of the AI Revolution”, run by the TUC and Connected by Data. It highlighted what has happened with a lot of the big data exercises already in operation: rather than humans being replaced by robots, people are being forced to act like robots. We heard from Royal Mail and Amazon workers, who are monitored closely and expected to act like machines. That is just one example of the unexpected outcomes of the technologies we have been exercising in recent years.
I will make two final comments. First, I refer to 19th-century Luddite John Booth, who was tortured to death by the state. He was a Luddite, but he was also on the record as saying that new machinery
“might be man’s chief blessing instead of his curse if society were differently constituted”.
History is not pre-written; it is made by the choices, laws and decisions we make in this Parliament. Given where we are at the moment with so-called AI, I urge that caution really is warranted. We should think about putting some caution in the Bill, which is what this amendment points us towards.
My final point relates to an amendment I was not allowed to table because, I was told, it was out of scope. It asked the Secretary of State to report on the climate emissions coming from the digital sector, specifically from artificial intelligence. The noble Baroness, Lady Kidron, said that it will operate on a vast scale. I point out that, already, the digital sector is responsible for 3% of the world’s electricity use and 2% of the world’s carbon emissions, which is about the same as the airline sector. We really need to think about caution. I very much agree with everyone who said that we need to have more discussions on all these issues before Report.
My Lords, this is a real hit-and-run operation from the noble Lord, Lord Stevenson. He has put down an amendment on my favourite subject in the last knockings of the Bill. It is totally impossible to deal with this now—I have been thinking and talking about the whole area of AI governance and ethics for the past seven years—so I am not going to try. It is important, and the advisory committee under Clause 139 should take it into account. Actually, this is much more a question of authenticity and verification than of content. Trying to work out whether something is ChatGPT or GPT-4 content is a hopeless task; you are much more likely to be able to identify whether these are automated users such as chatbots than you are to know about the content itself.
I will leave it there. I missed the future-proofing debate, which I would have loved to have been part of. I look forward to further debates with the noble Viscount, Lord Camrose, on the deficiencies in the White Paper and to the Prime Minister’s much more muscular approach to AI regulation in future.
I am sure that the noble Lord, Lord Stevenson of Balmacara, is smiling over a sherry somewhere about the debate he has facilitated. His is a useful probing amendment and we have had a useful discussion.
The Government certainly recognise the potential challenges posed by artificial intelligence and digitally manipulated content such as deepfakes. As we have heard in previous debates, the Bill ensures that machine-generated content on user-to-user services created by automated tools or machine bots will be regulated where appropriate. Clause 49(4)(b) means that machine-generated content is regulated unless the bot or automated tool producing the content is controlled by the provider of the service.
The labelling of this content via draft legislation is not something to which I can commit today. The Government’s AI regulation White Paper sets out the principles for the responsible development of artificial intelligence in the UK. These principles, such as safety, transparency and accountability, are at the heart of our approach to ensuring the responsible development and use of AI. As set out in the White Paper, we are building an agile approach that is designed to be adaptable in response to emerging developments. We do not wish to introduce a rigid, inflexible form of legislation for what is a flexible and fast-moving technology.
The public consultation on these proposals closed yesterday so I cannot pre-empt our response to it. The Government’s response will provide an update. I am joined on the Front Bench by the Minister for Artificial Intelligence and Intellectual Property, who is happy to meet with the noble Baroness, Lady Kidron, and others before the next stage of the Bill if they wish.
Beyond labelling such content, I can say a bit to make it clear how the Bill will address the risks coming from machine-generated content. The Bill already deals with many of the most serious and illegal forms of manipulated media, including deepfakes, when they fall within scope of services’ safety duties regarding illegal content or content that is potentially harmful to children. Ofcom will recommend measures in its code of practice to tackle such content, which could include labelling where appropriate. In addition, the intimate image abuse amendments that the Government will bring forward will make it a criminal offence to send deepfake images.
In addition to ensuring that companies take action to keep users safe online, we are taking steps to empower users with the skills they need to make safer choices through our work on media literacy. Ofcom, for example, has an ambitious programme of work through which it is funding several initiatives to build people’s resilience to harm online, including initiatives designed to equip people with the skills to identify disinformation. We are keen to continue our discussions with noble Lords on media literacy and will keep an open mind on how it might be a tool for raising awareness of the threats of disinformation and inauthentic content.
With gratitude to the noble Lords, Lord Stevenson and Lord Knight, and everyone else, I hope that the noble Lord, Lord Knight, will be content to withdraw his noble friend’s amendment.
My Lords, I am grateful to everyone for that interesting and quick debate. It is occasionally one’s lot that somebody else tables an amendment but is unavoidably detained in Jerez, drinking sherry, and monitoring things in Hansard while I move the amendment. I am perhaps more persuaded than my noble friend might have been by the arguments that have been made.
We will return to this in other fora in response to the need to regulate AI. In the meantime, I enjoyed in particular the John Booth quote from the noble Baroness, Lady Bennett. In respect of the potential harms around generative AI, if we have a Minister who is mindful of the need for safety by design once we have concluded our deliberations, then we will have dealt with the parts that we needed to deal with as far as this Bill is concerned.
Can the noble Lord confirm whether he generated those comments himself, or was he on his phone while we were speaking?
I do not have an invisible earpiece feeding me my lines—that was all human-generated. I beg leave to withdraw the amendment.
My Lords, we already had a long debate on this subject earlier in Committee. In the interim, many noble Lords associated with these amendments have had conversations with the Government, which I hope will bear some fruit before Report. Today, I want to reiterate a few points that I hope are clarifying to the Committee and the department. In the interests of everyone’s evening plans, the noble Lord, Lord Bethell, and the noble Baroness, Lady Harding, wish to associate themselves with these remarks so that they represent us in our entirety.
For many years, we thought age verification was a gold standard, primarily because it involved a specific government-issued piece of information such as a passport. By the same token, we thought age estimation was a lesser beast, given that it is an estimate by its very nature and that the sector primarily relied on self-declarations with very few checks and balances. In recent years, many approaches to age checking have flourished. Some companies provide age assurance tokens based on facial recognition; others use multiple signals of behaviour, friendship group, parental controls and how you move your body in gameplay; and, only yesterday, I saw the very impressive on-device privacy-preserving age-verification system that Apple rolled out in the US two weeks ago. All of these approaches, used individually and cumulatively, have a place in the age-checking ecosystem, and all will become more seamless over time. But we must ensure that, when they are used, they are adequate for the task they are performing and are quality controlled so that they do not share information about a child, are secure and are effective.
That is why, at the heart of the package of measures put forward in my name and that of the noble Lords, Lord Stevenson and Lord Bethell, and the right reverend Prelate the Bishop of Oxford, are two concepts. First, the method of measuring age should be tech neutral so that all roads can be used. Secondly, there must be a robust mechanism for measuring effectiveness so that only effective systems can be used in high-risk situations, particularly those of primary priority harms such as self-harm and pornography, and that such a measurement will be determined by Ofcom, not industry.
From my work over the last decade and from recent discussion with industry, I am certain that any regime of age assurance must be measurable and hold to certain principles. We cannot create a situation where children’s data is loosely held and liberally shared; we cannot have a system that discriminates against, or does not have automatic appeal mechanisms for, children of colour or those aged 17 or 19, for whom errors are most likely. Systems should aim to be interoperable and private, and should not leave traces as children go from one service to another.
Each of the principles of our age-verification package set out in the schedule is of crucial importance. I hope that the Government will see the sense in that because, without them, this age checking will not be trusted. Equally, I urge the Committee to embrace the duality of age verification and estimation that the Government have put forward, because, if a child uses an older sibling’s form of verification and a company understands through the child’s behaviour that they are indeed a child, then we do not want to set up a perverse situation in which the verification is considered of a higher order and the company cannot take action based on estimation; ditto, if estimation in gameplay is more accurate than tokens that verify whether someone is over or under 18, it may well be that estimation gives greater assurance that the company will treat the child according to their age.
I hope and believe that, in his response, the Minister will confirm that definitions of age assurance and age estimation will be on the face of the Bill. I also urge him to make a generous promise to accept the full gamut of our concerns about age checking and bring forward amendments in his name on Report that reflect them in full. I beg to move.
My Lords, I associate these Benches with the introduction by the noble Baroness, Lady Kidron, support her amendments and, likewise, hope that they form part of the package that is trundling on its way towards us.
My Lords, what more can I say than that I wish to be associated with the comments made by the noble Baroness and then by the noble Lord, Lord Clement-Jones? I look forward to the Minister’s reply.
I am very grateful to the noble Baroness for her amendment, which is a useful opportunity for us to state publicly and share with the Committee the progress we have been making in our helpful discussions on these issues in relation to these amendments. I am very grateful to her and to my noble friends Lord Bethell and Lady Harding for speaking as one on this, including, as is well illustrated, in this short debate this evening.
As the noble Baroness knows, discussions continue on the precise wording of these definitions. I share her optimism that we will be able to reach agreement on a suitable way forward, and I look forward to working with her, my noble friends and others as we do so.
The Bill already includes a definition of age assurance in Clause 207, which is
“measures designed to estimate or verify the age or age-range of users of a service”.
As we look at these issues, we want to avoid using words such as “checking”, which suggests that providers need to take a proactive approach to checking age, as that may inadvertently preclude the use of technologies which determine age through other means, such as profiling. It is also important that any definition of age assurance does not restrict the current and future use of innovative and accurate technologies. I agree that it is important that there should be robust definitions for terms which are not currently defined in the Bill, such as age verification, and recommit to the discussions we continue to have on what terms need to be defined and the best way to define them.
This has been a very helpful short debate with which to end our deliberations in Committee. I am very grateful to noble Lords for all the points that have been raised over the past 10 days, and I am very glad to be ending in this collaborative spirit. There is much for us still to do, and even more for the Office of the Parliamentary Counsel to do, before we return on Report, and I am grateful to it and to the officials working on the Bill. I urge the noble Baroness to withdraw her amendment.