Data Protection and Digital Information Bill Debate
Lord Leong (Labour - Life peer): debate with the Department for Work and Pensions
Grand Committee

My Lords, in moving this amendment, I will also speak to the other amendments in this group in the name of my noble friend Lady Jones of Whitchurch: Amendments 209 to 211 and 215.
It is estimated that a staggering 134 million personal injury compensation calls and texts have been made and sent in the UK in the past 12 months. YouGov research shows that more than 20 million people were contacted by companies touting for business through injury compensation claims. Personally, I have had more than my fair share, so I suppose I must declare an interest in this issue.
However, unsolicited calls are more than just a modern-day nuisance. If people have suffered an accident, they can be reminded of the trauma. People’s hopes of compensation can be raised cynically and unrealistically in order to encourage them to share personal financial information that can then be used to scam them out of their money. Research shows strong emotional responses to these calls. People are left feeling angry, anxious, disgusted and upset. That is hardly a surprise when they are being pestered in their own homes or on their own phones.
My Lords, I thank all noble Lords who have spoken, especially the noble Lords, Lord Kirkhope and Lord Clement-Jones, who have kindly supported this amendment.
I shall just make two points. The first is that “unlawful” is simply not good enough: people are still carrying on making these cold calls. Sometimes we have to listen to the experts. The Law Society says that its members are banned from making cold calls, and the Association of Personal Injury Lawyers is asking for a ban. As politicians, we need to listen to people who perhaps know more than we do; if they are asking for this, it is because they need the law clarified. I hope that the Minister will look at this again.
As for Amendments 211 and 215, perhaps the Minister could share with me the detail of the points just made about the sharing of data with other stakeholders. If he could write to us or share that detail with us, that would satisfy our position.
On that basis, I beg leave to withdraw the amendment.
Data Protection and Digital Information Bill Debate
Lord Leong (Labour - Life peer): debate with the Department for Science, Innovation & Technology
Grand Committee

My Lords, having been involved in and seen the campaigning of the bereaved families and the noble Baroness, Lady Kidron, in particular from the Joint Committee on the Draft Online Safety Bill onwards, I associate myself entirely with the noble Baroness’s statement and with my noble friend Lord Allan’s remarks.
My Lords, I thank the Minister for setting out the amendment and all noble Lords who spoke. I am sure the Minister will be pleased to hear that we support his Amendment 236 and his Amendment 237, to which the noble Baroness, Lady Kidron, has added her name.
Amendment 236 is a technical amendment. It seeks the straightforward deletion of words from a clause, accounting for the fact that investigations by a coroner, or procurator fiscal in Scotland, must start upon them being notified of the death of a child. The words
“or are due to conduct an investigation”
are indeed superfluous.
We also support Amendment 237. The deletion of this part of the clause would bring into effect a material change: it would empower Ofcom to issue a notice to an internet service provider to retain information in all cases of a child’s death, not just in cases of suspected suicide. Sadly, as many of us have discovered in the course of our work on this Bill, there is an increasing number of ways in which online communication can be directly or indirectly linked to a child’s death. These include exposure to material that is appropriate only for adults; the inability to filter harmful information, which may adversely affect mental health and decision-making; and, of course, the deliberate targeting of children by adults and, in some cases, by other children.
There are adults who use the internet with the intention of doing harm to children through coercion, grooming or abuse. What initially starts online can lead to contact in person. Often, this will lead to a criminal investigation, but, even if it does not, the changes proposed by this amendment could help prevent additional tragic deaths of children, not just those caused by suspected child suicides. If the investigating authorities have access to online communications that may have been a contributing factor in a child’s death, additional areas of concern can be identified by organisations and individuals with responsibility for children’s welfare and action taken to save many other young lives.
Before I sit down, I want to take this opportunity to say a big thank you to the noble Baroness, Lady Kidron, the noble Lord, Lord Kennedy, and all those who have campaigned on this issue relentlessly and brought it to our attention.
Let me begin by reiterating my thanks to the noble Baroness, Peers, families and coroners for their help in developing these measures. My momentary pleasure in being supported on these amendments is, of course, tempered by the desperate sadness of the situations that they are designed to address.
I acknowledge the powerful advocacy that has taken place on this issue. I am glad that we have been able to address the concerns with the amendment to the Online Safety Act, which takes a zero-tolerance approach to protecting children by making sure that the buck stops with social media platforms for the content they host. I sincerely hope that this demonstrates our commitment to ensuring that coroners can fully access the online data needed to provide answers for grieving families.
On the point raised by the noble Baroness, Lady Kidron, guidance from the Chief Coroner is likely to be necessary to ensure both that this provision works effectively and that coroners feel supported in their decisions on whether to trigger the data preservation process. Decisions on how and when to issue guidance are a matter for the Chief Coroner, of course, but we understand that he is very likely to issue guidance to coroners on this matter. His office is working with my department and Ofcom to ensure that our processes are aligned. The Government will also work with the regulators and interested parties to see whether any guidance is required to support parents in understanding the data preservation process. Needless to say, I would be more than happy to arrange a meeting with the noble Baroness to discuss the development of the guidance; other Members may wish to join that as well.
Once again, I thank noble Lords for their support on this matter.
My Lords, I support this probing amendment, Amendment 251. I thank all noble Lords who have spoken. From this side of the Committee, I say how grateful we are to the noble Lord, Lord Arbuthnot, for all that he has done and continues to do in his campaign to find justice for those sub-postmasters who have been wronged by the system.
This amendment seeks to reinstate the substantive provisions of Section 69 of PACE, the Police and Criminal Evidence Act 1984, thereby revoking the dangerous presumption that computer evidence is reliable. I would like to imagine that legislators in 1984 were perhaps alert to the warning in George Orwell’s novel Nineteen Eighty-Four, written some 35 years earlier, about relying on an apparently infallible but ultimately corruptible technological system to define the truth. The Horizon scandal is, of course, the most glaring example of the dangers of assuming that computers are always right. Sadly, as hundreds of sub-postmasters have known for years, and as the wider public have more recently become aware, computer systems can be horribly inaccurate.
However, the Horizon system is very primitive compared to some of the programs which now process billions of pieces of our sensitive data every day. The AI revolution, which has already begun, will dramatically accelerate the risk of errors being compounded and multiplied. To take just one example, some noble Lords may be aware of the concept of AI hallucinations: the term describes computer models making inaccurate predictions based on seeing patterns in data that are not really there, which may be caused by incomplete, biased or simply poor-quality inputs. In an earlier debate, the noble Viscount, Lord Younger of Leckie, said that decisions would be made on account information notices. How will these decisions be made? Will they be made by individual human beings or by some AI-configured algorithm? Can the Minister share with us how such decisions will be taken?
Humans can look at clouds in the sky or outlines on the hillside and see patterns that look like faces, animals or symbols, but ultimately we know that we are looking at water vapour or rock formations. Computer systems do not necessarily have this innate common sense—this reality check. Increasingly, we will depend on computer systems talking to each other without any human intervention. This will deliver some great efficiencies, but it could lead to greater injustices on a scale which would terrify even the most dystopian science fiction writers. The noble Baroness, Lady Kidron, has already shared with us some of the cases where a computer has made errors and people have been wronged.
Amendment 251 would reintroduce the opportunity for some healthy human scepticism by enabling the investigation of whether there are reasonable grounds for questioning information in documents produced by a computer. The digital world of 2024 depends far more on computers than did the world of 1984, whether in actual legislation or in Orwellian fiction. Amendment 251 enables ordinary people to question whether our modern “Big Brother”, artificial intelligence, is telling the truth when he, or it, is watching us. I look forward to the Minister’s responses to the various questions raised, and to his position on the current assumption in law that information provided by a computer is always accurate.
My Lords, I recognise the feeling of the Committee on this issue and, frankly, I recognise the feeling of the whole country with respect to Horizon. I thank all those who have spoken for a really enlightening debate. I thank the noble Baroness, Lady Kidron, for tabling the amendment and my noble friend Lord Arbuthnot for speaking to it and—if I may depart from the script—his heroic behaviour with respect to the sub-postmasters.
There can be no doubt that hundreds of innocent sub-postmasters and sub-postmistresses have suffered an intolerable miscarriage of justice at the hands of the Post Office. I hope noble Lords will indulge me if I speak very briefly on that. On 13 March, the Government introduced the Post Office (Horizon System) Offences Bill into Parliament, which is due to go before a Committee of the whole House in the House of Commons on 29 April. The Bill will quash relevant convictions of individuals who worked, including on a voluntary basis, in Post Office branches and who have suffered as a result of the Post Office Horizon IT scandal. It will quash, on a blanket basis, convictions for various theft, fraud and related offences during the period of the Horizon scandal in England, Wales and Northern Ireland. This is to be followed by swift financial redress delivered by the Department for Business and Trade.
On the amendment laid by the noble Baroness, Lady Kidron—I thank her and the noble Lords who have supported it—I fully understand the intent behind this amendment, which aims to address issues with computer evidence such as those arising from the Post Office cases. The common law presumption, as has been said, is that the computer which has produced evidence in a case was operating effectively at the material time unless there is evidence to the contrary, in which case the party relying on the computer evidence will need to satisfy the court that the evidence is reliable and therefore admissible.
This amendment would require a party relying on computer evidence to provide proof up front that the computer was operating effectively at the time and that there is no evidence of improper use. I and my fellow Ministers, including those at the MoJ, understand the intent behind this amendment, and we are considering very carefully the issues raised by the Post Office cases in relation to computer evidence, including these wider concerns. So I would welcome the opportunity for further meetings with the noble Baroness, alongside MoJ colleagues. I was pleased to hear that she had met with my right honourable friend the Lord Chancellor on this matter.
We are considering, for example, the way the reliability of evidence from the Horizon system was presented, how failures of investigation and disclosure prevented that evidence from being effectively challenged, and the lack of corroborating evidence in many cases. These issues need to be considered carefully, with the full facts in front of us. Sir Wyn Williams is examining in detail the failings that led to the Post Office scandal. These issues are not straightforward. The prosecution of those cases relied on assertions that the Horizon system was accurate and reliable, which the Post Office knew to be wrong. This was supported by expert evidence, which it knew to be misleading. The issue was that the Post Office chose to withhold the fact that the computer evidence itself was wrong.
This amendment would also have a significant impact on the criminal justice system. Almost all criminal cases rely on computer evidence to some extent, so any change to the burden of proof would or could impede the work of the Crown Prosecution Service and other prosecutors.
Although I am not able to accept this amendment for these reasons, I share the desire to find an appropriate way forward along with my colleagues at the Ministry of Justice, who will bear the brunt of this work, as the noble Lord, Lord Clement-Jones, alluded to. I look forward to meeting the noble Baroness to discuss this ahead of Report. Meanwhile, I hope she will withdraw her amendment.
My Lords, I will speak to all the amendments in this group, other than Amendment 295 from the noble Baroness, Lady Jones. Without stealing her thunder, I very much support it, especially in an election year and in the light of the deepfakes we have already seen in the political arena—those of Sadiq Khan, those used in the Slovakian election and the audio deepfakes of the President of the US and Sir Keir Starmer. This is a real issue and I am delighted that she has put down this amendment, which I have signed.
In another part of the forest, the recent spread of deepfake photos purporting to show Taylor Swift engaged in explicit acts has brought new attention to the use, which has been growing in recent years, of deepfake images, video and audio to harass women and commit fraud. Women constitute 99% of the victims and the most visited deepfake site had 111 million users in October 2023. More recently, children have been found using “declothing” apps, which I think the noble Baroness mentioned, to create explicit deepfakes of other children.
Deepfakes also present a growing threat to elections and democracy, as I have mentioned, and the problems are increasingly rampant. Deepfake fraud rates rose by 3,000% globally in 2023, and it is hardly surprising that, in recent polling, 86% of the UK population supported a ban on deepfakes. I believe that the public are demanding an urgent solution to this problem. The only effective way to stop deepfakes, which is analogous to what the noble Baroness, Lady Kidron, has been so passionately advocating, is for the Government to ban them at every stage, from production to distribution. Legal liability must hold to account those who produce deepfake technology, create and enable deepfake content, and facilitate its spread.
Existing legislation seeks to limit the spread of images on social media, but this is not enough. The recent images of Taylor Swift were removed from X and Telegram, but not before one picture had been viewed more than 47 million times. Digital watermarks are not a solution, as shown by a paper by world-leading AI researchers released in 2023, which concluded that
“strong and robust watermarking is impossible to achieve”.
Without measures across the supply chain to prevent the creation of deepfakes, the law will forever be playing catch-up.
The Government now intend to ban the creation of sexual imagery deepfakes; I welcome this and have their announcement in my hand:
“Government cracks down on ‘deepfakes’ creation”.
This will send a clear message that the creation of these intimate images is not acceptable. However, this appears to cover only sexual image deepfakes. These are the most prevalent form of deepfake, but other forms are causing noticeable and rapidly growing harms, most obviously political deepfakes, as the noble Baroness, Lady Jones, will illustrate, and deepfakes used for fraud. The announcement also appears to cover only the endpoint of the creation of deepfakes, not the supply chain leading up to that point. There are whole apps and companies dedicated to the creation of deepfakes, and they should not exist. There are industries which provide legitimate services, generative AI and cloud computing among them, which fail to take adequate measures and so end up enabling the creation of deepfakes. They should take such measures or face legal accountability.
The Government’s new measures are intended to be introduced through an amendment to the Criminal Justice Bill, which is, I believe, currently between Committee and Report in the House of Commons. As I understand it, however, there is no date scheduled yet for Report, as the Bill seems to be caught in a battle over amendments.
The law will, however, be extremely difficult to enforce. Perpetrators are able to hide behind anonymity and are often difficult to identify, even when victims or authorities are aware that deepfakes have been created. The only reliable and effective countermeasure is to hold the whole supply chain responsible for deepfake creation and proliferation. All parties involved in the AI supply chain, from AI model developers and providers to cloud compute providers, must demonstrate that they have taken steps to preclude the creation of deepfakes. This approach is similar to how society combats—or, rather, analogous to the way that I hope the Minister will concede to the noble Baroness, Lady Kidron, society will combat—child abuse material and malware.
My Lords, I speak to Amendments 293 and 294 from the noble Lord, Lord Clement-Jones, Amendment 295 proposed by my noble friend Lady Jones and Amendments 295A to 295F, also in the name of the noble Lord, Lord Clement-Jones.
Those noble Lords who are avid followers of my social media feeds will know that I am an advocate of technology. Advanced computing power and artificial intelligence offer enormous opportunities, and they are not in themselves bad. However, the intentions of those who use them can be malign or criminal, and the speed of technological development is outpacing legislators around the world. We are constantly in danger of creating laws that close the stable door long after the virtual horse has bolted.
The remarkable progress of visual and audio technology has its roots in the entertainment industry. It has been used to complete or reshoot scenes in films when actors were unavailable or, in some cases, when actors died before filming was completed. It has also enabled filmmakers to introduce characters, or younger versions of iconic heroes, for sequels or prequels in movie franchises. This has enabled us to see a resurrected Sir Alec Guinness, a younger version of Luke Skywalker and a de-aged Indiana Jones on our screens.
The technology that can do this is only around 15 years old and, until about five years ago, it required extremely powerful computers, expensive resources and advanced technical expertise. The first malicious use of deepfakes occurred when famous actors and celebrities, usually women, had their faces superimposed on to the bodies of participants in pornographic videos. These were then marketed online as Hollywood stars’ sex tapes or similar, making money for the producers while causing enormous distress to the women targeted. More powerful processors inevitably mean that what was once very expensive becomes much cheaper very quickly. An additional factor has turbo-boosted this issue: generative AI. Computers can now learn to create images, sound and video movement almost independently of software specialists. It is no longer just famous women who are the targets of sexually explicit deepfakes; it could be anyone.
Amendment 293 directly addresses this horrendous practice, and I hope that there will be widespread support for it. In an increasingly digital world, we spend more time in front of our screens, getting information and entertainment on our phones, laptops, iPads and smart TVs. What was once an expensive technology used for titillation, entertainment or comedy has developed an altogether darker presence, well beyond the reach of most legislation.
In addition to explicit sexual images, deepfakes are known to have been used to embarrass individuals, misrepresent public figures, enable fraud, manipulate public opinion and influence democratic elections and referendums. This damages people individually: those whose images or voices are faked, and those who are taken in by the deepfakes. Trusted public figures, celebrities and spokespeople face reputational and financial damage when their voices or images are used to endorse fake products or to harvest data. Those who are encouraged to click through risk losing money to fraudsters, being targeted for scams, or having their personal and financial data leaked or sold on. There is growing evidence that information obtained under false pretences can be used for profiling in co-ordinated misinformation campaigns, for darker financial purposes or for political exploitation.
In passing, it is worth remembering that deepfakes are not always images of people. Last year, a crudely generated fake image of an explosion, purportedly at the Pentagon, caused the Dow Jones Industrial Average to drop 85 points within four minutes of being published and triggered emergency response procedures from local law enforcement before it was debunked 20 minutes later. The power of a single image, carefully placed and spreading virally, shows the enormous and rapid economic damage that deepfakes can cause.
Amendment 294 would make it an offence for a person to generate a deepfake for the purpose of committing fraud, and Amendment 295 would make it an offence to create deepfakes of political figures, particularly when they risk undermining electoral integrity. We support all the additional provisions in this group of amendments; Amendments 295A to 295F outline the requirements, duties and definitions necessary to ensure that those creating deepfakes can be prosecuted.
I bring to your Lordships’ attention the wording of Amendment 295, which, as well as making it an offence to create a deepfake, goes a little further. It also makes it an offence to send a communication which has been created by artificial intelligence and which is intended to create the impression that a political figure has said or done something that is not based in fact. This touches on what I believe to be a much more alarming aspect of deepfakes: the manner in which false information is distributed.