Monday 22nd January 2024

Commons Chamber
Motion made, and Question proposed, That this House do now adjourn.—(Scott Mann.)
21:17
Dean Russell (Watford) (Con)

I am grateful for the opportunity to raise this important topic of protecting consumers from artificial intelligence scams, or AI scams as I will refer to them. I understand that this topic has not been debated specifically in this House before, but it has been referenced in multiple debates. I can understand why this topic is new. At one point it may well have been science fiction, but now it is science fact. Not only that, it is probably a matter of fact that society is increasingly at risk of technology-driven crime and criminality. A new category, which I call AI-assisted criminals and AI-assisted crime, is emerging. They can operate anywhere in the world, know everything about their chosen victim and be seemingly invisible to detection. This AI-assisted crime is growing and becoming ever more sophisticated. I will share some examples in my speech, but let us address the bigger picture before I begin.

First, I appreciate that this entire debate may be new to many. What exactly is an AI scam? Why do consumers even need to be protected from something that many would argue does not yet exist? Let us step back slightly to explain the bigger picture. We live in a world where social media is everywhere: in our lives, our homes and our pockets. Social media has connected communities in ways we never thought possible. But for all the positives, it is also, as I saw as a member of the Online Safety Public Bill Committee, full of risk and harms. We share our thoughts, our connections and, most notably, our data. I am confident that if any Government asked citizens to share the same personal data that many give away for free to social media platforms, there would be uproar and probably marches on the streets; but every day, for the benefit of free usage, relevant advertisements and, ultimately, convenience, our lives are shared by us, in detail, with friends and family and, in some cases, the entire world.

We have, ultimately, become data sources, and my fear is that this data—this personal data—will be harvested increasingly for use with AI for criminal purposes. When I say “data”, I do not just mean a person’s name or birth date, the names of friends, family and colleagues, their job or their place of work, but their face, their voice, their fears and their hopes, their very identity.

Jim Shannon (Strangford) (DUP)

I congratulate the hon. Gentleman on raising this issue. There were 5,400 cases of fraud in Northern Ireland last year, which cost us some £23.1 million. There is the fraud experienced by businesses when fraudsters pose as legitimate organisations seeking personal or financial details, there is identity theft, and now there are the AI scams that require consumer protection. Does the hon. Gentleman agree that more must be done to ensure that our vulnerable and possibly older constituents are aware of the warning signs to look out for, in order to protect them and their hard-earned finances from scammers and now, in particular, the AI scamming that could lead to a tragedy for many of those elderly and vulnerable people?

Dean Russell

I absolutely agree with the hon. Gentleman. I fear that this is yet another opportunity for criminals to scam the most vulnerable, and that it will reach across the digital divide in ways that we cannot even imagine. As I have said, this concerns the very identity that we have online. This data can ultimately be harvested by criminals to scam, to fool, to threaten or even to blackmail. The victims send their hard-earned cash to the criminals before the criminals disappear into the ether-net.

Some may argue that I am fearmongering and that I am somehow against progress, but I am not. I see the vast benefits of AI. I see the opportunities in healthcare for early diagnosis, improving patients’ experience, enabling a single-patient view across health and social care so that disparate systems can work together and treatment involves not just individual body parts, but individuals themselves. AI will improve efficiencies in business through customer service and personalisation, and will do so many other wonderful things. It will, for instance, create a new generation of jobs and opportunities. However, we must recognise that AI is like fire: it can be both good and bad. Fire can warm our home and keep us safe, or, unwatched, can burn it down. The rapidly emerging harms that I am raising are so fast-moving that we may be engulfed by them before we realise the risks.

I am not a lone voice on this. Back in 2020, the Dawes Centre for Future Crime at UCL produced a report on AI-enabled future crime. It placed audio/visual impersonation at the top of the list of “high concern” crimes, along with tailored phishing and large-scale blackmail. More recently, in May 2023, a McAfee cybersecurity artificial intelligence report entitled “Beware the Artificial Impostor” shared the risks of voice clones and deepfakes, and revealed how common AI voice scams were, reaching many more people in their lives and their homes. Only a quarter of adults surveyed had shared experiences of such a scam, although that will increase over time, and only 36% of the adults questioned had even heard of voice-enabled scams. The practice is growing more rapidly than the number of people who are aware that it exists in the first place. I will share my thoughts on education and prevention later in my speech.

Increasingly online there are examples of deepfakes and AI impersonation being used both for entertainment and as warnings. Many will now have heard of a deepfake, from a “Taylor Swift” supposedly selling kitchenware, to various actors being replaced by deepfakes in famous roles—Jim Carrey in “The Shining”, for example. Many may be viewed as a bit of fun to watch, until one realises the dangers and risks that AI such as deepfakes and cloned audio can pose. An example is the frightening deepfake video of Volodymyr Zelensky that was broadcast on hacked Ukrainian TV falsely ordering the country’s troops to surrender to Russia. Thankfully, people spotted it and knew that it was not real. We also know that there are big risks for the upcoming elections here, in the US and elsewhere in the world, and for democracy itself. The challenge is that the ease with which convincing deepfakes and cloned voices can be made is rapidly opening up scam opportunities on an unprecedented scale, affecting not only politicians and celebrities but individuals in their own homes.

The challenge we face is that fraudsters are often not close to home. A recent report by Which? pointed out that the City of London police estimates that over 70% of fraud experienced by UK victims could have an international component, either involving offenders in the UK and overseas working together or the fraud being driven solely by a fraudster based outside the UK. Which? also shared how AI tools such as ChatGPT and Bard can be used to create convincing corporate emails from the likes of PayPal that could be misused by unscrupulous fraudsters. In this instance, such AI-assisted crime is simply an extension of the existing email fraud and scams we are already used to. If we imagine that it is not emails from a corporation but video calls or cloned voice messages from loved ones, we might suddenly see the scale of the risk.

I am aware that I have been referring to various reports and stories, but let me please give some context to what these scams can look like in real life. Given the time available, I shall give just a couple of recent examples reported by the media. Perhaps one of the most extreme was reported in The Independent. In the US, a mother from Arizona shared her story with a local news show on WKYT. She stated that she had picked up a call from an unknown number and heard what she believed to be her 15-year-old daughter “sobbing”. The voice on the other end of the line said, “Mom, I messed up”, before a male voice took over and made threatening demands. She shared that

“this man gets on the phone, and he’s like, ‘Listen here, I’ve got your daughter’.”

The apparent kidnapper then threatened the mother and the daughter. In the background, the mother said she could hear her daughter saying:

“Help me, mom, please help me,”

and crying. The mother stated:

“It was 100% her voice. It was never a question of who is this? It was completely her voice, it was her inflection, it was the way she would have cried—I never doubted for one second it was her. That was the freaky part that really got me to my core.”

The apparent kidnapper demanded money for the release of the daughter. The mother only realised that her daughter was safe after a friend called her husband and confirmed that that was the case. This was an AI deepfake that cloned her daughter’s voice in order to blackmail and threaten.

Another example was reported in the Daily Mail. A Canadian couple were targeted by an AI voice scam and lost 21,000 Canadian dollars. This AI scam targeted parents who were tricked by a convincing AI clone of their son’s voice telling them that he was in jail for killing a diplomat in a car crash. The AI caller stated that they needed 21,000 Canadian dollars for legal fees before going to court, and the frightened parents collected the cash from several banks and sent the scammer the money via Bitcoin. In this instance, the report shared that the parents filed a police report once they realised that they had been scammed. They said:

“The money’s gone. There’s no insurance. There’s no getting it back. It’s gone.”

These examples, in my view, are the canary in the mine.

I am sure that, over recent years, we have all received at least one scam text message. They are usually pretty unconvincing, but that is because they are dumb messages, in the sense that there is no context. But let us imagine that, like the examples I have mentioned, the message is not a text but a phone call or even a video call and that we can see a loved one’s face or hear their voice. The conversation could be as real as it would be if we were speaking to that loved one in person. Perhaps they will ask how we are. Perhaps they will mention something we recently did together, an event we attended, a nickname we use or even a band that we are a fan of—something that we would think only a friend or family member would know. On the call, they might say that they were in trouble and ask us to send £10 or perhaps £100 as they have lost their bank card, or ask for some personal banking information because it is an emergency. I am sure that many people would not think twice about helping a loved one, only to find out that the person they spoke to was not real but an AI scam, and that the information the person spoke about with an AI-cloned voice was freely available on the victim’s Facebook page or elsewhere online.

Imagine that this scam happens not to one person but to hundreds of thousands of people within the space of a few minutes. These AI-assisted criminals could make hundreds of thousands of pounds, perhaps millions of pounds, before anyone worked out that they had been scammed. The AI technology to do this is already here and will soon be unleashed, so we need to protect consumers now, before it arrives on everyone’s phone, and before it impacts our constituents and even our economy in ways that we cannot imagine.

Because of the precise topic of the debate, I will not stray too far into how this technology raises major concerns for the upcoming election. We could easily debate for hours the risk of people receiving a call from a loved one on the day of the election convincing them to vote a different way, or not to vote at all.

Everything that I have said today is borne out by the evidence and predictions. The Federal Trade Commission has already warned that AI is being used to “turbocharge” scams, so it is just a matter of time, and time is running out. How do we protect consumers from AI scams? First, I am aware that the Government are on the front foot with AI. I was fortunate to attend the Prime Minister’s speech on AI last year—a speech that I genuinely believe will be considered in decades to come to be one of the most important made by a Prime Minister because, amid all the global challenges we face, he was looking to a long-term challenge that we did not know we were facing.

I appreciate that the Government have said that they expect to have robust mechanisms in place to stop the spread of AI-powered disinformation before the general election, but the risks of deepfakes go far and wide, and the economic impact of AI scams is already predicted by some media outlets to run into the billions. The Daily Hodl reports that the latest numbers from the US Federal Trade Commission show that imposter scams accounted for $2.6 billion of losses in 2022.

The Secretary of State for Science, Innovation and Technology has said that the rise of generative AI, which can be used to create written, audio and video content, has “made it easier” for people to create “more sophisticated” misleading content and “amplifies an existing risk” around online disinformation.

With the knowledge that the Government are ahead of the game on AI, I ask that the Minister, who knows this topic inside out, considers some simple measures. First, will he consider legislation, guidelines or simple frameworks to create a “Turing clause”? Everyone knows that Turing said technology would one day be able to fool humans, and that time seems to be here. The principle of a Turing clause would be that any application or use of AI where the intention is to pretend to be a human must be clearly labelled. I believe we can begin this by encouraging all Government Departments, and all organisations that work with the Government, to have clear labelling. A simple example would be chatbots. It must be clearly identified where a person is speaking to an AI, not to a real human being.

Secondly, I believe there is a great opportunity for the Government to support research and development within the industry to create accredited antivirus-style AI detection for use in phones, computers and other technology. This would be similar to the rise of antivirus software in the early days of the world wide web. The technology’s premise would be to help to identify the risk that AI is being used in any communication with an individual. For example, the technology could be used to provide a contextual alert that a phone call, text message or other communication might be AI generated or manipulated, such as a call from a supposed family member received from an unknown phone number. In the same way that antivirus software warns computer users of malware risks, this could become a commonplace system that allows the public to be alerted to AI risks, and it could position the UK as a superpower in policing AI around the world. We could create the technologies that other countries use to protect their citizens by, in effect, creating AI policing and alert systems.

Thirdly, I would like to find out what, if any, engagement is taking place with insurance companies and banks to make sure they protect consumers affected by AI scams. I am conscious that the AI scams that are likely to convince victims will most likely get them to do things willingly, so it is much harder for consumers to be protected because before they even realise they have been fooled by what they believe is a loved one but is in fact an AI voice clone or video deepfake, they will have already given over their money. I do not want insurance companies and banks to use that against our consumers and the public, when they have been fooled by something that is incredibly sophisticated.

A further ask relates to the fact that prevention is better than cure. We therefore need to help the public to identify AI scams, for example by suggesting that they use a codeword when speaking to loved ones on the phone or via video calls, so that they know they are real. The public should be cautious about unknown callers; we need to make them aware that an unknown number is the most likely route for a phone call that uses a deepfake or a cloned voice and puts them at risk. We should also encourage people not to act too quickly when asked to transfer money. As stated by the hon. Member for Strangford (Jim Shannon), the most vulnerable will be the older people in society—those who are most worried about these things. We need to make sure they are aware of what is possible and to make it clear that this is not science fiction, but science fact.

Finally, I appreciate that this falls under a Department different from the Minister’s, but I would like to understand what mechanisms, both via policing and through the courts, are being explored to both deter and track down AI-assisted crime and criminals, so that we can not only find the individuals who are pushing and creating this technology—they will, no doubt, be those in serious and organised crime gangs—but shut down their technologies at source.

To conclude, unlike some, I do not subscribe to the belief that “The end of the world is nigh,” or even that “The end of the world is AI.” I hope Members excuse the pun. However, it would be wrong not to be wary of the risks that we know about and the fact that there are many, many unknown unknowns in this space. Our ability to be nimble in the face of growing risks is a must, and spotting early warning signs, several of which I have outlined today, is essential. We may not see this happen every day now, but there is a real risk that in the next year or two, and definitely within a decade, we will see it on a very regular basis, in ways that even I have not been able to predict today. So we need to look beyond the potential economic and democratic opportunities, to the potential economic and democratic harms that AI could inflict on us all.

Scams such as those I have outlined could ruin people’s lives—mentally, financially and in so many other ways. If it is not worth doing all we can now to avoid that, I do not know when the right time is. So, along with responding to my points, will the Minister recommend that colleagues throughout the House become familiar with the risk of AI scams so that they can warn their constituents? I ask Members also to consider joining the fantastic all-party group on artificial intelligence, which helps these things—the scams, the opportunity and much more—to be discussed regularly. I thank the Minister for his time and look forward to hearing his response.

21:38
The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Saqib Bhatti)

First, let me put on the record how pleased I was to see my hon. Friend the Member for Watford (Dean Russell) back in his place, having heard about his health issues. I say that not just because his parents are constituents of mine or because he was born and brought up in my constituency, but because he is a dear friend of mine.

I thank my hon. Friend for securing this debate and raising the important issue of AI scams and the use of AI to defraud or manipulate people. I assure him that the Government take the issue very seriously. Technology is a fast-moving landscape and the pace of recent developments in artificial intelligence exemplifies the challenge with which we are presented when it comes to protecting our society.

I will start by being very clear: safely deployed, AI will bring great benefits and promises to revolutionise our economy, society and everyday lives. That includes benefits for fraud prevention, on which we are working closely with the Home Office and other Departments across Government. Properly used, AI can and does form the heart of systems that manage risk, detect suspect activity and prevent millions of scam texts from reaching potential victims. However, as my hon. Friend rightly identified, AI also brings challenges. To reap the huge social and economic benefits of AI, we must manage the risk that it presents. To do so, and thereby maintain public trust in these technologies, is key to effectively developing, deploying and adopting AI.

In the long term, AI provides the means to enhance and upscale the ability of criminals to defraud. Lone individuals could be enabled to operate like an organised crime gang, conducting sophisticated, personalised fraud operations at scale, and my hon. Friend spoke eloquently about some of the risks of AI. The Government have taken a technology-neutral approach. The Online Safety Act 2023 will provide significant protections from online fraud, including where AI has been used to perpetrate a scam. More broadly, on the services it regulates, the Act will regulate AI-generated content in much the same way that it regulates content created by humans.

Under the Online Safety Act, all regulated services will be required to take proactive action to tackle fraud facilitated through user-generated content. I am conscious that my hon. Friend may have introduced a new phrase into the lexicon when he spoke of AI-assisted criminals. I am confident that the Online Safety Act will be key to tackling fraud when users share AI-generated content with other users. In addition, the Act will mandate an additional duty for the largest and most popular platforms to prevent fraudulent paid-for advertising appearing on their services. This represents a major step forward in ensuring that internet users are protected from scams.

The Government are taking broader action on fraud, beyond the Online Safety Act. In May 2023, the Home Office published a fraud strategy to address the threat of fraud. The strategy sets out an ambitious and radical plan for how the Government, law enforcement, regulators, industry and charities will work together to tackle fraud.

On the points raised by the hon. Member for Strangford (Jim Shannon), the Government are working with industry to remove the vulnerabilities that fraudsters exploit, with intelligence agencies to shut down fraudulent infrastructure, and with law enforcement to identify and bring the most harmful offenders to justice. We are also working with all our partners to ensure that the public have the advice and support that they need.

The fraud strategy set an ambitious target to cut fraud by 10% from 2019 levels, down to 3.3 million fraud incidents by the end of this Parliament. Crime survey data shows that we are currently at this target level, but we are not complacent and we continue to take action to drive down fraud. Our £100 million investment in law enforcement and the launch of a new national fraud squad will help to catch more fraudsters. We are working with industry to block fraud, including by stopping fraudsters exploiting calls and texts to target victims. We have already blocked more than 870 million scam texts from reaching the public, and the strategy will enable us to go much further.

Social media companies should carefully consider the legality of different types of data scraping and implement measures to protect against unlawful data scraping. They also have data protection obligations concerning third-party scraping from their websites, which we are strengthening in the Data Protection and Digital Information Bill. That Bill will hit rogue firms that hound people with nuisance calls with tougher fines. The maximum fine is currently £500,000; under the Bill, it will rise to 4% of global turnover or £17.5 million, whichever is greater, to better tackle rogue activities and punish those who pester people with unwanted calls and messages.

Jim Shannon

I thank the Minister for a comprehensive and detailed response to the hon. Member for Watford; it is quite encouraging. My intervention focused on the elderly and vulnerable—what can be done for those who fall specifically into that category?

Saqib Bhatti

It is a great honour to be intervened on by the hon. Gentleman, who makes an important point. The Government will be doing more awareness raising, which will be key. I am willing to work with the hon. Gentleman to ensure that we make progress, because it is a key target that we must achieve.

Consumers are further protected by the Privacy and Electronic Communications (EC Directive) Regulations 2003, which govern the rules for direct marketing by electronic means. Under these regulations, organisations must not send marketing texts, phone calls or emails to individuals without their specific prior consent. We are also strengthening these regulations, so that anyone making unwanted marketing calls that could cause harm or disturbance to individuals can be fined, even if the calls go unanswered by victims.

Beyond legislation, the Home Office and the Prime Minister’s anti-fraud champion worked with leading online service providers to create an online fraud charter. The charter, which was launched in November last year, sets out voluntary commitments from some of the largest tech firms in the world to reduce fraud on their platforms and services and to raise best practice across the sector.

This includes commitments to improve the blocking of fraud at source, making reporting fraud easier for users and being more responsive in removing content and ads found to be fraudulent. The charter will also improve intelligence sharing and better educate users about the risks on platforms and services, in response to the point made by the hon. Member for Strangford.

Public awareness is a key defence against all fraud, whether or not AI-enabled. As set out in the fraud strategy, we have been working with leading counter-fraud experts and wider industry to develop an eye-catching public comms campaign, which we anticipate going live next month. This will streamline fraud communications and help people spot and take action to avoid fraud.

None the less, it is important to remember that this work is taking place in a wider context. The UK is leading the way in ensuring that AI is developed in a responsible and safe way, allowing UK citizens to reap the benefits of this new technology while being protected from its harms. In March last year, we published the AI regulation White Paper, which sets out principles for the responsible development of AI in the UK. These principles, such as safety and accountability, are at the heart of our approach to ensure the responsible development and use of AI.

The UK Government showed international leadership in this space when we hosted the world’s first major AI safety summit last year at Bletchley Park. This was a landmark event where we brought together a globally representative group of world leaders, businesses, academia and civil society to unite for crucial talks to explore and build consensus on collective international action, which would promote safety at the frontier of AI.

We recognise the concerns around AI models generating large volumes of content that is indistinguishable from human-generated pictures, voice recordings or videos. Enabling users and institutions to determine what media is real is a key part of tackling a wide range of AI risks, including fraud. My hon. Friend has brought forward the idea of labelling to make it clear when AI is used. The Government have a strong track record of supporting private sector innovation, including in this field. Innovations from the safety tech sector will play a central role in providing the technologies that online companies need to protect their users from harm and to shape a safer internet.

Beyond that, Government support measures provide a blueprint for supporting other solutions to keep users safe, such as championing research into the art of the possible, including via the annual UK Safety Tech sectoral analysis report, and driving innovative solutions via challenge funds in partnership with GCHQ and the Home Office.

DSIT has already published best practices relating to AI identifiers, which can aid the identification of AI-generated content, in the “Emerging processes for frontier AI safety” document, which was published ahead of the AI Safety Summit. In the light of that, DSIT continues to investigate the potential for detecting and labelling AI-generated content. That includes both assessing technical evidence on the feasibility of such detection and the levers that we have as policymakers to ensure that it is deployed in a beneficial way. More broadly, last year the Government announced £100 million to set up an expert taskforce to help the UK to adopt the next generation of safe AI—the very first of its kind. The taskforce has now become the AI Safety Institute, which is convening a new global network and facilitating collaboration across international partners, industry and civil society. The AI Safety Institute is engaging with leading AI companies that are collaboratively sharing access to their AI models for vital safety research.

We are making the UK the global centre of AI safety—a place where companies at the frontier know that the guardrails are in place for them to seize all the benefits of AI while mitigating the risks. As a result, the UK remains at the forefront of developing cutting-edge technologies to detect and mitigate online harms. UK firms already have a 25% market share in global safety tech sectors. AI creates new risks, but as I have set out it also has the potential to super-charge our response to tackling fraud and to make our everyday lives better. The Government are taking action across a range of areas to ensure that we manage the risks and capitalise on the benefits of these new technologies. I thank all Members who have spoken in the debate, and I again thank my hon. Friend the Member for Watford for introducing this debate on AI scams, which I assure him, and the House, are a Government priority.

Question put and agreed to.

21:49
House adjourned.