First, let me put on the record how pleased I was to see my hon. Friend the Member for Watford (Dean Russell) back in his place, having heard about his health issues. I say that not just because his parents are constituents of mine or because he was born and brought up in my constituency, but because he is a dear friend of mine.
I thank my hon. Friend for securing this debate and raising the important issue of AI scams and the use of AI to defraud or manipulate people. I assure him that the Government take the issue very seriously. The technology landscape is fast-moving, and the pace of recent developments in artificial intelligence exemplifies the challenge we face in protecting our society.
I will start by being very clear: safely deployed, AI will bring great benefits and promises to revolutionise our economy, society and everyday lives. That includes benefits for fraud prevention, on which we are working closely with the Home Office and other Departments across Government. Properly used, AI can and does form the heart of systems that manage risk, detect suspicious activity and prevent millions of scam texts from reaching potential victims. However, as my hon. Friend rightly identified, AI also brings challenges. To reap the huge social and economic benefits of AI, we must manage the risks that it presents. Doing so, and thereby maintaining public trust in these technologies, is key to developing, deploying and adopting AI effectively.
In the wrong hands, AI provides the means to enhance and upscale the ability of criminals to defraud. Lone individuals could be enabled to operate like an organised crime gang, conducting sophisticated, personalised fraud operations at scale, and my hon. Friend spoke eloquently about some of the risks of AI. The Government have taken a technology-neutral approach. The Online Safety Act 2023 will provide significant protections from online fraud, including where AI has been used to perpetrate a scam. More broadly, on the services it regulates, the Act will regulate AI-generated content in much the same way that it regulates content created by humans.
Under the Online Safety Act, all regulated services will be required to take proactive action to tackle fraud facilitated through user-generated content. I am conscious that my hon. Friend may have introduced a new phrase into the lexicon when he spoke of AI-assisted criminals. I am confident that the Online Safety Act will be key to tackling fraud when users share AI-generated content with other users. In addition, the Act will place a duty on the largest and most popular platforms to prevent fraudulent paid-for advertising from appearing on their services. This represents a major step forward in ensuring that internet users are protected from scams.
The Government are taking broader action on fraud, beyond the Online Safety Act. In May 2023, the Home Office published a fraud strategy to address that threat. The strategy sets out an ambitious and radical plan for how the Government, law enforcement, regulators, industry and charities will work together to tackle fraud.
On the points raised by the hon. Member for Strangford (Jim Shannon), the Government are working with industry to remove the vulnerabilities that fraudsters exploit, with intelligence agencies to shut down fraudulent infrastructure, and with law enforcement to identify and bring the most harmful offenders to justice. We are also working with all our partners to ensure that the public have the advice and support that they need.
The fraud strategy set an ambitious target to cut fraud by 10% from 2019 levels, down to 3.3 million fraud incidents by the end of this Parliament. Crime survey data shows that we are currently at this target level, but we are not complacent and we continue to take action to drive down fraud. Our £100 million investment in law enforcement and the launch of a new national fraud squad will help to catch more fraudsters. We are working with industry to block fraud, including by stopping fraudsters exploiting calls and texts to target victims. We have already blocked more than 870 million scam texts from reaching the public, and the strategy will enable us to go much further.
Social media companies should carefully consider the legality of different types of data scraping and implement measures to protect against unlawful scraping. They also have data protection obligations concerning third-party scraping from their websites, which we are strengthening in the Data Protection and Digital Information Bill. That Bill will impose tougher fines on rogue firms that hound people with nuisance calls. The maximum fine is currently £500,000; under the Bill, it will rise to 4% of global turnover or £17.5 million, whichever is greater, to better tackle rogue activities and punish those who pester people with unwanted calls and messages.
I thank the Minister for a comprehensive and detailed response to the hon. Member for Watford; it is quite encouraging. My intervention focused on the elderly and vulnerable—what can be done for those who fall specifically into that category?
It is a great honour to be intervened on by the hon. Gentleman, who makes an important point. The Government will be doing more awareness raising, which will be key. I am willing to work with the hon. Gentleman to ensure that we make progress, because it is a key target that we must achieve.
Consumers are further protected by the Privacy and Electronic Communications (EC Directive) Regulations 2003, which govern the rules for direct marketing by electronic means. Under those regulations, organisations must not send marketing texts or emails, or make marketing calls, to individuals without their specific prior consent. We are also strengthening the regulations, meaning that anyone making unwanted marketing calls that could cause harm or disturbance to individuals can be fined, even if the calls go unanswered.
Beyond legislation, the Home Office and the Prime Minister’s anti-fraud champion worked with leading online service providers to create an online fraud charter. The charter, which was launched in November last year, sets out voluntary commitments from some of the largest tech firms in the world to reduce fraud on their platforms and services and to raise best practice across the sector.
That includes commitments to improve the blocking of fraud at source, to make reporting fraud easier for users and to be more responsive in removing content and advertisements found to be fraudulent. The charter will also improve intelligence sharing and better educate users about the risks on platforms and services, in response to the point made by the hon. Member for Strangford.
Public awareness is a key defence against all fraud, whether or not it is AI-enabled. As set out in the fraud strategy, we have been working with leading counter-fraud experts and wider industry to develop an eye-catching public communications campaign, which we anticipate going live next month. It will streamline fraud communications and help people to spot fraud and take action to avoid it.
None the less, it is important to remember that this work is taking place in a wider context. The UK is leading the way in ensuring that AI is developed in a responsible and safe way, allowing UK citizens to reap the benefits of this new technology while being protected from its harms. In March last year, we published the AI regulation White Paper, which sets out principles for the responsible development of AI in the UK. Those principles, such as safety and accountability, are at the heart of our approach to ensuring the responsible development and use of AI.
The UK Government showed international leadership in this space when we hosted the world’s first major AI safety summit at Bletchley Park last year. That landmark event brought together a globally representative group of world leaders, businesses, academia and civil society for crucial talks to explore and build consensus on collective international action to promote safety at the frontier of AI.
We recognise the concerns about AI models generating large volumes of pictures, voice recordings and videos that are indistinguishable from human-generated content. Enabling users and institutions to determine what media is real is a key part of tackling a wide range of AI risks, including fraud. My hon. Friend has brought forward the idea of labelling to make it clear when AI has been used. The Government have a strong track record of supporting private sector innovation, including in this field. Innovations from the safety tech sector will play a central role in providing the technologies that online companies need to protect their users from harm and to shape a safer internet.
Beyond that, Government support measures provide a blueprint for other solutions to keep users safe, such as championing research into the art of the possible, including via the annual UK Safety Tech sectoral analysis report, and driving innovative solutions through challenge funds in partnership with GCHQ and the Home Office.
DSIT has already published best practices relating to AI identifiers, which can aid the identification of AI-generated content, in the “Emerging processes for frontier AI safety” document, which was published ahead of the AI safety summit. In the light of that, DSIT continues to investigate the potential for detecting and labelling AI-generated content. That includes assessing both the technical evidence on the feasibility of such detection and the levers that we have as policymakers to ensure that it is deployed in a beneficial way. More broadly, last year the Government announced £100 million to set up an expert taskforce, the very first of its kind, to help the UK to adopt the next generation of safe AI. The taskforce has now become the AI Safety Institute, which is convening a new global network and facilitating collaboration across international partners, industry and civil society. The AI Safety Institute is engaging with leading AI companies, which are collaboratively sharing access to their AI models for vital safety research.
We are making the UK the global centre of AI safety: a place where companies at the frontier know that the guardrails are in place for them to seize all the benefits of AI while mitigating the risks. As a result, the UK remains at the forefront of developing cutting-edge technologies to detect and mitigate online harms; UK firms already hold a 25% share of the global safety tech market. AI creates new risks but, as I have set out, it also has the potential to supercharge our response to tackling fraud and to make our everyday lives better. The Government are taking action across a range of areas to ensure that we manage the risks and capitalise on the benefits of these new technologies. I thank all Members who have spoken in the debate, and I again thank my hon. Friend the Member for Watford for introducing this debate on AI scams, which I assure him, and the House, are a Government priority.
Question put and agreed to.