(10 months, 1 week ago)
Commons Chamber
I am grateful for the opportunity to raise this important topic of protecting consumers from artificial intelligence scams, or AI scams as I will refer to them. I understand that this topic has not been debated specifically in this House before, but it has been referenced in multiple debates. I can understand why this topic is new. At one point it may well have been science fiction, but now it is science fact. Not only that, it is probably a matter of fact that society is increasingly at risk of technology-driven crime and criminality. A new category, which I call AI-assisted criminals and AI-assisted crime, is emerging. They can operate anywhere in the world, know everything about their chosen victim and be seemingly invisible to detection. This AI-assisted crime is growing and becoming ever more sophisticated. I will share some examples in my speech, but let us address the bigger picture before I begin.
First, I appreciate that this entire debate may be new to many. What exactly is an AI scam? Why do consumers even need to be protected from something that many would argue does not yet exist? Let us step back slightly to explain the bigger picture. We live in a world where social media is everywhere: in our lives, our homes and our pockets. Social media has connected communities in ways we never thought possible. But for all the positives, it is also, as I saw as a member of the Online Safety Public Bill Committee, full of risk and harms. We share our thoughts, our connections and, most notably, our data. I am confident that if any Government asked citizens to share the same personal data that many give away for free to social media platforms, there would be uproar and probably marches on the streets; but every day, for the benefit of free usage, relevant advertisements and, ultimately, convenience, our lives are shared by us, in detail, with friends and family and, in some cases, the entire world.
We have, ultimately, become data sources, and my fear is that this data—this personal data—will be harvested increasingly for use with AI for criminal purposes. When I say “data”, I do not just mean a person’s name or birth date, the names of friends, family and colleagues, their job or their place of work, but their face, their voice, their fears and their hopes, their very identity.
I congratulate the hon. Gentleman on raising this issue. There were 5,400 cases of fraud in Northern Ireland last year, which cost us some £23.1 million. There is the fraud experienced by businesses when fraudsters pose as legitimate organisations seeking personal or financial details, there is identity theft, and now there are the AI scams that require consumer protection. Does the hon. Gentleman agree that more must be done to ensure that our vulnerable and possibly older constituents are aware of the warning signs to look out for, in order to protect them and their hard-earned finances from scammers and now, in particular, the AI scamming that could lead to a tragedy for many of those elderly and vulnerable people?
I absolutely agree with the hon. Gentleman. I fear that this is yet another opportunity for criminals to scam the most vulnerable, and that it will reach across the digital divide in ways that we cannot even imagine. As I have said, this concerns the very identity that we have online. This data can ultimately be harvested by criminals to scam, to fool, to threaten or even to blackmail. The victims send their hard-earned cash to the criminals before the criminals disappear into the ether-net.
Some may argue that I am fearmongering and that I am somehow against progress, but I am not. I see the vast benefits of AI. I see the opportunities in healthcare for early diagnosis, improving patients’ experience, enabling a single-patient view across health and social care so that disparate systems can work together and treatment involves not just individual body parts, but individuals themselves. AI will improve efficiencies in business through customer service and personalisation, and will do so many other wonderful things. It will, for instance, create a new generation of jobs and opportunities. However, we must recognise that AI is like fire: it can be both good and bad. Fire can warm our home and keep us safe, or, unwatched, can burn it down. The rapidly emerging harms that I am raising are so fast-moving that we may be engulfed by them before we realise the risks.
I am not a lone voice on this. Back in 2020, the Dawes Centre for Future Crime at UCL produced a report on AI-enabled future crime. It placed audio/visual impersonation at the top of the list of "high concern" crimes, along with tailored phishing and large-scale blackmail. More recently, in May 2023, a McAfee cybersecurity artificial intelligence report entitled "Beware the Artificial Impostor" shared the risks of voice clones and deepfakes, and revealed how common AI voice scams have become, reaching people in their lives and their homes. Only a quarter of the adults surveyed reported experience of such a scam, although that will increase over time, and only 36% of the adults questioned had even heard of voice-enabled scams. The practice is growing more rapidly than the number of people who are aware that it exists in the first place. I will share my thoughts on education and prevention later in my speech.
Increasingly online there are examples of deepfakes and AI impersonation being used both for entertainment and as warnings. Many will now have heard of a deepfake, from a “Taylor Swift” supposedly selling kitchenware, to various actors being replaced by deepfakes in famous roles—Jim Carrey in “The Shining”, for example. Many may be viewed as a bit of fun to watch, until one realises the dangers and risks that AI such as deepfakes and cloned audio can pose. An example is the frightening deepfake video of Volodymyr Zelensky that was broadcast on hacked Ukrainian TV falsely ordering the country’s troops to surrender to Russia. Thankfully, people spotted it and knew that it was not real. We also know that there are big risks for the upcoming elections here, in the US and elsewhere in the world, and for democracy itself. The challenge is that the ease with which convincing deepfakes and cloned voices can be made is rapidly opening up scam opportunities on an unprecedented scale, affecting not only politicians and celebrities but individuals in their own homes.
The challenge we face is that fraudsters are often not necessarily close to home. A recent report by Which? pointed out that the City of London police estimates that over 70% of fraud experienced by UK victims could have an international component, either involving offenders in the UK and overseas working together or the fraud being driven solely by a fraudster based outside the UK. Which? also shared how AI tools such as ChatGPT and Bard can be used to create convincing corporate emails from the likes of PayPal that could be misused by unscrupulous fraudsters. In this instance, such AI-assisted crime is simply an extension of the existing email fraud and scams we are already used to. If we imagine that it is not emails from a corporation but video calls or cloned voice messages from loved ones, we might suddenly see the scale of the risk.
I am aware that I have been referring to various reports and stories, but let me please give some context to what these scams can look like in real life. Given the time available, I shall give just a couple of recent examples reported by the media. Perhaps one of the most extreme was reported in The Independent. In the US, a mother from Arizona shared her story with a local news show on WKYT. She stated that she had picked up a call from an unknown number and heard what she believed to be her 15-year-old daughter “sobbing”. The voice on the other end of the line said, “Mom, I messed up”, before a male voice took over and made threatening demands. She shared that
“this man gets on the phone, and he’s like, ‘Listen here, I’ve got your daughter’.”
The apparent kidnapper then threatened the mother and the daughter. In the background, the mother said she could hear her daughter saying:
“Help me, mom, please help me,”
and crying. The mother stated:
“It was 100% her voice. It was never a question of who is this? It was completely her voice, it was her inflection, it was the way she would have cried—I never doubted for one second it was her. That was the freaky part that really got me to my core.”
The apparent kidnapper demanded money for the release of the daughter. The mother only realised that her daughter was safe after a friend called her husband and confirmed that that was the case. This had been an AI deepfake cloning her daughter's voice to blackmail and threaten her.
Another example was reported in the Daily Mail. A Canadian couple were targeted by an AI voice scam and lost 21,000 Canadian dollars. This AI scam targeted parents who were tricked by a convincing AI clone of their son’s voice telling them that he was in jail for killing a diplomat in a car crash. The AI caller stated that they needed 21,000 Canadian dollars for legal fees before going to court, and the frightened parents collected the cash from several banks and sent the scammer the money via Bitcoin. In this instance, the report shared that the parents filed a police report once they realised that they had been scammed. They said:
“The money’s gone. There’s no insurance. There’s no getting it back. It’s gone.”
These examples, in my view, are the canary in the mine.
I am sure that, over recent years, we have all received at least one scam text message. They are usually pretty unconvincing, but that is because they are dumb messages, in the sense that there is no context. But let us imagine that, like the examples I have mentioned, the message is not a text but a phone call or even a video call and that we can see a loved one’s face or hear their voice. The conversation could be as real as it would be if we were speaking to that loved one in person. Perhaps they will ask how we are. Perhaps they will mention something we recently did together, an event we attended, a nickname we use or even a band that we are a fan of—something that we would think only a friend or family member would know. On the call, they might say that they were in trouble and ask us to send £10 or perhaps £100 as they have lost their bank card, or ask for some personal banking information because it is an emergency. I am sure that many people would not think twice about helping a loved one, only to find out that the person they spoke to was not real but an AI scam, and that the information the person spoke about with an AI-cloned voice was freely available on the victim’s Facebook page or elsewhere online.
Imagine that this scam happens not to one person but to hundreds of thousands of people within the space of a few minutes. These AI-assisted criminals could make hundreds of thousands of pounds, perhaps millions of pounds, before anyone worked out that they had been scammed. The AI technology to do this is already here and will soon be unleashed, so we need to protect consumers now, before it arrives on everyone’s phone, and before it impacts our constituents and even our economy in ways that we cannot imagine.
Because of the precise topic of the debate, I will not stray too far into how this technology raises major concerns for the upcoming election. We could easily debate for hours the risk of people receiving a call from a loved one on the day of the election convincing them to vote a different way, or not to vote at all.
Everything that I have said today is borne out by the evidence and predictions. The Federal Trade Commission has already warned that AI is being used to “turbocharge” scams, so it is just a matter of time, and time is running out. How do we protect consumers from AI scams? First, I am aware that the Government are on the front foot with AI. I was fortunate to attend the Prime Minister’s speech on AI last year—a speech that I genuinely believe will be considered in decades to come to be one of the most important made by a Prime Minister because, amid all the global challenges we face, he was looking to a long-term challenge that we did not know we were facing.
I appreciate that the Government have said that they expect to have robust mechanisms in place to stop the spread of AI-powered disinformation before the general election, but the risks of deepfakes go far and wide, and the economic impact of AI scams is already predicted by some media outlets to run into the billions. The Daily Hodl reports that the latest numbers from the US Federal Trade Commission show that imposter scams accounted for $2.6 billion of losses in 2022.
The Secretary of State for Science, Innovation and Technology has said that the rise of generative AI, which can be used to create written, audio and video content, has “made it easier” for people to create “more sophisticated” misleading content and “amplifies an existing risk” around online disinformation.
With the knowledge that the Government are ahead of the game on AI, I ask that the Minister, who knows this topic inside out, considers some simple measures. First, will he consider legislation, guidelines or simple frameworks to create a “Turing clause”? Everyone knows that Turing said technology would one day be able to fool humans, and that time seems to be here. The principle of a Turing clause would be that any application or use of AI where the intention is to pretend to be a human must be clearly labelled. I believe we can begin this by encouraging all Government Departments, and all organisations that work with the Government, to have clear labelling. A simple example would be chatbots. It must be clearly identified where a person is speaking to an AI, not to a real human being.
Secondly, I believe there is a great opportunity for the Government to support research and development within the industry to create accredited antivirus-style AI detection for use in phones, computers and other technology. This would be similar to the rise of antivirus software in the early days of the world wide web. The technology's premise would be to help to identify the risk that AI is being used in any communication with an individual. For example, the technology could be used to provide a contextual alert that a phone call, text message or other communication might be AI generated or manipulated, such as a call from a supposed family member received from an unknown phone number. In the same way as antivirus software warns computer users of malware risks, this could become a commonplace system that allows the public to be alerted to AI risks, and it could position the UK as a superpower in policing AI around the world. We could create the technologies that other countries use to protect their citizens by, in effect, creating AI policing and alert systems.
Thirdly, I would like to find out what, if any, engagement is taking place with insurance companies and banks to make sure they protect consumers affected by AI scams. I am conscious that the most convincing AI scams will most likely get victims to act willingly, which makes it much harder for consumers to be protected: before they even realise they have been fooled by what they believed was a loved one but was in fact an AI voice clone or video deepfake, they will already have handed over their money. I do not want insurance companies and banks to use that against our consumers and the public, when they have been fooled by something that is incredibly sophisticated.
A further ask relates to the fact that prevention is better than cure. We therefore need to help the public to identify AI scams, for example, by suggesting that they use a codeword when speaking to loved ones on the phone or via video calls, so that they know they are real. The public should be cautious about unknown callers; we need to make them aware that a call from an unknown number is the most likely route for a deepfake or cloned-voice call that puts them at risk. We should also encourage people not to act too quickly when asked to transfer money. As stated by the hon. Member for Strangford (Jim Shannon), the most vulnerable will be the older people in society—those who are most worried about these things. We need to make sure they are aware of what is possible and to make it clear that this is about not science fiction, but science fact.
Finally, I appreciate that this falls under a Department different from the Minister’s, but I would like to understand what mechanisms, both via policing and through the courts, are being explored to both deter and track down AI-assisted crime and criminals, so that we can not only find the individuals who are pushing and creating this technology—they will, no doubt, be those in serious and organised crime gangs—but shut down their technologies at source.
To conclude, unlike some, I do not subscribe to the belief that “The end of the world is nigh,” or even that “The end of the world is AI.” I hope Members excuse the pun. However, it would be wrong not to be wary of the risks that we know about and the fact that there are many, many unknown unknowns in this space. Our ability to be nimble in the face of growing risks is a must, and spotting early warning signs, several of which I have outlined today, is essential. We may not see this happen every day now, but there is a real risk that in the next year or two, and definitely within a decade, we will see it on a very regular basis, in ways that even I have not been able to predict today. So we need to look beyond the potential economic and democratic opportunities, to the potential economic and democratic harms that AI could inflict on us all.
Scams such as those I have outlined could ruin people’s lives—mentally, financially and in so many other ways. If it is not worth doing all we can now to avoid that, I do not know when the right time is. So, along with responding to my points, will the Minister recommend that colleagues throughout the House become familiar with the risk of AI scams so that they can warn their constituents? I ask Members also to consider joining the fantastic all-party group on artificial intelligence, which helps these things—the scams, the opportunity and much more—to be discussed regularly. I thank the Minister for his time and look forward to hearing his response.
(1 year, 10 months ago)
Commons Chamber
I agree with my hon. Friend.
I am conscious of time. The bit that I really want to touch on is this legislation’s role with regard to growth and small businesses. In the different world that we live in nowadays, it is essential that our small businesses—I believe that they are about 99% of all our businesses—can be nimble. We used to talk about having a shop on every corner, and we now have businesses that can be in every corner of the world. We need to ensure that they can grow and that they are not burdened with spending most of their time doing admin and back-office stuff to fulfil legislation that is out of date and unnecessary. We need to know what that legislation is.
While most of the United Kingdom will benefit from the Bill, and my party will support the Government when it comes to the votes, Northern Ireland is being left behind due to the protocol, which the hon. Member for Stone (Sir William Cash) referred to. Does the hon. Gentleman agree that while we do these things tonight, we must ensure that the Northern Ireland Protocol Bill goes through so that the people of Northern Ireland have the same rights as the rest of us in the United Kingdom?
I thank the hon. Member—my friend—for his comments. Absolutely, we need to get that sorted, because it is essential that we move forward in the right way.
My point on small businesses is that, at the moment, they need staff to do extra things to deal with Government—admin, processes and all those different things—and if we relieved that stress and enabled them to be more nimble, they could spend more of their time selling and doing rather than filling out paperwork. That has got to be a good thing. When we look at this legislation, we must ensure that everything is fit for purpose, that there is a purpose to it and that we are being purposeful in implementing it.
There are thousands of laws on the statute book that are not essential or necessary. They are just there, and many hon. Members probably do not realise that they exist. That cannot be good for this country. It cannot be good for growth and it cannot be good in particular for small businesses and those who run those small businesses.
There is lots more that I would like to talk about, but I will finish. I absolutely support the Bill and look forward to seeing it go to the Lords. I hope that Opposition Members will see the benefits that it will bring to this country and that, when they talk about taking back control, they realise that this is at the heart of that.
(1 year, 11 months ago)
Commons Chamber
I thank the hon. Lady for her comments. That is absolutely true in rural areas, but also in urban areas. Bus services play a really important role in our communities. That role is not political. We do not catch a blue bus or a red bus or a yellow bus—we catch a bus. The reality is that we must all work together. We must find ways to ensure we serve our community in the best way we can.
I congratulate the hon. Member on securing the debate. I did text him before he came in to ask if it would be okay to make an intervention. When I saw the title of the debate I immediately thought of my constituency of Strangford, which is similar to his constituency of Watford. Speaking as an active representative of the rural constituency of Strangford, I have attempted to fight many battles for those who are the victims of reduced services, often without prior warning. They are often cancelled without any consultation. Does he not agree that the duty of care to isolated communities should demand at least some consultation and that if bus companies are not prepared to do that voluntarily, then this place must be the place to take action legally?
I thank the hon. Member for his comments. To be fair, he did not need to text me. I was hoping he would join the Adjournment debate—it would be very odd if he did not. I appreciate his comments and agree wholeheartedly. Surely the point of a timetable is to ensure that people know what time buses are coming. If that timetable changes, the people who use the bus should be consulted and asked about how it will impact them, not just seen as numbers on a spreadsheet. Having spoken to local residents, I was surprised to learn that there is not a Government or local government edict that bus users must be consulted before a change to the timetable, which would seem an obvious thing to do, so I wholeheartedly agree with his comments.
I have been actively engaging, talking and corresponding with organisations, whether Arriva or local government, so none of them will be surprised about the concerns I raise today in the Chamber. This is a constructive opportunity to say that I will not give up on raising these issues, but will work with them to ensure they are resolved in the best way possible for my constituents.
(1 year, 11 months ago)
Commons Chamber
The Minister might be of the same mind himself.
Through speaking in these debates, my office has seen an increase in correspondence from parents who are thankful that these difficult issues are being talked about. The world is changing and progressing, and if we are going to live in a world where we want to protect our children and our grandchildren—I have six grandchildren —and all other grandchildren who are involved in social media, the least we can do is make sure they are safe.
I commend the hon. Member for Batley and Spen (Kim Leadbeater) and others, including the hon. Member for Watford (Dean Russell), who have spoken about Zach’s law. We are all greatly impressed that we have that in the Bill through constructive lobbying. New clause 28, which the hon. Member for Rotherham (Sarah Champion) referred to, relates to advocacy for young people. That is an interesting idea, but I feel that advocacy should be for the parents first and not necessarily young people.
Ahead of the debate, I was in contact with the Royal College of Psychiatrists. It published a report entitled “Technology use and the mental health of children and young people”—new clause 16 is related to that—which was an overview of research into the use of screen time and social media by children and young teenagers. It has been concluded that excessive use of phones and social media by a young person is detrimental to their development and mental health—as we all know and as Members have spoken about—and furthermore that online abuse and bullying has become more prevalent because of that. The right hon. Member for Witham (Priti Patel) referred to those who are susceptible to online harm. We meet them every day, and parents tell me that our concerns are real.
A recent report by NHS Digital found that one in eight 11 to 16-year-olds reported that they had been bullied online. When parents contact me, they say that bullying online is a key issue for them, and the statistics come from those who choose to be honest and talk about it. Although the Government's role is to create a Bill that enables protection for our children, there is also an incredible role for schools, which can address bullying. My hon. Friend the Member for Upper Bann (Carla Lockhart) and I talked about some of the young people we know at school who have been bullied online. Schools have stepped in and stopped that, encouraging and protecting children, and they can play that role as well.
We have all read of the story of Molly Russell, who was only 14 years old when she took her life. Nobody in this House or outside it could have failed to be moved by her story. Her father stated that he strongly believed that the images, videos and information that she was able to access through Instagram played a crucial part in her life being cut short. The Bill must complete its passage and focus on strengthening protections online for children. Ultimately, the responsibility is on large social media companies to ensure that harmful information is removed, but the Bill puts the onus on us to hold social media firms to account and to ensure that they do so.
Harmful and dangerous content for children comes in many forms—namely, online abuse and exposure to self-harm and suicidal images. In addition, any inappropriate or sexual content has the potential to put children and young people at severe risk. The Bill is set to put provisions in place to protect victims in the sharing of nude or intimate photos. That is increasingly important for young people, who are potentially being groomed online and do not understand the full extent of what they are doing and the risks that come with that. Amendments have been tabled to ensure that, should such cases of photo sharing go to court, provisions are in place to ensure complete anonymity for the victims—for example, through video links in court, and so on.
I commend the right hon. Member for Basingstoke (Dame Maria Miller), who is not in her place, for her hard work in bringing forward new clause 48. Northern Ireland, along with England and Wales, will benefit from new clause 53, and I welcome the ability to hand down sentences of between six months and potentially five years.
Almost a quarter of girls who have taken a naked image have had their image sent to someone else online without their permission. Girls face very distinct and increased risks on social media, with more than four in five online grooming crimes targeting girls, and 97% of child abuse material featuring the sexual abuse of girls—wow, we really need to do something to protect our children and to give parents hope. There needs to be increased emphasis and focus on making children’s use of the internet safer by design. Once established, all platforms and services need to have the capacity and capability to respond to emerging patterns of sexual abuse, which often stem from photo sharing.
The Minister referred to terrorism and how terrorism can be promoted online. I intervened on him to mention the glorification of IRA terrorism and how that encourages further acts of terrorism and people who are susceptible to be involved. I am quite encouraged by the Minister’s response, and I think that we need to take a significant step. Some in Northern Ireland, for instance, try to rewrite history and use the glorification of terrorism for that purpose. We would like to see strengthening of measures to ensure that those involved in those acts across Northern Ireland are controlled.
In conclusion, there are many aspects of the Bill that I can speak in support of in relation to the benefits of securing digital protections for those on social media. This is, of course, about protecting not just children, but all of us from the dangers of social media. I have chosen to speak on these issues as they are often raised by constituents. There are serious matters regarding the glorification and encouragement of self-harm that the Bill needs to address. We have heard stories tonight that are difficult to listen to, because they are true stories from people we know, and we have heard horror stories about intimate photo sharing online. I hope that action on those issues, along with the many others that the Government are addressing, will be embedded in the Bill with the intent to finally ensure that we have regulations and protection for all people, especially our children—I think of my children and grandchildren, and like everybody else, my constituents.
I welcome the Minister to his place; I know that he will be excellent in this role, and it is incredible that he is so across the detail in such a short time.
I will primarily talk about new clause 53—that may not be that surprising, given how often it has been spoken about today—which is, ultimately, about Zach’s law. Zach is a truly heroic figure, as has been said. He is a young child with cerebral palsy, autism and epilepsy who was cruelly trolled by sick individuals who sent flashing images purposely to cause seizures and cause him damage. That was not unique to Zach, sadly; it happened to many people across the internet and social media. When somebody announced that they were looking for support, having been diagnosed with epilepsy, others would purposely identify that and target the person with flashing images to trigger seizures. That is absolutely despicable.
My hon. Friend the Member for Stourbridge (Suzanne Webb) has been my partner in crime—or in stopping the crime—over the past two years, and this has been a passion for us. Somebody said to me recently that we should perhaps do our victory lap in the Chamber today for the work that has been done to change the law, but Zach is the person who will get to go around and do that, as he did when he raised funds after he was first cruelly trolled.
My hon. Friend the Member for Folkestone and Hythe (Damian Collins) also deserves an awful lot of praise. My hon. Friend the Member for Stourbridge and I worked with him on the Joint Committee on the draft Online Safety Bill this time last year. It was incredible to work with Members of both Houses to look at how we can make the Bill better. I am pleased about the response to so many measures that we put forward, including the fact that we felt that the phrase “legal but harmful” created too many grey areas that would not catch the people who were doing these awful—what I often consider to be—crimes online to cause harm.
I want to highlight some of what has been done over the past two years to get Zach’s law to this point. If I ever write a memoir, I am sure that my diaries will not be as controversial as some in the bookshops today, but I would like to dedicate a chapter to Zach’s law, because it has shown the power of one individual, Zach, to change things through the democratic process in this House, to change the law for the entire country and to protect people who are vulnerable.
Not only was Zach’s case raised in the Joint Committee’s discussions, but afterwards my hon. Friend the Member for Stourbridge and I managed to get all the tech companies together on Zoom—most people will probably not be aware of this—to look at making technical changes to stop flashing images being sent to people. There were lots of warm words: lots of effort was supposedly put in so that we would not need a law to stop flashing images. We had Giphy, Facebook, Google, Twitter—all these billion-pound platforms that can do anything they want, yet they could not stop flashing images being sent to vulnerable people. I am sorry, but that is not the work of people who really want to make a difference. That is people who want to put profit over pain—people who want to ensure that they look after themselves before they look after the most vulnerable.
(3 years, 8 months ago)
Commons Chamber

I wanted to secure this Adjournment debate because of an issue that originated in Woodmere Avenue in my constituency but which has highlighted a more national issue associated with section 6 powers under the Traffic Management Act 2004. I feel a bit odd, because I introduced a ten-minute rule Bill earlier, so, a bit like the Minister, two buses have arrived at once for me today. This is my second long speech, but I will try not to make it too long.
I wanted to raise this issue for three reasons. First, I would like to highlight the issues with Woodmere Avenue in my constituency, the concerns of residents and why those are important. My second point is about the use of section 6, which I think could solve some of the issues in my local area, and I will highlight some broader issues. My third point is about the critical importance of the power of local people to have control and a say in what happens to them in their local area.
I will start with Woodmere Avenue. The issue began for me when I was campaigning, well before I was an MP. I had been out with a local campaigner called Carly Bishop, who had petitioned and spoken to local residents about issues in the area. Let me describe the situation. A width restriction has been in place on Woodmere Avenue in the Tudor ward of Watford for decades, and it is known as a bit of a landmark, but not in a positive way. A bus route runs through the middle of the road, and on either side there are width restrictions for cars. Increasingly, I hear from people who have scratched their car on those restrictions, whether recently or over the past few years. It does not feel right that somebody trying to drive to work in the morning, pick up their kids from school or just go to the shops should be worried about damaging their car en route because of the way that a width restriction was designed many decades ago.
The issue, for me, is about fairness. There is a whole debate that could be unpicked about the decisions that were made many years ago, why this has not changed and why petitions have not enabled change, but I do not want to get into a blame game. For me, this is about how we look forward and make a difference. When I was discussing this with local residents and the local council, I found that one potential solution is the use of automatic number plate recognition. It was highlighted that, instead of having rigid physical stops for people to drive through these areas, we could have a camera that recognises cars going through, perhaps with some speed bumps and other less invasive measures to calm traffic, make sure it is safe and slow and reduce the number of wide vehicles.
It turns out that the section 6 rules, which the council would love to use to stop certain vehicles doing certain things on roads such as Woodmere Avenue, are not available in Watford—but they are available in London. The section 6 rules apply in London but not in the rest of the country, despite the Local Government Association being very supportive of the change. Why is that an issue? First, it is one of fairness. Why should London be able to put in place mechanisms to make traffic safer that are not allowed outside London? Secondly, if there is a solution out there that is already working, why should it not be applicable in my constituency for my constituents?
When I visit Woodmere Avenue—which I do quite regularly because it is very close to my constituency office—I find myself carefully driving through the width restrictions, and I see the marks on them that have clearly come from cars and vans being scratched over the years. While driving through, I sometimes see a driver who does not want to go through the width restrictions, so they go straight through the middle where the bus lane is. The width restrictions are not even doing the job that they should, because cars are still breaking the rules, and there is no real comeback, because there is no way to detect it—there is no ANPR and no cameras. Is that fair? No, it is not.
The people who are scratching their cars are not necessarily bad drivers. I have had people say to me, “Perhaps they just don’t know how to drive their car,” but even if someone is not a great driver and is a bit cautious or wobbly when going through the restrictions, is it fair that they should scratch their car, damage their vehicle and face the cost of having to go to a garage to fix it? I do not think so. In addition, I have seen vehicles have their axles broken, not because the drivers have driven through the restrictions at a particularly fast pace but because they have slightly misjudged it and the front wheel has been hit and damaged.
There is a moral issue and a fairness issue, and there is the issue of ANPR and the rules being applicable in London but not elsewhere. There is also a bigger topic of the right of individuals to have a say in what happens outside their own homes. There is a really good argument here around what I call pavement politics. Surely a resident of a street—a member of the British public—should be allowed to have a say in what happens outside their front door. They should have more of a say than somebody who sits in a council office at a distance and is not affected by that.
I congratulate the hon. Gentleman on bringing this debate forward. He has come to the crux of the matter: this is about local residents. I believe that those who are affected by the measures on the roads have a right to be consulted and then to have a say in what happens or does not happen. Does he agree that sometimes, common sense has to prevail and the authorities just have to listen?
The hon. Gentleman makes an incredibly powerful point. This is about common sense. People invest in their houses, they invest in their gardens if they have them, and they invest in their local community. Common sense should be part of the community.
One thing that we have seen over the past 12 months is the cutting of red tape. That has been forced upon us because of the awfulness of covid—the pandemic has meant that we have had to cut through red tape to do things quicker—but it has allowed us to trust people on the frontline. It has allowed us to trust local people to form community groups and help their neighbours—to set up Facebook groups to get food delivered and help people in their community. Why can we not also trust those people to have a stronger say in what happens on the road in front of their house?
Not so long ago, when I was with someone from the highways department, the council and a local resident, I had a conversation with a gentleman who lives near the width restriction. He told me that, when a car had gone through it and been pinged by the restriction, a piece of debris had shot off and gone through the window of his own car on his drive, causing damage. That does not seem sensible. People who live in these areas live with the repercussions day in, day out, yet they do not have more of a say than someone who lives in another part of the county. That seems rather bizarre to me.
Surely, when we look at this in the round, there is an opportunity here to look at the way we engage with local communities—the way we do surveys, for example. At the moment, if another survey is done, the taxpayer will have to pay an awful lot of money for the county council and other groups to go and ask residents things, in a way that we could probably organise on our own by going door to door at the weekend. As the Member of Parliament, I even offered to go around and do a survey, asking people exactly the same question about what they would like to be done, but that is not possible, because a very rigid, bureaucratic, red-tape-driven process has to be followed to get those views. That does not seem right.
All I ask the Minister to do today is to address those three points. First, I would really appreciate further discussion around Woodmere Avenue—an opportunity to explore the issue and to see whether we can solve it for local residents while keeping the road safe, ensuring that large vehicles that should not go down the road do not, and ensuring that people do not speed down there, but in a way that does not risk people scratching their cars or causing large traffic jams because they drive through so slowly.
Secondly, I would really appreciate it if time were spent looking again at section 6, to identify why rules that work in London cannot be applied outside it. To be fair, Watford is not far from London, so even if it were just a case of expanding the rules slightly to solve this big issue, I would appreciate it. However, on a serious note, why do we not look at this again? I would really appreciate it if time were taken to understand why this is the case and whether there are any plans in this respect.
Thirdly, on the much bigger point about local communities, the past year has shown that, when we give people on the frontline trust—when we embrace our communities, give them a voice and listen to them—the common sense that the hon. Member for Strangford (Jim Shannon) mentioned is there. People know what the issues are in their local community. They usually know the solutions way before red tape and bureaucracy kick in. I would really appreciate a view on whether we can start to ensure that local communities can have that say, what we would do from there, and what the timeline might be for some of the solutions. I thank the Minister for listening.