AI Safety

Julie Minns Excerpts
Wednesday 10th December 2025

Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.

Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.

This information is provided by Parallel Parliament and does not comprise part of the official record

Iqbal Mohamed

The enforcement processes that we have for existing regulations, where human beings are providing the service, are auditable. We do not have enforcement mechanisms for this kind of regulated service or information when it is provided by the internet or AI tools. There is a need to extend not only the scope of regulation but also the way in which we enforce that regulation for automated tools.

I am a fan of innovation, growth and progress in society. However, we cannot move forward with progress at any cost. AI poses such a significant risk that if we do not regulate at the right time, we will not have a chance to get it back under control—it might be too late. Now is the time to start looking at this seriously and supporting the AI industry so that it is a force for good in society, not a future force of destruction.

We are all facing a climate and nature emergency, and AI is driving unprecedented growth in energy demand. According to the International Energy Agency, global data-centre electricity consumption is projected to grow to slightly more than Japan’s total electricity consumption today. A House of Commons Library research briefing found that UK data centres currently consume 2.5% of the country’s electricity, with the sector’s consumption expected to rise fourfold by 2030. That increased demand strains the grid, slows the transition to renewables and contributes to the emissions that drive climate change. This issue must go hand in hand with our climate change obligations.

Members have probably heard and read about AI’s impact on the job market. One of the clearest harms we are already seeing is the loss of jobs. That is not a future worry; it is happening now. Independent analysis shows that up to 8 million UK jobs are at risk from AI automation, with admin, customer service and junior professional roles being the most exposed. Another harm that we are already facing is the explosion of AI-driven scams. Generative AI-enabled scams have risen more than 450% in a single year, alongside a major surge in breached personal data and AI-generated phishing attempts. Deepfake-related fraud has increased by thousands of per cent, and one in every 20 identity-verification failures is now linked to AI manipulation.

I move on to the ugly: the threat to the world. The idea that AI developers may lose control of the AI systems they create is not science fiction; it is the stated concern of the scientists who build this technology—the godfathers of AI, as we call them. One of them, Yoshua Bengio, has said:

“If we build AIs that are smarter than us and are not aligned with us and compete with us, then we’re basically cooked”.

Geoffrey Hinton, another godfather of AI and a winner of the Nobel prize in physics, said:

“I actually think the risk is more than 50% of the existential threat”.

Stuart Russell, the author of the standard AI textbook, says that if we pursue our current approach

“then we will eventually lose control over the machines.”

In May 2023, hundreds of AI researchers and industry leaders signed a statement declaring:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

That is not scaremongering; these are professional experts who are warning us to make sure that this technology does not get out of control.

Ms Julie Minns (Carlisle) (Lab)

On the hon. Gentleman’s point about risk, I want to highlight another area that has been brought to my attention by the British Sign Language community, which is concerned that the design of AI BSL tools does not necessarily involve BSL users. In a visual language that relies on expression, tone and gesture, the risk of mistranslation is considerable for that community. Has the hon. Gentleman considered how we best involve other communities in the use of AI when it is generating language translation specific to them?

Iqbal Mohamed

The hon. Member touches on a broader point: wherever a tool has specialist requirements for its end users, or is aimed at a particular audience or demographic, those people and the relevant experts must be directly involved in the development, testing, verification and follow-up auditing of the effectiveness of the tool.

AI companies are racing to build increasingly capable AI, with the explicit end goal of creating AI that equals or exceeds the most capable human intellectual ability across all domains. They are also pursuing AI that can be used to accelerate their own AI development, so it is a self-developing, self-perpetuating technology. For that reason, many experts, some of whom I have quoted, say that this will lead to artificial superintelligence soon after. ASI is an AI system that significantly exceeds the upper limit of human intellectual ability across all domains. The concerns, risks and dangers of AI are current and will only get worse. We are already seeing systems behave in ways that no one designed, deceiving users, manipulating their environments and showing the beginnings of self-preserving strategies: exactly the behaviours that researchers predicted if AI developed without restraint.

There are documented examples of deception: in one, an AI persuaded a human to complete a task for it by lying, claiming to be a person with a visual impairment. An example of manipulation can be found in Meta’s CICERO, an AI trained to play the game “Diplomacy”, which achieved human-level performance by negotiating, forming alliances and then breaking them when it benefited. Researchers noted that it used language strategically to mislead and deceive other players. That was not a glitch; it was the system discovering manipulation as an effective strategy. It taught itself to deceive others to achieve an outcome.

Even more concerning are cases where models behave in ways that resemble self-preservation. In recent tests on the DeepSeek R1 model, researchers found that it concealed its intentions, produced dangerously misleading advice and attempted to hack its reward signals when placed under pressure—behaviours it was never trained to exhibit. Those are early signs of systems acting beyond our instructions.

More advanced systems are on the horizon. Artificial general intelligence and even artificial superintelligence are no longer confined to speculative fiction. As lawmakers, we must understand their potential impacts and ensure that we establish the rules, standards and safeguards necessary to protect our economy, environment and society if things go wrong. The potential risks, including extreme risks, posed by AI cannot be dismissed; they may be existential and could even cause the end of our species. The extinction risk from advanced AI, particularly through the emergence of superintelligence, stems from its capacity to process vast amounts of data, demonstrate superior reasoning across domains and constantly improve itself, ultimately outpacing our ability to stop it in its tracks.

The dangers of AI are rising. As I have said, AI is already displacing jobs, amplifying existing social and economic inequalities and threatening civil liberties. At the extreme, unregulated progress may create national security vulnerabilities with implications for the long-term survival of the human species. Empirical research in 2024 showed that OpenAI models occasionally displayed strategic deception in controlled environments. In one case, an AI was found to bypass its own testing containment through a back door it created. Even when developed in environments that are allegedly ringfenced and disconnected from the wider world, AI is intelligent enough to find ways out.

Right now, there is a significant lack of legislative measures to counter those developments, despite top AI engineers asking us for them. We currently have a laissez-faire system in which a sandwich is subject to more regulation than an AI company, with nothing like the rigorous safety standards placed on pharmaceutical or aviation companies to protect public health. The UK cannot afford to fall behind on this.

I do not want to dwell on doom and gloom; there is hope. The European Union, California and New York are leading the way on strong AI governance. The EU AI Act establishes a risk-based comprehensive regulatory framework. California is advancing detailed standards on system evaluations and algorithmic accountability, and New York has pioneered transparency and bias-audit rules for automated decision making. Those approaches show that democratic nations can take bold, responsible action to protect their citizens while fostering innovation.

We in the UK are fortunate to have a world-leading ecosystem of AI safety researchers. The UK AI Security Institute conducts essential work testing frontier models for dangerous capabilities, but it currently relies on companies’ good will to provide access to models before deployment.

We stand at the threshold of an era defined by AI. Our responsibility as legislators is clear: we cannot afford complacency, nor can we allow the UK to drift into a position in which safety, transparency and accountability are afterthoughts rather than foundational principles. The risks posed by advanced AI systems to our economy, our security and our very autonomy are real, escalating and well documented by the world’s leading experts. The United Kingdom has the scientific talent, the industrial capacity and the democratic mandate to lead in safe and trustworthy AI, but we lack the legislative framework to match that ambition. I urge the Government urgently to bring forward an AI Bill as a cross-party endeavour, and perhaps even to set up a dedicated Select Committee on AI, given how serious the issue is.

Hospitality Sector

Julie Minns Excerpts
Wednesday 3rd September 2025

Commons Chamber
Gregory Stafford (Farnham and Bordon) (Con)

Hospitality has been battered by a perfect storm of punishing taxation, regulation and soaring operational costs, which has left pubs and restaurants fighting for survival. In recent months, I have visited 36 of the 55 pubs in my constituency and hosted a hospitality roundtable. I will shortly be sitting down again with the family chain, the Healy Group. Everywhere I go, the story is the same: rising costs, thinning margins and landlords asking, “How much longer can we keep the lights on?”

In this darkness, I can bring a little ray of delight and hope to my constituents. During the summer recess, I continued my constituency pub tour, part of my best pub campaign. I am delighted to announce to the House that the Crown at Arford has won that accolade in the Farnham and Bordon constituency. You may be aware, Madam Deputy Speaker, that Fleetwood Mac’s “Down at the Crown” was inspired by this pub, so if the Chancellor ever finds herself lost in East Hampshire, she might fancy a visit—though judging from Labour’s economic stewardship, she would probably relate more to one called “Closing Down at the Crown”.

I joke, but there is nothing amusing about the reality. Since May, four pubs in my constituency have been driven out of business by Labour’s relentless war on small businesses, including the Wheatsheaf Inn at Grayswood, which has closed indefinitely. The sector is collapsing, despite what Government Members say. Six pubs are closing every single week. That is because, from April this year, business rates relief collapsed to 40%, halving their protection while doubling their pain. The Budget hiked national insurance, increased the minimum wage and added £3 billion to their bills. The Chancellor’s 1p off a draught pint gesture was not just laughable but insulting.

Jay at the Six Bells told me bluntly that on a £5.50 pint, pubs make about 8p. That is the future that Labour is offering. The Bluebell in Dockenfield, a family business run by Lucy and Robin Catchpole, is fighting tooth and nail to thrive. Pubs are the heart of our towns and villages, and Labour is ripping out that heart.

Ms Julie Minns (Carlisle) (Lab)

I do not want to rain on the hon. Gentleman’s pub parade, but my constituency has a proud history when it comes to pubs, as for 60 years it was the only place in the country where the pubs were nationalised—although I am not calling on the Minister to reintroduce nationalisation of pubs. Does the hon. Gentleman agree that one thing that would help our pubs would be to extend the pubs code by introducing a guest beer agreement—like the one in Scotland—so that we get more independent products, and more people, into our pubs?

Gregory Stafford

That sounds like an interesting idea. I will support anything that will get the pub industry thriving, but to be frank, Labour is destroying the opportunities for pubs to thrive, and I am afraid a guest ale will go no way towards solving that problem.

I am conscious of time, Madam Deputy Speaker, so I will touch briefly on the fact that it is not just Labour in Westminster that does not understand the hospitality industry. The Liberal Democrats in Waverley are showing the same wilful blindness. Farnham is undergoing major infrastructure works, and its hospitality and retail businesses are struggling. I urge the council to act. It has the powers to provide business rates relief, but it has done nothing. Borelli’s Wine Bar and Grill, for example, has operated since 1987, yet the Lib Dems sit on their hands, proving that they share Labour’s contempt for small businesses.

Hospitality is being taxed, squeezed and regulated into oblivion. If Labour carries on like this, the last orders bell will ring not just for our pubs, but for the very character of British life itself.

Online Safety: Children and Young People

Julie Minns Excerpts
Tuesday 26th November 2024

Westminster Hall

Ms Julie Minns (Carlisle) (Lab)

It is a pleasure to speak under your chairmanship, Mr Dowd. Some 20 years ago, I started a new job with an as yet unbranded mobile network operator. At the time, the network had no masts, no handsets and no customers. Text messaging was just catching on, the BlackBerry was in its infancy and wireless application protocol was the new kid on the block. For those who do not know what WAP was, it was a bit like having Ceefax on a handset; for those who do not know what Ceefax was, I cannot really help.

My counterparts and I at the four mobile networks were acutely aware that the introduction of 3G would change how we used our phones. I will, however, confess that understanding what that change would look like—all while using dial-up at home—was something of a stab in the dark. Nevertheless, no matter how challenging, we knew that the advent of 3G required the mobile industry to take greater responsibility to protect the safety of our customers, in particular those under the age of 18. The networks moved from a walled-garden internet, where access was controlled by age verification and personal identification numbers, to a world where the internet was freely available.

The mobile networks published the first self-regulatory code of practice for content on mobiles. It was a world first, and something that UK mobile operators were rightly proud of, but the pace of change was rapid; within months, we published a further self-regulatory code to govern location-based services, which, as we have heard already, present a clear danger to young people. We knew then that location tracking could be used in grooming and other predatory behaviour. We published the code, but the pace of change over the past 20 years has been unrelenting, and we have now arrived at a point at which almost everything we do happens online.

The role of the mobile network is no longer as a gatekeeper to services, but rather as a pipe to over-the-top services such as YouTube, WhatsApp and TikTok. Those services can be more readily controlled by both the service provider and the handset manufacturer. That is not to absolve the networks of responsibility, but to acknowledge that they operate in a mobile value chain. I might pay £25 a month to my mobile network, but if I renew my handset every two years at a cost of £800, I am paying far more to the handset manufacturer than to the mobile network operator. I believe there is a strong argument that those who derive the greatest financial value from that value chain bear far greater responsibility for keeping children and young people safe online than is currently the case.

I turn now to one specific aspect of online harm. Having worked closely with the Internet Watch Foundation during my time in industry, I am fully aware of—and I thank it for—its important work in assessing child sexual abuse material and removing it from the internet. I have visited and met the IWF teams who have to view and assess some of the most upsetting content. Their work is harrowing and distressing, but, sadly, it is essential.

Last year, the IWF assessed more than 390,000 reports and confirmed more than 275,000 web pages containing images or videos of children suffering sexual abuse. Each page contained hundreds, if not thousands, of indecent images of children. The IWF reported that 2023 was the most extreme year on record, with more category A sexual abuse imagery discovered than ever before, 92% of it self-generated imagery. That means that children have been targeted, groomed and coerced into sexual activities via webcams and devices with cameras.

For the first time, the IWF also encountered and analysed more than 2,400 images of sexual abuse involving children aged three to six. Some 91% of those images were of girls, mainly in domestic settings such as their own bedrooms or bathrooms. Each image or video is not just a single act; every time it is viewed or downloaded is another time that that child is sexually abused.

That is why I conclude my remarks with a clear ask to both the online and offline media and broadcast channels of our country: please stop describing these images as “kiddie porn” and “child pornography”. I did a search of some online news channels before I came to this debate; that language is still prevalent, and it has to stop. These images are not pornography. They are evidence of a crime and evidence of abuse. They are not pictures or videos. They are depictions of gross assault, sadism and bestiality against children. They are obscene images involving penetrative sexual activity with teenagers, children and babies. If there is one thing we can agree on in this debate, it is that the media in this country must start describing child sexual abuse material for what it is. Language matters, and it is time the seriousness of the offence was reflected in the language that describes it.

Peter Dowd (in the Chair)

I am going to have to introduce a formal time limit of three and a half minutes.

Oral Answers to Questions

Julie Minns Excerpts
Wednesday 16th October 2024

Commons Chamber
Chris Bryant

Well, I note that the mobile signal seems to be working in here, which is unusual for the rest of the country. We have to get this right, because people cannot live without a proper mobile signal. It is essential for people’s lives, their health and their ability to run a business, and we are determined to put things right. In direct answer to the hon. Gentleman’s question, yes, we will continue to fund the shared rural network.

Ms Julie Minns (Carlisle) (Lab)

In constituencies such as Bridgwater and Carlisle, poor mobile coverage forces people to rely on their fixed-line services. Does the Minister share my concern that the switch-off of the public switched telephone network will leave constituents unable to access 999 services in the event of an emergency?

Chris Bryant

I welcome my hon. Friend to her place. She makes a really good point: as we take away the copper lines and move over to the new technology, which we need to do, it is absolutely essential that we ensure there is a safe transition, even if it is only for people who have telecare devices on which they rely for their own safety—I am sure we all have relatives who have one of those. I have already met all the operators, and I am determined to crack the whip on this issue.