Baroness Kidron (Crossbench, Life Peer)
My Lords, I commend the noble Lord, Lord Fairfax, on raising what is rightly a fundamental question of our time: the risk of AI systems becoming more powerful than their human creators. Advanced AI does not become unsafe in a vacuum; it becomes unsafe by design when it is developed without accountability, driven by the profit incentives of private actors and embedded in infrastructure that the state can neither inspect nor exit. Alongside concerns of runaway capability is the risk of dependency. Long before any dramatic accident or attack, we risk a growing reliance on a narrow set of foreign-owned technologies, leaving the UK unable to act as a sovereign state with values, choices and technologies of its own.
Just this week, the Ministry of Defence awarded a £240 million contract for “critical operational decision-making support” to Palantir. The issue here is not one company but a pattern of outsourcing our national infrastructure to American firms, backed by a US Administration whose national security strategy states plainly: “In everything we do, we are putting America first”.
Across the economy, we have normalised deep vendor lock-in to US companies, to the point where security, critical industries and sensitive government departments cannot credibly switch suppliers, even as the risks or the terms of engagement shift. One security expert recently described it to me as economic warfare: a state creates strategic advantage by advancing its domestic industry and technology while simultaneously degrading the same capacity in adversaries and allies alike. Where the state cannot inspect, audit or exit the systems that shape its decisions or handle sensitive data, it has no sovereignty.
The US and China are determinedly ahead, but many AI experts believe that the next phase of AI will favour systems that are reliable, auditable and governed by understood rules. That is where the United Kingdom has an opportunity.
There is not time today to set out a sovereign strategy for AI. But in systems that shape our defence, policing, health service and democratic decision-making, sovereignty must be the default, with onshore audit and assurance, procurement that builds domestic capability, control over strategic pinch-points and the ultimate power to say no. I join the noble Lord in asking for greater autonomy and power for the AI Security Institute.
AI will not evade human control because it suddenly becomes clever; it will evade control because we have designed systems in which no one is responsible, no one can see clearly and no one can intervene in time.
My Lords, I will quickly respond. We have to be very careful about what level we regulate at. AI is multifaceted: the stack has several layers, including the infrastructure, the data level, the model level and so on. At which layer are we going to regulate? We are trying to work with the community to find out what is best before we come up with a solution as far as regulation is concerned. AI is currently regulated at the level of its different uses.
Let me continue. We are appointing AI champions to work with industry and government to drive adoption in high-growth sectors. Our new AI for Science Strategy will help accelerate breakthroughs that matter to people’s lives.
In summary, safe and controllable AI does not hinder progress; rather, it underpins it. It is integral to our goal of leveraging AI’s transformative power and securing the UK’s role as an AI innovator, not merely a user. Safety is not about stopping progress. It is about maintaining human control over advances. The true danger is not overthinking this now; it is failing to think enough.
I thank the Minister for such a generous and wide-ranging response. He said that we are going to regulate at the point of use, yet in this House we saw such a battle about the misuse of UK creative data that is protected by UK law; the UK Government wanted to give that data away rather than uphold the law, so that is one example. Equally, the Minister mentioned the sovereign AI fund, but I hear again and again from UK AI companies that those discussions are dominated by US companies and that the UK companies are not really getting an advantage. I would like to hear the Minister's response, given that we have a little time.
I thank the noble Baroness for that; I respect her interest and work in this area. It would take me at least 20 minutes to cover most of what was asked. There were points about regulation at different levels, about AI and copyright and about the Sovereign AI Unit’s funding of £500 million. We need to work at each different level and, as I said, regulation is vital. Personally, I think it will be very difficult for us to have one AI regulation Bill to cover everything, because we may miss something. We need evidence from speaking with academia, the players and so on to help us shape what regulation is required. I want to give the noble Baroness a comprehensive answer and I cannot do that here, so I will write to her accordingly.