Thursday 29th January 2026

Lords Chamber
Lord Patel (CB)

My Lords, I thank the noble Lord, Lord Hunt of Kings Heath, for this debate, and I wish we had more time for it. It helps sometimes if someone takes a slightly different view, so I ask noble Lords to forgive me if I deliberately do so, although I line up with what the noble Lord said about moratoriums.

In 1637, René Descartes said, “I think, therefore I am”. That is what we fear: that ASI will be able to think by itself, and therefore it will be. We fear that it will develop lethal weapons that we cannot control, let alone understand how they were developed. I agree with that. So do all the tech company CEOs who discussed this at length at the Davos meeting and subsequently on different podcasts. So did Yuval Noah Harari, the historian and philosopher, who has identified the issues that will confront us if AGI leads to ASI.

AI is the next step to AGI and, as the noble Lord, Lord Tarassenko, said, AGI is the next step to ASI. We are probably closer to level 2 of AGI, but the timelines are long. We are uncertain when we will get to ASI, particularly recursive, self-improving ASI. If we get to that point, that is when the danger will be greatest. After 4 billion years of evolution, we humans, the only species that can think, have got to where we are through lying and deviousness. We are now developing machines that can do exactly the same, and therefore we fear them. But it cannot be beyond the ingenuity of humans to try to control these developments.

My position is that moratoriums will not work. But we can work in co-operation with other nations that have already started regulating, such as South Korea and Australia, and with our own AI Security Institute in the United Kingdom, to establish our boundaries through regulation that allows innovation to continue.

We must remember that there are benefits to developing this technology. One example that was given is protein folding. The structure of virtually every protein in the body has now been predicted; we now need to learn very quickly how those proteins cause or prevent disease. We will not be able to do this unless we allow these technologies to develop more quickly than anybody else. The same applies to new forms of energy and the management of climate change, so there are benefits. The conundrum is how to allow the technology to deliver these benefits while creating regulations that do not allow it to develop in areas that are dangerous to humanity.

The way forward in governing this technology lies in how we identify its consciousness and how we work with it. Therefore, as we learn more, measured regulation and co-operation with other countries is probably the right course.