Large Language Models and Generative AI (Communications and Digital Committee Report) Debate

Department: Department for Science, Innovation & Technology

Thursday 21st November 2024


Lords Chamber
Lord Strasburger (LD)

My Lords, I congratulate the noble Baroness, Lady Stowell, and the Communications and Digital Committee on their thorough and comprehensive report. It points out the very considerable benefits that generative AI and large language models can deliver for this country and for the human race in general. The report declares that large language models put us on the brink of epoch-defining changes, comparable to the invention of the internet, and I have no doubt about the truth of that prediction.

However, what price will we pay for these benefits? I am deeply worried about the great risks inherent in the breakneck pace at which this technology is being developed, without any meaningful attempt to regulate it, with the possible exception of the EU. The report identifies a plethora of potential risks, from minor through to catastrophic, in a non-exhaustive list of areas that includes multiplying existing malicious capabilities, increasing the scale and pace of cyberattacks, enabling terrorism, generating synthetic child sexual abuse material, increasing disinformation via hyper-realistic bots, enabling biological or chemical release at pandemic scale, causing critical infrastructure failure or triggering an uncontrollable proliferation of AI models. I will not go on with the list, because anyone who has read the report will know what I am talking about. These are the consequences of malicious, or perhaps merely careless, uses of the technology, and they could have a very significant, perhaps catastrophic, impact on the citizens of this country, or even worldwide.

The report states in paragraph 140:

“There are … no warning indicators for a rapid and uncontrollable escalation of capabilities resulting in catastrophic risk”.

It then tries to reassure us, without much success in my case, by saying:

“There is no cause for panic, but the implications of this intelligence blind spot deserve sober consideration”.

That is putting it very mildly.

However, this is not my main concern about the risks presented by AI, and I speak as one who had slight interaction with embryonic AI in the 1980s. The risks I have mentioned so far arise out of the probable misuse of this technology, either deliberately or accidentally. They might be mitigated by tight international regulation, although how we can prevent bad actors from operating in regions devoid of regulation, I do not know. These enterprises are so competitive, so globalised and so driven by commercial pressure that anything that can be done will be done, somewhere.

My main concern, and one to which I cannot see an obvious answer, is not what happens when the technology is misused. What worries me is the risk to humans if we lose control of the AI technology itself. The report does mention this risk, saying:

“This might occur because humans gradually hand over control to highly capable systems that vastly exceed our understanding; and/or the AI system pursues goals which are not aligned with human welfare and reduce human agency”.

That is a very polite way of saying that AI systems might acquire greater intelligence than humans and pursue goals of their own: goals that are decidedly detrimental to the human race, such as eliminating or enslaving it. Before any noble Lords conclude that I am off with the fairies, I direct them to paragraph 154 of the report, which indicates a “non-zero likelihood”, apparently meaning a remote chance, of existential risks materialising, but not, the report says, in the next three years. That is not very reassuring for those of us who hope to live longer than three years.

Some months ago, I had a conversation with Geoff Hinton, here in this House as it happens, who is widely recognised as one of the godfathers of AI and has just been awarded a Nobel Prize. He resigned from Google to be free to warn the world about the existential risks from AI, and he is not alone in those views. His very well-informed view is that there is a risk of humans losing control of AI technology, with existential consequences. When I asked him what the good news was, he thought about it and said, “It’s a good time to be 76”. My rather flippant response was, “Well, at least we don’t have to worry about climate change”.

Seriously, the thing about existential risks is that we do not get a second chance. There is no way back. Even if the probability is very low, the consequence is so catastrophic for mankind that we cannot simply hope it does not happen. As the noble Lord, Lord Rees, the Astronomer Royal, said 10 years ago in a TED talk when discussing cataclysmic risks of all kinds:

“Our earth has existed for 45 million centuries, but this” century “is special—it’s the first where one species, ours, has the planet’s future in its hands … We and our political masters are in denial about catastrophic scenarios … But if an event is potentially devastating, it is worth paying a substantial premium to safeguard against it”, rather like “fire insurance on our house”.

The committee’s report devotes seven paragraphs out of 259 to the existential risks of the technology turning the tables on its human masters. This would suggest the committee did not take that risk all that seriously. Indeed, it says in paragraph 155:

“As our understanding of this technology grows … we hope concerns about existential risk will decline”.

I am not happy to rely on hope where existential risk is concerned, so I ask the Minister for some reassurance that this matter is in hand.

What steps are the Government taking, alone and with others, to mitigate the specific risk—albeit a small one—of humans losing control of AI systems such that they wipe out humanity?