Lord Fairfax of Cameron (Conservative - Excepted Hereditary)
To ask His Majesty’s Government what steps they are taking to ensure that advanced AI development remains safe and controllable, given the recent threat update warning from the Director General of MI5 that there are “potential future risks from non-human, autonomous AI systems which may evade human oversight and control”.
My Lords, I am grateful to have this opportunity to discuss the most pressing issue facing humanity: the advent of superintelligence. I am particularly grateful to have had the support of ControlAI, an organisation working in this area, in preparing this speech.
Several weeks ago, MI5 Director-General Ken McCallum warned in his annual lecture about
“risks from non-human … AI systems which may evade human oversight and control”.
But this warning only follows earlier statements from Nobel Prize winners, leading AI scientists and the CEOs of AI companies that:
“Mitigating the risk of extinction from AI should be a global priority”.
In my opinion, it should be the global priority because of the seriousness of the situation.
The fact is that the leading AI companies are racing against each other to develop superintelligent AI systems, possibly as early as 2030, despite the risks that they themselves acknowledge such systems pose to humanity. For example, the CEO of Anthropic, which as many noble Lords will know is one of the leading AI companies, has assessed the chance of AI destroying humanity at between 10% and 25%. Most worryingly, AI companies are developing machines that can autonomously improve themselves, possibly leading to a superintelligence explosion.
The AI companies are in fact bringing into existence, for the first time, an entity that is more intelligent than humans, which is obviously extremely serious. People talk about pulling the plug, but such systems simply would not allow us to pull the plug. I do not know whether any noble Lords have seen a wonderful film about all this called “Ex Machina”, in which the AI does not allow the plug to be pulled.
In the face of these threats, I urge the Government to take the following steps: first, to publicly recognise that superintelligence poses an extinction threat to humanity; secondly, for the UK to prevent the development of superintelligence on its soil; and, thirdly, for the UK to resume its leadership in AI safety and to champion an international agreement to prohibit the development of superintelligent systems.
If noble Lords can believe it, I was terribly young when I first spoke on this subject: I was here in my 20s as a hereditary Peer. I have had a lifelong interest in this area after reading a book called The Silicon Idol by a brilliant Oxford astrophysicist, and I spoke about my concerns all those years ago. When I spoke on this subject two years ago, I quoted the well-known words of Dylan Thomas, which many noble Lords will recognise:
“Do not go gentle into that good night.
Rage, rage against the dying of the light”.
That, of course, is the dying of the human light. But now I will add lines from WB Yeats’s equally famous poem:
“The best lack all conviction, while the worst
Are full of passionate intensity”.
I devoutly ask the Government and the Minister that they now be full of conviction and passionate intensity in protecting the UK and humanity from the risks—including extinction—of superintelligence, which, as we have heard, is being developed by the AI companies in competition with each other. I fear that we may have a window of only, say, five years in which to do this. I thank noble Lords for listening to me, and I very much look forward to hearing what other noble Lords have to say.