Superintelligent AI

Lord Tarassenko Excerpts
Thursday 29th January 2026

Lords Chamber
Lord Tarassenko (CB)

My Lords, I congratulate the noble Lord, Lord Hunt, on securing this debate, but it is going to take a superhuman effort to give an intelligent speech on this topic in four minutes. Paradoxically, Google DeepMind’s paper on artificial general intelligence, AGI, has the best definition of superintelligent AI or artificial superintelligence, ASI. It stratifies AI according to five levels related to human cognitive abilities. Level 1, emerging AI, corresponds to the intelligence of an unskilled human being. Level 5, superhuman AI or ASI, outperforms all human beings.

The taxonomy, importantly, distinguishes between narrow AI, for a specific application, and general AI. It makes the evidence-based claim that superhuman narrow AI—in other words, narrow ASI—has already been achieved by AlphaFold, which used machine learning to solve the 50-year-old protein-folding problem. Crucially, general AI is still only at level 1, emerging AI. How we move from level 1 to level 5, ASI, is a matter of debate within the field of AI research. Many argue that this requires new capabilities to be developed for today’s frontier AI models—for example, increasing levels of autonomy. Other experts, such as Geoff Hinton, the Nobel Prize winner who has already been mentioned, believe that we are much closer to the cliff edge of ASI.

A minority, such as Yann LeCun, argue that language is only a small component of intelligence and that the real world is complex and messy. Therefore, in his view, superintelligence is a long way off and will not be built on LLMs. There is a wide variety of views among AI experts about the imminence of ASI, with the CEOs of Anthropic and Google DeepMind, Dario Amodei and Demis Hassabis, somewhere between Yann LeCun and Geoff Hinton. My own view, after talking with colleagues in the AI Security Institute and the Alan Turing Institute, is that a moratorium on the development of ASI would be unenforceable.

Instead, I support the proposal made this week by the noble Baroness, Lady Harding, to set up a commission to investigate the ethical aspects of general ASI. The commission could be facilitated by the Alan Turing Institute and would consult a range of experts—for example, the four mentioned in this speech.

In the meantime, we should consider the transition from level 1 to level 2, which is much closer. General AGI carries real risks. The Minister highlighted on Monday the regulation of AI for specific fields—for example, through the MHRA for healthcare. That is an approach I welcome for narrow AI or even narrow AGI. But what we need now is for the Government to initiate a consultation process for the regulation of general AGI, which is likely to be attained by the next generation of frontier models.

Safety testing of models by the AI Security Institute, AISI, at present relies on voluntary agreements with AI companies. The consultation should therefore also consider the pros and cons of putting AISI on a statutory footing and legally compelling AI companies to open up their models for safety testing. I very much hope that the Minister will be able to tell us when DSIT is likely to announce a consultation on regulating general AGI.