Large Language Models and Generative AI (Communications and Digital Committee Report)

Baroness Wheatcroft Excerpts
Thursday 21st November 2024


Lords Chamber
Baroness Wheatcroft (CB)

My Lords, it is a pleasure to follow the noble Baroness, Lady Featherstone. I must join others and add my thanks to the noble Baroness, Lady Stowell, for the impressive manner in which she led the inquiry and introduced this debate. I cannot overstate the excellent service we had from our staff, as the noble Baroness, Lady Stowell, said. In particular, one must single out our brilliant clerk, Daniel Schlappa, simply because he is no longer our clerk. The committee that gets his services next is very lucky; his insights were always pertinent and helpful.

I was delighted when the committee decided on this topic because it was clearly an important subject but one on which my knowledge was limited. It would therefore provide a stimulating learning experience. That certainly proved to be the case and continues to be so. In preparing for this debate, I encountered the word “exaflop”. I am not sure that it will ever be part of my daily vocabulary, but I have no doubt that the Minister, with his background, is more than familiar with the term. The idea of one quintillion—that is, one followed by 18 zeros—is hard to grasp, but one quintillion floating point operations per second is an exaflop. The joy of being in the Lords is that one is always learning. Why that is relevant to this debate is something to which I will return.

First, I stress the committee’s conclusion that LLMs and AI can, and will, be hugely positive contributors to our lives and economy. We must therefore be careful not to allow a climate of fear to be fostered around this latest stage in the march of technology. Careful and considered regulation is essential but while nations individually can deal with some aspects of this, global co-ordination, that nirvana for so many sectors, remains the ideal.

The Bletchley declaration was a positive step in the direction of global co-operation. Signed in late 2023 by 28 countries and the EU, it pledged to establish an international network of

“research on … AI safety ... to facilitate the provision of the best science available for policy making and the public good”.

That sounds a good and noble aim, although the presence of China on the list of signatories caused me to ponder just what might emerge from this laudable pledge. If the Minister is in a position to update the House on what the results have been so far, I think we would all be grateful. The Bletchley delegates planned to meet again in 2024, so perhaps he could tell us what came out of that meeting, if it ever happened.

Our report made a sheaf of recommendations to government. The two most important, perhaps, might be summed up as follows. First, do not waste time: there is no time to waste; this is happening now and at breakneck speed. Secondly, avoid regulatory capture, but regulate proportionately, as the noble Baroness, Lady Stowell, said.

We were also concerned about the need to protect copyright. This is a creative country in which many individuals and businesses earn their living through words and ideas. They cannot afford to have them stolen, and AI is already doing that at scale. The noble Baroness, Lady Featherstone, made this case admirably, and others will no doubt address this topic, but the need for government clarity on copyright remains pressing.

I return to those exaflops. In the remainder of my speech, I will concentrate on two specific issues in our report: the lack of compute power and whether the Government should explore the possibility of a sovereign LLM capability. The technology we are discussing today consumes computing power on an unprecedented scale. The largest AI models use many exaflops of compute: many quintillions of operations per second. That also requires a huge amount of energy, but that is an issue for another debate.

In his 2023 review of compute in the UK, Professor Zoubin Ghahramani concluded that:

“The UK has great talent in AI with a vibrant start-up ecosystem, but public investment in AI compute is seriously lagging”.
He made that point in evidence to the committee. He recommended in 2023 a national co-ordinating body to deliver the vision for compute, one that could provide long-term stability while adapting to the rapid pace of change. He called for immediate investment so that the UK did not fall behind its peers.

To me, the exascale computer project in Edinburgh sounded like just the thing—50 times more powerful than existing AI resources—but this Government have pulled the plug on that. We all know about the £22 billion black hole, but, without uttering that phrase, can the Minister tell us whether he thinks that that decision might not be the end of the story? After all, the new fiscal rules allow the Chancellor to borrow to invest in important infrastructure projects. Would compute come into that category?

Secondly, will he say whether there might be some fresh thinking on the idea of a sovereign LLM? The previous Government’s response to our recommendation on this was that it was too early because LLM tools were still evolving, but the dominance of just a few overseas companies puts the UK in the potentially uncomfortable position of having to rely on core data from elsewhere for government to provide essential services. As the noble Lord, Lord Knight of Weymouth, said, perhaps the UK must accept that it missed the boat on LLMs and concentrate on what it is already doing very successfully: building specialist AI models. For government, that poses particular risks. Might there be some middle way that government should be—and maybe is—examining?