Large Language Models and Generative AI (Communications and Digital Committee Report) Debate

Department: Department for Science, Innovation & Technology


Thursday 21st November 2024


Lords Chamber
Lord Kamall (Con)

My Lords, I refer noble Lords to my interests as set out in the register. I also thank the committee staff for their work during the inquiry and in writing the report, all the witnesses who offered us a range of views on this fascinating topic, our incredibly able chairperson, the noble Baroness, Lady Stowell, and my committee colleagues.

I am someone who studied engineering for my first degree, so I will go back to first principles. Large language models work by learning relationships between pieces of data contained in large datasets. They use that to predict sequences, which then enables them to generate natural language text, such as articles, student essays, or even politicians’ speeches—but not this one. Finance companies use LLMs to predict market trends based on past data; marketing agencies use LLMs to analyse consumer behaviour in developing marketing campaigns; and, in health, LLMs have been used to analyse patient records and clinical notes to help diagnosis and develop treatment plans.
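
For a concrete, if simplified, picture of what "learning relationships" and "predicting sequences" means, the short Python sketch below builds a toy bigram model from a few sentences and then generates text by repeatedly predicting the next word. It is purely illustrative: the tiny corpus, the function names and the bigram counting approach are simplifications chosen for this example, not how production LLMs are built, which rely on neural networks trained on vast datasets.

# Illustrative toy example: a bigram "language model" that learns which word
# tends to follow which, then generates text by repeatedly predicting the
# next word. Real LLMs do this with neural networks over far larger datasets,
# but the predict-the-next-token principle is the same.
import random
from collections import Counter, defaultdict

corpus = (
    "large language models learn patterns in text and "
    "large language models predict the next word in text"
).split()

# Learn relationships: count which word follows each word in the data.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Sample the next word in proportion to how often it followed `word`.
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short sequence starting from a seed word.
word = "large"
generated = [word]
for _ in range(8):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))

Running this might print something like "large language models predict the next word in text": fluent-looking output that emerges purely from statistical patterns in the training data, with no understanding behind it, which is also why such systems can produce plausible but false statements.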

While this is welcome, LLMs also hallucinate: they produce a result that seems plausible but is in fact false, because the data from which the LLM calculated the probability of that information being correct was itself incorrect. An AI expert told me that all output from an LLM, whether accurate or not, could be considered a hallucination, since the LLM itself possesses no real knowledge or intelligence. Indeed, much of what we currently call artificial intelligence is not yet intelligent and is better described as machine learning. Given this, we should be careful not to put too much trust in LLMs and AI, especially in automated decision-making.

In other debates, I have shared my terrible customer experiences with an airline and a fintech company, both of which seemed to use automated decision-making, but I will not repeat them now. While those companies got away with it, poor automated decision-making in healthcare could be dangerous, even catastrophic, and could lead to deaths. We need to proceed with caution when using LLMs and other AI systems for automated decision-making, something that will be raised in debate on the Data (Use and Access) Bill. We also need to consider safeguards and, possibly, an immediate human back-up on site if something goes wrong.

These examples about the good and the bad highlight the two key principles in technology legislation and regulation. You have, on the one hand, the precautionary principle and, on the other, the innovation principle. Witnesses tended to portray the US approach, certainly at the federal level, as driven mostly by the large US tech companies, while the European Union’s AI Act was described as “prescriptive”, overly precautionary and “stifling innovation”. Witnesses saw this as an opportunity for the UK to continue to host world-leading companies but, as other noble Lords have said, we cannot delay. Indeed, the report calls for the UK Government and industry to prepare now to take advantage of the opportunities, as the noble Baroness, Lady Wheatcroft, said.

At the same time, the Government should guard against regulatory capture or rent seeking by the big players, who may lobby for regulations benefiting them at a cost to other companies. We also considered the range of, and trade-offs between, open and closed models. While open models may offer greater access and competition, they may make it harder to control the proliferation of dangerous capabilities. While closed models may offer more control and security, they may give too much power to the big tech companies. What is the Government’s current thinking on the range of closed and open models and those in between? Who are they consulting to inform this thinking?

The previous Government’s AI Safety Summit was welcomed by many, and we heard calls to address the immediate risks from LLMs, since malicious activities such as fake pictures and fake news become easier and cheaper with LLMs, as the noble Baroness, Lady Healy, said. As the noble Lords, Lord Knight and Lord Strasburger, said, witnesses told us about catastrophic risks, defined as around 1,000 UK deaths and tens of billions of pounds in financial damage. They believed that these were unlikely in the next few years, but not impossible, as next-generation capabilities come online. Witnesses suggested mandatory safety tests for high-risk, high-impact models. Can the Minister explain the current Government’s thinking on mandatory safety tests, especially for high-risk, high-impact models?

At the same time, witnesses warned against a narrative of AI being mostly a safety issue. They wanted the Government to speak more about innovation and opportunity, and to focus on the three pillars of AI. The first is data for training and evaluation; the second is algorithms and the talent to write, and perhaps to rewrite, them; and the third is computing power. As other noble Lords have said, many have criticised the current Government’s decision to scrap the exascale supercomputer announced by the previous Government. Can the Minister explain where he sees the UK in relation to each of the three pillars, especially on computing power?

As the noble Baroness, Lady Featherstone, and others have said, one of the trickiest issues we discussed was copyright. Rights holders want the power to check whether their data is used without their permission, although at least one witness questioned whether this was technically possible. Some rights holders asked for investment in high-quality training datasets to encourage LLMs to use licensed material. In contrast, AI companies distinguished between inputs and outputs. An input would be, for example, listening to lots of music in order to learn to play blues guitar; AI companies argued that using copyrighted data for training is analogous to this. An output would be a musician playing a specific song, such as “Red House” by Jimi Hendrix, for which the rights holders would then be compensated, even though poor Jimi is long dead. However, rights holders criticised this distinction, arguing that it undermines the UK’s thriving creative sector, so you can see the challenge that we have. Can the Minister share the Government’s thinking on the use of copyright material as training data?

For the overall regulatory framework, the Government have been urged to empower sector regulators to regulate proportionately, considering the careful balance, and sometimes trade-off, between innovation and precaution. Most witnesses recommended that the UK forge its own path on AI regulation: less driven by the big corporations than the US, but more flexible than the EU. We should also be aware that technology is usually ahead of regulation. If you attempt too much a priori legislation, you risk stifling innovation. At the same time, no matter how libertarian you may be, when things go wrong voters expect politicians and regulators to act.

To end, can the Minister tell the House whether the Government plan to align with the US’s more corporate approach or the EU’s less innovative AI regulation, or to forge an independent path so that the UK can be a home for world-leading LLM and AI companies?