Large Language Models and Generative AI (Communications and Digital Committee Report) Debate

Department: Department for Science, Innovation & Technology

Thursday 21st November 2024

Lords Chamber
Moved by
Baroness Stowell of Beeston

That this House takes note of the Report from the Communications and Digital Committee Large language models and generative AI (1st Report, Session 2023-24, HL Paper 54).

Baroness Stowell of Beeston (Con)

My Lords, it is a great honour to open a debate in your Lordships’ House but an even bigger one to chair one of its Select Committees. Indeed, it is a pleasure to work with colleagues from around the House as we investigate important areas of public policy, and our inquiry on large language models and generative AI was no exception.

Several committee members are here today, and I thank them and all my colleagues for their commitment and contribution to this work. We received, and were grateful for, specialist and expert advice from Professor Mike Wooldridge. We were also supported brilliantly, as ever, by our excellent committee team. I will not run through them all individually but I take this opportunity to make a special mention of our exceptional clerk, Daniel Schlappa. After three years with the Communications and Digital Select Committee, he has this week moved on to the Intelligence and Security Joint Committee. We all owe Dan our sincere thanks and wish him well in his new role.

When we published our report on foundation models in February, we said that this technology would have a profound effect on society, comparable to the introduction of the internet. There has been a lot of hype and some disappointment about generative AI but our overall assessment looks sound. It is not going to solve all the world’s problems but nor is it going to drive widespread unemployment or societal collapse, as some of the gloomiest commentators suggest. However, it will fundamentally change the way that we interact with technology and each other, and its capabilities and the speed of change are astounding. Generative AI models are already able to produce highly sophisticated text, images, videos and computer code. Within just a few months, huge advances have been made in their ability to perform maths and reasoning tasks, and their ability to work autonomously is growing.

The committee was optimistic about the benefits of this new technology, not least because its implications for the UK economy are huge. The Government’s recent AI sector study notes that there are more than 3,000 AI companies in the UK generating more than £10 billion in revenues and employing more than 60,000 people in AI-related roles. Some estimates predict that the UK AI market could grow to over $1 trillion in value by 2035. However, to realise that potential, we have to make the right choices.

Capturing the benefits of AI involves addressing the serious risks associated with the technology’s use. These include threats to cybersecurity and the creation of child sexual abuse materials and terrorism instructions. AI can exacerbate existing challenges around discrimination and bias too. That all needs addressing at pace. We also need better early warning indicators for more catastrophic risks such as biological attacks, destructive cyber weapons or critical infrastructure failure. That is particularly important as the technical means to produce autonomous agents advance, meaning that AI will increasingly be able to direct itself.

I am pleased that the Government took forward some of the committee’s suggestions about the AI risk register. However, while addressing the risks of AI is critical, we cannot afford to let fear dominate the conversation. The greatest danger lies in failing to seize the opportunities that this technology presents. If the UK focuses solely on managing risks, we will fall behind international competitors who are racing ahead with bold ambition.

I do not mean just what is happening in the US and China. Government spending on AI in France since 2018 is estimated to have reached €7.2 billion, which is 60% more than in the UK. Here the Labour Government, since they were elected, cancelled investment in the Edinburgh exascale computing facility. This sends the wrong message about the UK’s ambition. Unless we are bolder and more ambitious, our reputation as an international leader in AI will become a distant memory. Our new inquiry into scaling up in AI and creative tech will investigate this topic further.

To lead on the global stage, the UK must adopt a vision of progress that attracts the brightest talent, fosters ground-breaking research and encourages a responsible AI ecosystem. I hope that the Government’s long-awaited AI opportunities action plan will be as positive as its title suggests. However, I have also heard talk of closer alignment with EU approaches, which sounds less promising. I will say more about this in a moment. I hope the Minister will confirm today that the Government will embrace a bold, optimistic vision for AI in the UK.

With that ambition in mind, let me highlight three key findings from our report, which are particularly pertinent as the Government formulate their vision for AI. These are: the importance of open market competition, the need for a proportionate approach to regulation, and the urgent issue of copyright.

I will start with competition. Ever since the inception of the internet, we have seen technology markets become dominated by very few companies, notably in cloud and search. The AI market is also consolidating. As Stability AI told us last year, there is a real risk of repeating mistakes we saw years ago. No Government should pick winners, but they should actively promote a healthy and level playing field and make open competition an explicit policy objective for AI. Lots of indicators show that the transformational benefits to society and our economy will be at the application layer of AI. We must not let the largest tech firms running the powerful foundation models create a situation where they have the power to throttle the kind of disruptive innovators that will power our future growth.

I was concerned to see the Secretary of State advocating for tech companies to be treated as if they were nation states. I appreciate that their economic heft and influence are extraordinary. Of course we value and want to attract their investment, but we need to be careful about what kind of message we send. Do we really want to say that private companies are on an equal footing with democratically elected Governments? I do not believe we do. I would be grateful if the Minister would reassure the House that the Government intend to deter bad behaviour by big tech companies, not defer to it.

Moving on, the committee called for an AI strategy that focuses on “supporting commercial opportunities”, academic research and spin-outs. As the Government consider AI legislation, they should ensure that innovation and competition are their guiding focus. They must avoid policies that limit open-source AI development or exclude innovative smaller players. When some of us were in San Francisco, we heard about recent efforts to legislate on frontier models in California, which sparked varied concerns from stakeholders, ranging from big tech to small start-ups. We understand that getting these things right is a challenge, but it is one that must be met.

Future safety rules are a good example. Our report called for mandatory safety tests for high-risk, high-impact models. But the key thing here is proportionality. It is important for the riskiest and most capable models to have some safety requirements—just look at the pace of progress in Chinese LLMs, for example—and the AI Safety Institute is making progress on standards. But if the Government set the bar for these standards too low and capture too many businesses, it will curb innovation and undermine the whole purpose of having flexible rules. Again, I would be really grateful if the Minister would reassure me and the House that the Government will ensure that any new safety tests will apply only to the largest and riskiest models, and not stifle new market entrants.

Many US tech firms and investors told us the UK’s sector-led approach to AI regulation is the right route. It strikes a balance between ensuring reasonable regulatory oversight while not drowning start-ups and scale-ups in red tape. In contrast, some investors said the EU’s approach had given them pause for thought. Regulatory alignment with the EU should not be pursued for its own sake. The UK needs an AI regime that works in our national interest. Again, it would be helpful if the Minister could assure the House that he will not compromise the UK’s AI potential by closely aligning us with Europe in this area. Our regulatory independence is a real advantage we must not lose.

Relying on existing regulators to ensure good outcomes from AI will work only if they are properly resourced and empowered. The committee was not satisfied that regulators were sufficiently prepared to take on this task. On that, we drew attention to the slow pace of setting up the Government’s central support functions which are supposed to provide expertise and co-ordination to the regulators and check they have the right tools for the job. It would be good to hear from the Minister that progress is being made on all these fronts.

We must also be careful to avoid regulatory capture by the established tech companies in an area where government and regulators will be constantly playing catch-up and needing to draw in external business expertise. I was pleased to see that DSIT has published the conflicts of interest for key senior figures. That sort of transparency within government is much needed and sets a really good example to everyone else.

Finally, I turn to copyright and the unauthorised use of data—a topic that the committee has continued to investigate in our current inquiry on the future of news. We were disappointed by the previous Government’s lack of progress on copyright. It is crucial that we create the necessary conditions to encourage AI innovation, but this should not come at the cost of the UK’s creative industries, which contribute over £100 billion a year to the UK economy. The approach of setting up round tables, led by the Intellectual Property Office, was not convincing and, predictably, it has not solved much.

But I have not been impressed with the new Government’s approach so far either. There has been little action to address this period of protracted uncertainty, one which is increasingly entrenching the status quo with negative consequences for rights holders and AI start-ups. A handful of powerful players continue to dominate and exploit their position with impunity. It is good to see more licensing deals emerging. Advocates say this is a positive development which recognises news publishers’ contribution. But critics argue that the deals are effectively an insurance policy which further cement big tech’s position. More scrutiny of this is needed. I very much hope that the Minister will tell us today when the Government will set out their next steps on copyright. We must find a way forward and one that works for the UK.

I note that, in a previous role, the Minister has advocated for an opt-out approach to text and data mining. He will know that the previous Government did not adopt that approach because of how badly it went down with the content creators. Rights holders must have a way of checking whether their request to block crawlers has been respected. There need to be meaningful sanctions for developers if the rules are not followed. At the moment, the only option is a high-risk court case, probably for a very limited payout. This is not a satisfactory solution, especially when a huge disparity of legal resources exists between the publisher and tech firm. Unless these fundamental shortcomings are resolved, a new regime will be woefully inadequate. I will be disappointed if the Minister proposes an opt-out regime without also providing details of a transparency framework and an enforcement mechanism. If the Government intend to pursue that path, could the Minister explain how he has addressed the concerns of the publishers, when the previous Government could not? It is important to note—and I am very pleased to see this—that the industry itself is coming up with solutions, whether through partnerships or the development of new AI licensing marketplaces. Indeed, there have been some announcements only this week.

All that brings me back to the point that I made at the beginning. Large language models and generative AI offer huge opportunities for innovation. In turn, we must remain innovative ourselves when considering how to harness the potential impact of this technology while also mitigating the risks. We must ensure that our minds and our markets remain open to new ideas. I look forward to hearing everyone’s contribution to today’s debate, both committee members and others with an interest in this area. I am especially looking forward to hearing from the Minister and learning more about his Government’s approach to this critical technology. I beg to move.

--- Later in debate ---
Baroness Stowell of Beeston (Con)

My Lords, I am very grateful to all noble Lords who have contributed to this debate. I thank all noble Lords who made such very kind and generous remarks about me and the committee’s work. I will pick up on just a couple of things because I do not want to take up much time.

On the things that the Minister has just said, I thank him very much for his comprehensive response. I will read it properly in Hansard tomorrow because there was obviously a lot there. I was pleased to hear that the action plan is coming very soon. He emphasised “very” before he said “soon”, so that was encouraging. Clearly, we should learn some more from that about plans for computing power, as he said.

As was mentioned by most noble Lords contributing today, we know that computing power is essential. I understand the point that the Minister made about exascale being different from AI-specific computing power. What the Government are doing on the latter is where it really matters in the context of this debate. It is important none the less not to forget that when commitments to compute, which people see as a signal of the country’s ambition, get cancelled, that sends a rather confusing and mixed message. That was the point we were trying to emphasise.

On regulation, I hear the Minister in that there will be extensive consultation about the AI Bill. As he has heard me say already, it is clearly important in my view and that of the committee that we do not rush to regulation. It is better to get it right.

I will say a couple of things about copyright, which many noble Lords have mentioned today. It gets such emphasis in this debate, perhaps sometimes to the surprise of the large tech businesses that are US companies, because it is a reflection of our very mixed economy over here. We are not in a position where we can put all bets on tech and know that that is where we will see growth so that we do not need to worry about anything else. As the Minister said, which will give some people in the content-creating community comfort, this technology cannot develop without content. Hearing that that is well understood by the Government is important to content creators. However, as much as it will be good to have a serious consultation on whatever proposals the Government come forward with, it is none the less essential that we get a move on with this, because a lot of businesses are starting to feel very threatened.

The only other thing I would add is on the question of risks. In this debate, there has been broad consensus about the opportunity of this technology and its importance. The noble Lords, Lord Strasburger and Lord Griffiths, talked about existential threat, and that was mentioned by the Minister in his reply.

Risk was not something that we treated with any lack of seriousness at all when we carried out our inquiry. The noble Lord, Lord Griffiths, is right: it is important that we listen to the range of voices that have knowledge and experience in this area. However, it is important to recognise—we took this seriously as a committee when conducting our inquiry—that this technology has been subject, and probably is continuing to be subject, to quite a significant power struggle. At the start of our inquiry, the debate about existential threat was very live. One thing that we learned throughout our inquiry was about the divide within the tech world on this debate, and how you had to be really quite tuned in to where these different threats and messages were coming from, so that we did not get sucked down a path which ended up allowing a concentration of power that was also not what many noble Lords wanted.

Overall, our view and certainly my personal view is that, with something as important as this new general-purpose technology—as the Minister said, these things come along very rarely—we make sure that we are driven by the opportunities, mitigating the risks as we go along, and are not driven by the risks and miss out on the opportunities that this technology will provide. All of us today have been able to agree on that, and it is a most important conclusion for us to take away from this debate. I certainly look forward to studying again the proposal of the noble Lord, Lord Tarassenko, for a sovereign LLM concentrating on health issues, which sounded interesting. I am very grateful to noble Lords.

Motion agreed.