Large Language Models and Generative AI (Communications and Digital Committee Report)

Thursday 21st November 2024

Lords Chamber
Motion to Take Note
16:21
Moved by
Baroness Stowell of Beeston

That this House takes note of the Report from the Communications and Digital Committee Large language models and generative AI (1st Report, Session 2023-24, HL Paper 54).

Baroness Stowell of Beeston (Con)

My Lords, it is a great honour to open a debate in your Lordships’ House but an even bigger one to chair one of its Select Committees. Indeed, it is a pleasure to work with colleagues from around the House as we investigate important areas of public policy, and our inquiry on large language models and generative AI was no exception.

Several committee members are here today, and I thank them and all my colleagues for their commitment and contribution to this work. We received, and were grateful for, specialist and expert advice from Professor Mike Wooldridge. We were also supported brilliantly, as ever, by our excellent committee team. I will not run through them all individually but I take this opportunity to make a special mention of our exceptional clerk, Daniel Schlappa. After three years with the Communications and Digital Select Committee, he has this week moved on to the Intelligence and Security Joint Committee. We all owe Dan our sincere thanks and wish him well in his new role.

When we published our report on foundation models in February, we said that this technology would have a profound effect on society, comparable to the introduction of the internet. There has been a lot of hype and some disappointment about generative AI but our overall assessment looks sound. It is not going to solve all the world’s problems but nor is it going to drive widespread unemployment or societal collapse, as some of the gloomiest commentators suggest. However, it will fundamentally change the way that we interact with technology and each other, and its capabilities and the speed of change are astounding. Generative AI models are already able to produce highly sophisticated text, images, videos and computer code. Within just a few months, huge advances have been made in their ability to perform maths and reasoning tasks, and their ability to work autonomously is growing.

The committee was optimistic about the benefits of this new technology, not least because its implications for the UK economy are huge. The Government’s recent AI sector study notes that there are more than 3,000 AI companies in the UK generating more than £10 billion in revenues and employing more than 60,000 people in AI-related roles. Some estimates predict that the UK AI market could grow to over $1 trillion in value by 2035. However, to realise that potential, we have to make the right choices.

Capturing the benefits of AI involves addressing the serious risks associated with the technology’s use. These include threats to cybersecurity and the creation of child sexual abuse materials and terrorism instructions. AI can exacerbate existing challenges around discrimination and bias too. That all needs addressing at pace. We also need better early warning indicators for more catastrophic risks such as biological attacks, destructive cyber weapons or critical infrastructure failure. That is particularly important as the technical means of producing autonomous agents advance, meaning that AI will increasingly be able to direct itself.

I am pleased that the Government took forward some of the committee’s suggestions about the AI risk register. However, while addressing the risks of AI is critical, we cannot afford to let fear dominate the conversation. The greatest danger lies in failing to seize the opportunities that this technology presents. If the UK focuses solely on managing risks, we will fall behind international competitors who are racing ahead with bold ambition.

I do not mean just what is happening in the US and China. Government spending on AI in France since 2018 is estimated to have reached €7.2 billion, which is 60% more than in the UK. Here, since they were elected, the Labour Government have cancelled investment in the Edinburgh exascale computing facility. This sends the wrong message about the UK’s ambition. Unless we are bolder and more ambitious, our reputation as an international leader in AI will become a distant memory. Our new inquiry into scaling up in AI and creative tech will investigate this topic further.

To lead on the global stage, the UK must adopt a vision of progress that attracts the brightest talent, fosters ground-breaking research and encourages a responsible AI ecosystem. I hope that the Government’s long-awaited AI opportunities action plan will be as positive as its title suggests. However, I have also heard talk of closer alignment with EU approaches, which sounds less promising. I will say more about this in a moment. I hope the Minister will confirm today that the Government will embrace a bold, optimistic vision for AI in the UK.

With that ambition in mind, let me highlight three key findings from our report, which are particularly pertinent as the Government formulate their vision for AI. These are: the importance of open market competition, the need for a proportionate approach to regulation, and the urgent issue of copyright.

I will start with competition. Ever since the inception of the internet, we have seen technology markets become dominated by very few companies, notably in cloud and search. The AI market is also consolidating. As Stability AI told us last year, there is a real risk of repeating mistakes we saw years ago. No Government should pick winners, but they should actively promote a healthy and level playing field and make open competition an explicit policy objective for AI. Lots of indicators show that the transformational benefits to society and our economy will be at the application layer of AI. We must not let the largest tech firms running the powerful foundation models create a situation in which they can throttle the kind of disruptive innovators that will drive our future growth.

I was concerned to see the Secretary of State advocating for tech companies to be treated as if they were nation states. I appreciate that their economic heft and influence is extraordinary. Of course we value and want to attract their investment, but we need to be careful about what kind of message we send. Do we really want to say that private companies are on an equal footing with democratically elected Governments? I do not believe we do. I would be grateful if the Minister would reassure the House that the Government intend to deter bad behaviour by big tech companies, not defer to it.

Moving on, the committee called for an AI strategy that focuses on “supporting commercial opportunities”, academic research and spin-outs. As the Government consider AI legislation, they should ensure that innovation and competition are their guiding focus. They must avoid policies that limit open-source AI development or exclude innovative smaller players. When some of us were in San Francisco, we heard about recent efforts to legislate on frontier models in California, which sparked varied concerns from stakeholders, ranging from big tech to small start-ups. We understand that getting these things right is a challenge, but it is one that must be met.

Future safety rules are a good example. Our report called for mandatory safety tests for high-risk, high-impact models. But the key thing here is proportionality. It is important for the riskiest and most capable models to have some safety requirements—just look at the pace of progress in Chinese LLMs, for example—and the AI Safety Institute is making progress on standards. But if the Government set the bar for these standards too low and capture too many businesses, it will curb innovation and undermine the whole purpose of having flexible rules. Again, I would be really grateful if the Minister would reassure me and the House that the Government will ensure that any new safety tests will apply only to the largest and riskiest models, and not stifle new market entrants.

Many US tech firms and investors told us the UK’s sector-led approach to AI regulation is the right route. It strikes a balance between ensuring reasonable regulatory oversight and not drowning start-ups and scale-ups in red tape. In contrast, some investors said the EU’s approach had given them pause for thought. Regulatory alignment with the EU should not be pursued for its own sake. The UK needs an AI regime that works in our national interest. Again, it would be helpful if the Minister could assure the House that he will not compromise the UK’s AI potential by closely aligning us with Europe in this area. Our regulatory independence is a real advantage we must not lose.

Relying on existing regulators to ensure good outcomes from AI will work only if they are properly resourced and empowered. The committee was not satisfied that regulators were sufficiently prepared to take on this task. On that, we drew attention to the slow pace of setting up the Government’s central support functions which are supposed to provide expertise and co-ordination to the regulators and check they have the right tools for the job. It would be good to hear from the Minister that progress is being made on all these fronts.

We must also be careful to avoid regulatory capture by the established tech companies in an area where government and regulators will be constantly playing catch-up and needing to draw in external business expertise. I was pleased to see that DSIT has published the conflict of interest declarations of key senior figures. That sort of transparency within government is much needed and sets a really good example to everyone else.

Finally, I turn to copyright and the unauthorised use of data—a topic that the committee has continued to investigate in our current inquiry on the future of news. We were disappointed by the previous Government’s lack of progress on copyright. It is crucial that we create the necessary conditions to encourage AI innovation, but this should not come at the cost of the UK’s creative industries, which contribute over £100 billion a year to the UK economy. The approach of setting up round tables, led by the Intellectual Property Office, was not convincing and, predictably, it has not solved much.

But I have not been impressed with the new Government’s approach so far either. There has been little action to address this period of protracted uncertainty, one which is increasingly entrenching the status quo with negative consequences for rights holders and AI start-ups. A handful of powerful players continue to dominate and exploit their position with impunity. It is good to see more licensing deals emerging. Advocates say this is a positive development which recognises news publishers’ contribution. But critics argue that the deals are effectively an insurance policy which further cement big tech’s position. More scrutiny of this is needed. I very much hope that the Minister will tell us today when the Government will set out their next steps on copyright. We must find a way forward and one that works for the UK.

I note that the Minister has, in a previous role, advocated an opt-out approach to text and data mining. He will know that the previous Government did not adopt that approach because of how badly it went down with content creators. Rights holders must have a way of checking whether their request to block crawlers has been respected. There need to be meaningful sanctions for developers if the rules are not followed. At the moment, the only option is a high-risk court case, probably for a very limited payout. This is not a satisfactory solution, especially when a huge disparity of legal resources exists between the publisher and the tech firm. Unless these fundamental shortcomings are resolved, a new regime will be woefully inadequate. I will be disappointed if the Minister proposes an opt-out regime without also providing details of a transparency framework and an enforcement mechanism. If the Government intend to pursue that path, could the Minister explain how he has addressed the concerns of the publishers, when the previous Government could not? It is important to note—and I am very pleased to see this—that the industry itself is coming up with solutions, whether through partnerships or the development of new AI licensing marketplaces. Indeed, there have been some announcements only this week.
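
To make concrete what checking a crawler-blocking request involves in practice, the minimal sketch below uses Python’s standard urllib.robotparser to read a site’s robots.txt and report what it asks of named AI crawlers. The site address is hypothetical and the crawler names are merely illustrative of the user agents publishers commonly list; crucially, robots.txt is advisory only, so this shows what a rights holder has requested, not whether any crawler actually complied — which is precisely the enforcement gap described above.

```python
# Minimal sketch: report what a site's robots.txt asks of named AI crawlers.
# The site URL is hypothetical and the crawler names illustrative; robots.txt
# is purely advisory, so this shows the request, not actual compliance.
from urllib.robotparser import RobotFileParser

SITE = "https://example-publisher.co.uk"              # hypothetical publisher
AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]  # illustrative user agents

parser = RobotFileParser(SITE + "/robots.txt")
parser.read()  # fetch and parse the live robots.txt (requires network access)

for agent in AI_CRAWLERS:
    allowed = parser.can_fetch(agent, SITE + "/articles/")
    print(f"{agent}: {'allowed' if allowed else 'blocked'} for /articles/")
```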

All that brings me back to the point that I made at the beginning. Large language models and generative AI offer huge opportunities for innovation. In turn, we must remain innovative ourselves when considering how to harness the potential impact of this technology while also mitigating the risks. We must ensure that our minds and our markets remain open to new ideas. I look forward to hearing everyone’s contribution to today’s debate, both committee members and others with an interest in this area. I am especially looking forward to hearing from the Minister and learning more about his Government’s approach to this critical technology. I beg to move.

16:36
Lord Knight of Weymouth (Lab)

My Lords, while I have interests in the register relating to AI, none is directly involved with LLM development. However, the Good Future Foundation and Educate Ventures Research are both advising others on AI deployment, while Century-Tech and Goodnotes derive some product enhancements from LLM and generative AI technology.

I joined the Communications and Digital Committee after this report was concluded, so I am not marking my own homework when I say that this is an interesting and informative report that I would highly recommend to a wider group of Members of your Lordships’ House than those in their places today. I congratulate the noble Baroness, Lady Stowell, on her speech and the way in which she has introduced this, and the rest of the committee on the report.

We have a big problem of conscious incompetence in the House, with the vast majority leaving tech regulation and debate to a small group of usual suspects. We need a wider diversity of opinion. Given the high probability that this technology will impact almost all sectors, we need expertise in those sectors applied to AI policy, and I recommend reading this report as an opportunity for Members to improve their AI literacy. That is not to say we, the usual suspects, have all the answers; we simply have the confidence to be curious about our own conscious incompetence.

The report reminds us of the core ingredients needed to develop frontier AI: large markets, massive computing power and therefore access to significant energy sources, affordable high-end skills and a lot of high-quality data. All this needs a lot of money and a relatively benign regulatory environment. The report also reminds us that we risk losing out on the huge opportunity that AI gives us for economic growth if we do not get this right, and that we risk otherwise drifting once more into a reliance on just a few tech companies. As Ben Brooks of Stability AI told the committee, currently the world relies on one search company, two social media companies and three cloud providers.

It is worth asking whether we have already missed the boat on the frontier of LLMs. Much of this activity lies in the US, and it is fair to ask whether we are better off building on existing open or closed foundation models at the application layer and using our unique datasets and great skills to develop models for public service outcomes in particular—in health, culture, education and elsewhere—that we and the world can trust and enjoy. Such an approach would acknowledge the limited market access that we have post Brexit, the limited compute and energy resources, and the limited amounts of investment.

However, those limitations should not constrain our ambition around other large models. This report is just about large language models, but others will come and it can help inform attitudes to frontier AI more generally. The coming together of robotics or biotechnology with generative AI and the development of quantum computing are yet to be fully realised, and we should ensure that as a nation we have capacity in some of these future frontiers. It is worth reminding noble Lords that if they thought generative AI was disruptive, some of these next frontiers will be much more so. The UK must prepare for a period of heightened technological turbulence while seeking to take advantage of the great opportunities.

As I said on Tuesday in our debate on the data Bill, we urgently need a White Paper or similar from the Government that paints the whole picture in this area of great technological opportunity and risk. The report finds that the previous Government’s attention was shifting too far towards a narrow view of high-stakes AI safety and that there is a need for a more balanced approach to drive widespread responsible innovation. I agree that the Government should set out a more positive vision for LLMs while also reflecting on risk and safety. Perhaps the Minister could set out in his wind-up when we are likely to get the wider vision that I think we need.

I agree with much of the report’s findings, such as that the Government should explore options for securing a sovereign LLM capability, particularly for public sector applications. The report also covered the likelihood of AI-triggered catastrophic events. While I agree that this is not an imminent risk, it needs a regulatory response. AI could pose an extinction risk to humanity, as recognised by world leaders, AI scientists and leading AI company CEOs. AI systems’ capabilities are growing rapidly. Superintelligent AI systems with intellectual capabilities beyond those of humans would present far greater risks than any AI system in existence today. In other areas, such as medicine or defence, we put guard-rails around development to protect us from risks to humanity. Is this something the Minister agrees should be addressed with the flexible, risk-based rules referenced by the noble Baroness, Lady Stowell?

To conclude, this issue is urgent. We need to balance the desire to grow the economy by harnessing the potential of AI with the need to protect our critical cultural industries, as the noble Baroness referenced. It is a special feature of the British economy, and regulation is needed to protect it. On this I commend the Australian parliamentary joint committee on social media and traditional news. It calls for a number of things, including a must-carry framework to ensure the prominence of journalism across search and social media, and a 15% levy on platforms’ online advertising revenues, including those technically billed offshore, to be distributed by an independent body. Estimates suggest that the proposal would generate approximately 1 billion Australian dollars, or £500 million, in revenue, roughly two to three times what licensing currently delivers in that country. That is a bold set of proposals and I share it to raise our sense of ambition about what we can do to balance regulation and the desire for growth. These are difficult choices, but the Government need to act urgently and I strongly commend this report to the House.

16:43
Baroness Featherstone (LD)

My Lords, it is a great pleasure to follow the noble Lord, Lord Knight of Weymouth, and I pay tribute to the chair of the committee, the noble Baroness, Lady Stowell of Beeston, for her first-class chairing of an inquiry into what was a really complex issue for those of us who are not in the AI or tech industries. It was completely fascinating and very eye-opening—a masterclass.

Today I want to address one of the most pressing and critical issues raised by the noble Baroness, Lady Stowell: the clear evidence that creatives and their livelihoods are in great danger. They are up against the overwhelming and overweening power of the big tech companies and what appeared to be a great deal of reluctance by the industry to concede that a way of remunerating intellectual property use was vital. It was clear that the LLM industry is using the products of the creative industries to train AI and has been text and data mining extensively for its own benefit without paying for it. As I listened to a cascade of excuses and rationales for not dealing with the issue, it was a real-life example of killing the goose that laid the golden egg.

At its most basic, it is critical that we recognise original work, deliver fair compensation, and respect creators’ rights and economic justice. We listened to all the arguments about who owns what, how you prove it, how it is all too difficult, that it is like reading a book or that somehow it is a public good. But in the end, creatives must be recompensed for the use of their creations. We need to ensure a sustainable creative economy. As the noble Baroness said, the creative industries are a massive economic driver for our national economy.

There is both a legal and an ethical responsibility to ensure that there is adherence to copyright laws. Those laws exist to protect the work of creators. As this field develops, and AI becomes more integrated into industries, it is a critical requirement and ethical responsibility of companies to respect intellectual property. It was clear from the evidence we heard that much of the data mining that has been going on has taken place without any permission from or compensation to the rights holders. Yes, there were esoteric discussions as to where copyright belonged: could it really be the original artist when somewhere in a black box—or maybe it was a sandbox, I cannot remember—fibres were creating something anew from the feed? That may be challenging, but the onus is on the AI industry and the Government to protect our creatives. As a group, and given their talents, they are not always paid well anyway. For them not to receive anything, when their work provides the basis for AI training for an industry that is going to grow wildly economically rich, is simply not acceptable.

Our copyright law is absolutely clear on this. Moreover, the evidence given to the committee, such as from the Society of Authors, noted that AI systems “would simply collapse” if they could not access original talent. It was equally clear from Dan Conway, CEO of the Publishers Association, in his evidence to the committee, that LLMs

“are infringing copyrighted content on an absolutely massive scale … when they collect the information”

and in

“how they store the information and how they handle it”.

There was clear evidence from model outputs that developers had used pirated content from the Books3 database, and he alleged that they were “not currently compliant” with UK law. Microsoft countered with the argument that, basically, they were offering a public good and therefore copyright laws should not apply to ideas—good try.

I was also interested to receive a briefing from UK Music, which is concerned—justly, in my view—that the Government might try to introduce further text and data mining copyright exceptions, which would allow AI service providers to train their systems on music without the consent of, or need to compensate, its creators. The oft-made suggestion, as raised by the noble Baroness, is an opt-out system. It seems relatively practical: you could opt in if you did not mind your stuff being used, or you could opt out. But it will not work. There are no existing, effective opt-out schemes that reliably exclude content from training; doing so is, at present, quite impossible. Artists have no way of controlling whether downstream uses of their original work are opted out of generative AI training, since they have no control over the URLs where those copies are hosted—perhaps we should look at extraterritorial law. The evidence suggests that the majority of people who have the option to opt out of generative AI training do not even realise that they have the option. Moreover, if opt-out schemes are adopted, publishers and copyright holders will have only the illusion of choice. If they opt out of AI training, they opt out of being findable on the internet altogether.

Record keeping has also been suggested—I do not think the committee covered this, but I stand to be corrected. Currently there is no stand-alone legal requirement in the UK to disclose the material that AI systems are trained on, beyond the data protection law framework. I believe that record keeping should be mandatory.

AI cannot create in a vacuum. It needs huge datasets, so often drawn from copyrighted materials, to function. Clearly, it would be much better to encourage collaboration between the tech industry and the creative industries, instead of AI remaining the threat that it currently is. I implore AI companies to accept this thesis and ensure that they are transparent about how their models are trained and which data is used.

There are a lot of ideas around about group licensing and so on. It would be far more productive if the LLMs worked with the creatives. A lot of creatives are individuals or small companies. They just do not have the means to enforce their IP rights through the legal process or to track how their works are being used in AI training. That is why the committee’s recommendation that the IPO code must ensure that creators are fully empowered to exercise their rights is so important, alongside the requirement for developers to make clear whether their web crawlers are being used to acquire data for generative AI training or for other purposes.

Ultimately, AI’s integration into the creative industries brings a host of economic, ethical and legal challenges, but the most essential part is protecting the rights of creators to ensure fairness in the distribution of economic value, so that creators and the AI industry can both thrive. I trust the Government will ensure that the committee’s recommendations are implemented in full.

16:51
Baroness Wheatcroft (CB)

My Lords, it is a pleasure to follow the noble Baroness, Lady Featherstone. I must join others and add my thanks to the noble Baroness, Lady Stowell, for the impressive manner in which she led the inquiry and introduced this debate. I cannot overstate the excellent service we had from our staff, as the noble Baroness, Lady Stowell, said. In particular, one must single out our brilliant clerk, Daniel Schlappa, simply because he is no longer our clerk. The committee that gets his services next is very lucky; his insights were always pertinent and helpful.

I was delighted when the committee decided on this topic because it was clearly an important subject but one on which my knowledge was limited. It would therefore provide a stimulating learning experience. That certainly proved to be the case and continues to be so. In preparing for this debate, I encountered the word “exaflop”. I am not sure that it will ever be part of my daily vocabulary, but I have no doubt that the Minister, with his background, is more than familiar with the term. The idea of one quintillion—that is, one followed by 18 zeros—is hard to grasp, but one quintillion floating point operations per second is an exaflop. The joy of being in the Lords is that one is always learning. Why that is relevant to this debate is something to which I will return.
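
The scale is easiest to grasp with a little arithmetic. The sketch below is purely illustrative: the training workload figure is an assumed round number chosen for the example, not one drawn from the committee’s report.

```python
# Illustrative arithmetic only: how long a fixed training workload takes at
# different machine speeds. The workload figure is an assumption chosen for
# illustration, not a number taken from the committee's report.
EXAFLOPS = 1e18   # exascale machine: 10**18 floating point operations/second
PETAFLOPS = 1e15  # petascale machine: a thousand times slower

WORKLOAD = 1e24   # hypothetical total operations to train a large model

for name, speed in [("exascale", EXAFLOPS), ("petascale", PETAFLOPS)]:
    days = WORKLOAD / speed / 86_400  # 86,400 seconds in a day
    print(f"{name}: {days:,.1f} days")
# exascale: 11.6 days; petascale: 11,574.1 days, i.e. over 31 years
```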

First, I stress the committee’s conclusion that LLMs and AI can, and will, be hugely positive contributors to our lives and economy. We must therefore be careful not to allow a climate of fear to be fostered around this latest stage in the march of technology. Careful and considered regulation is essential but while nations individually can deal with some aspects of this, global co-ordination, that nirvana for so many sectors, remains the ideal.

The Bletchley declaration was a positive step in the direction of global co-operation. Signed in late 2023 by 28 countries and the EU, it pledged to establish an international network of

“research on … AI safety … to facilitate the provision of the best science available for policy making and the public good”.

That sounds a good and noble aim, although the presence of China on the list of signatories caused me to ponder just what might emerge from this laudable pledge. If the Minister is in a position to update the House on what the results have been so far, I think we would all be grateful. The Bletchley delegates planned to meet again in 2024, so perhaps he could tell us what came out of that meeting, if it ever happened.

Our report made a sheaf of recommendations to government. The two most important, perhaps, might be summed up as follows. First, do not waste time: there is no time to waste; this is happening now and at breakneck speed. Secondly, avoid regulatory capture, but regulate proportionately, as the noble Baroness, Lady Stowell, said.

We were also concerned about the need to protect copyright. This is a creative country in which many individuals and businesses earn their living through words and ideas. They cannot afford to have them stolen, and AI is already doing that at scale. The noble Baroness, Lady Featherstone, made this case admirably, and others will no doubt address this topic, but the need for government clarity on copyright remains pressing.

I return to those exaflops. In the remainder of my speech, I will concentrate on two specific issues in our report: the lack of compute power and whether the Government should explore the possibility of a sovereign LLM capability. The technology we are discussing today consumes computing power on an unprecedented scale. The largest AI models use many exaflops of compute: many quintillions of floating point operations every second. That also requires a huge amount of energy, but that is an issue for another debate.

In his 2023 review of compute in the UK, Professor Zoubin Ghahramani concluded that:

“The UK has great talent in AI with a vibrant start-up ecosystem, but public investment in AI compute is seriously lagging”.


He made that point in evidence to the committee. He recommended in 2023 a national co-ordinating body to deliver the vision for compute, one that could provide long-term stability while adapting to the rapid pace of change. He called for immediate investment so that the UK did not fall behind its peers.

To me, the exascale computer project in Edinburgh sounded like just the thing—50 times more powerful than our existing AI resources—but this Government have pulled the plug on that. We all know about the £22 billion black hole, but, without uttering that phrase, can the Minister tell us whether he thinks that that decision might not be the end of the story? After all, the new fiscal rules allow the Chancellor to borrow to invest in important infrastructure projects. Would compute come into that category?

Secondly, will he say whether there might be some fresh thinking on the idea of a sovereign LLM? The previous Government’s response to our recommendation on this was that it was too early because LLM tools were still evolving, but the dominance of just a few overseas companies puts the UK in the potentially uncomfortable position of having to rely on core data from elsewhere for government to provide essential services. As the noble Lord, Lord Knight of Weymouth, said, perhaps the UK must accept that it missed the boat on LLMs and concentrate on what it is already doing very successfully: building specialist AI models. For government, that poses particular risks. Might there be some middle way that government should be—and maybe is—examining?

16:58
Baroness Healy of Primrose Hill (Lab)

My Lords, it is a pleasure to follow the noble Baroness, Lady Wheatcroft. I welcome the new Government’s determination that artificial intelligence can kickstart an era of economic growth, transform the delivery of public services and boost living standards for working people. Therefore, I hope they will welcome the recommendations in our Select Committee report, Large Language Models and Generative AI, which clearly sets out the opportunities and risks associated with this epoch-defining technology.

I, too, served on this committee under the admirable leadership of the noble Baroness, Lady Stowell of Beeston, who has set out the findings of our report so well. We are fortunate to have such a knowledgeable Minister in my noble friend Lord Vallance replying to this debate. His Pro-innovation Regulation of Technologies Review, undertaken when he was the Chief Scientific Adviser in a former life, raises important questions on copyright and regulation, both of which feature in our report.

Last week, the Minister, my noble friend Lady Jones of Whitchurch, explained to the House the difficulties in finding the right balance between fostering innovation in AI and ensuring protection for creators and the creative industries. Her acknowledgement that this must be resolved soon was welcome, but our report made clear the urgency surrounding this vexed question of copyright. Can my noble friend the Minister give any update on possible progress?

The News Media Association warns that, without incentivising a dynamic licensing market, the creative industries will be unable to invest in new content, generative AI firms will have less and less high-quality data with which to train their LLMs, and AI innovation will stall. Our report recognised the complexity of the issue but stated that

“the principles remain clear. The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation. The current legal framework is failing to ensure these outcomes occur and the Government has a duty to act”.

This was directed at the previous Government but applies equally to the present one.

Another matter of concern to the committee is the powers and abilities of regulators to ensure both the safety of these LLMs and continued innovation. The new Labour Government clearly recognise the importance of the AI sector as a key part of their industrial strategy and I welcome the announcement of the AI action plan, which is expected to be published this month. Can my noble friend confirm that this is still the expected timescale, and when the AI opportunities unit will be set up in DSIT to implement the plan?

More details of how AI will enhance growth and productivity and support the Government’s five stated missions, including breaking down barriers to opportunity, would be welcome to better understand how AI will transform citizens’ experiences of interacting with the state and boost take-up in all parts of the public sector and the wider economy. But, before this is possible, regulatory structures to ensure responsible innovation need to be strengthened, as our report found significant variation in regulators’ staffing and technical expertise. There is a pressing need for support from the Government’s central functions in providing cross-regulator co-ordination. Relying on existing regulators to ensure good outcomes from AI will work only if they are properly resourced and empowered. As our report said:

“The Government should introduce standardised powers for the main regulators who are expected to lead on AI oversight to ensure they can gather information relating to AI processes and conduct technical, empirical and governance audits. It should also ensure there are meaningful sanctions to provide credible deterrents against egregious wrongdoing”.


Can my noble friend clarify how the new Bill will support growth and innovation by ending current regulatory uncertainty for AI developers, strengthening public trust and boosting business confidence? Is this part of the new regulatory innovation office’s role?

I welcome the Secretary of State’s commitment that the promised legislation will focus on the most advanced LLMs and not seek to regulate the entire industry, but rather make existing agreements between technology companies and the Government legally binding and turn the AI Safety Institute from a directorate of DSIT into an arm’s-length body, which he has said

“will give it a level of independence and a long-term future, because safety is so important”.

In conclusion, how confident is my noble friend that the recommendations of his March 2023 review will be implemented? It recognised that:

“Regulator behaviour and culture is a major determinant of whether innovators can effectively navigate adapting regulatory frameworks … the challenge for government is to keep pace with the speed of technological change: unlocking the enormous benefits of digital technologies, while minimising the risks they present both now and in the future”.


I wish the new Government well in this daunting task. As our report said:

“Capturing the benefits will require addressing risks. Many are formidable, including credible threats to public safety, societal values, open market competition and UK economic competitiveness. Farsighted, nuanced and speedy action is therefore needed to catalyse innovation responsibly and mitigate risks proportionately”.


Mitigation is essential, and I welcome the Government’s announcement of research grants to commence the important work of

“boosting society’s resilience against AI risks such as deepfakes, misinformation and cyberattacks”,

and

“the threat of AI systems failing unexpectedly, for example in the financial sector”.

Our report outlined the wide-ranging nature of these risks:

“The most immediate security concerns from LLMs come from making existing malicious activities easier, rather than qualitatively new risks. The Government should work with industry at pace to scale existing mitigations in the areas of cyber security (including systems vulnerable to voice cloning), child sexual abuse material, counter-terror, and counter-disinformation”.


I trust that the new AI strategy can be truly effective in countering risk and encouraging developments of real benefit to society.

17:05
Lord Strasburger (LD)

My Lords, I congratulate the noble Baroness, Lady Stowell, and the Communications and Digital Committee on their very thorough and comprehensive report. It points out the very considerable benefits that generative AI and large language models can deliver for this country, and the human race in general. The report declares that large language models put us on the brink of epoch-defining changes, comparable to the invention of the internet, and I have no doubt about the truth of that prediction.

However, what price will we pay for these benefits? I am deeply worried about the great risks that are inherent in the breakneck pace at which this technology is being developed, without any meaningful attempts to regulate it—with the possible exception of the EU. The report identifies a plethora of potential areas of risk, from minor through to catastrophic, covering a non-exhaustive list of areas, including multiplying existing malicious capabilities, increasing the scale and pace of cyberattacks, enabling terrorism, generating synthetic child sexual abuse material, increasing disinformation via hyper-realistic bots, enabling biological or chemical release at pandemic scale, causing critical infrastructure failure or triggering an uncontrollable proliferation of AI models. I will not go on with the list, because anyone who has read the report will know what I am talking about. These are the consequences of malicious, or perhaps merely careless, uses of the technology, and they could have a very significant—perhaps catastrophic—impact on the citizens of this country, or even worldwide.

The report states in paragraph 140:

“There are … no warning indicators for a rapid and uncontrollable escalation of capabilities resulting in catastrophic risk”.


It then tries to reassure us—without much success, in my case—by saying:

“There is no cause for panic, but the implications of this intelligence blind spot deserve sober consideration”.


That is putting it very mildly.

However, this is not my main concern about the risks presented by AI, and I speak as one who had slight interaction with embryonic AI in the 1980s. The risks I have mentioned so far arise out of the probable misuse of this technology, either deliberately or accidentally. They might be mitigated by tight international regulation, although how we can prevent bad actors operating in regions devoid of regulation, I do not know. These enterprises are so competitive, so globalised and so driven by commercial pressure that anything that can be done, will be done, somewhere.

My main concern, and one to which I cannot see an obvious answer, is not what happens when the technology is misused. What worries me is the risk to humans if we lose control of the AI technology itself. The report does mention this risk, saying:

“This might occur because humans gradually hand over control to highly capable systems that vastly exceed our understanding; and/or the AI system pursues goals which are not aligned with human welfare and reduce human agency”.


That is a very polite way of saying that the AI systems might acquire greater intelligence than humans and pursue goals of their own: goals that are decidedly detrimental to the human race, such as eliminating or enslaving it. Before any noble Lords conclude that I am off with the fairies, I direct them to paragraph 154 of the report, which indicates a “non-zero likelihood”—that apparently means a remote chance—of existential risks materialising, but not, the report says, in the next three years. That is not very reassuring for those of us who hope to live longer than three years.

Some months ago, I had a conversation with Geoff Hinton—here in this House, as it happens—who is widely recognised to be one of the godfathers of AI and has just been awarded a Nobel prize. He resigned from Google to be free to warn the world about the existential risks from AI, and he is not alone in those views. His very well-informed view is that there is a risk of humans losing control of AI technology, with existential consequences. When I asked him what the good news was, he thought about it and said, “It’s a good time to be 76”. My rather flippant response was, “Well, at least we don’t have to worry about climate change”.

Seriously, the thing about existential risks is that we do not get a second chance. There is no way back. Even if the probability is very low, the consequence is so catastrophic for mankind that we cannot simply hope it does not happen. As the noble Lord, Lord Rees, the Astronomer Royal, said 10 years ago in a TED talk when discussing all cataclysmic risks:

“Our earth has existed for 45 million centuries, but this”


century

“is special—it’s the first where one species, ours, has the planet’s future in its hands … We and our political masters are in denial about catastrophic scenarios … But if an event is potentially devastating, it is worth paying a substantial premium to safeguard against it”,

rather like

“fire insurance on our house”.

The committee’s report devotes seven paragraphs out of 259 to the existential risks of the technology turning the tables on its human masters. This would suggest the committee did not take that risk all that seriously. Indeed, it says in paragraph 155:

“As our understanding of this technology grows … we hope concerns about existential risk will decline”.


I am not happy to rely on hope where existential risk is concerned, so I ask the Minister for some reassurance that this matter is in hand.

What steps are the Government taking, alone and with others, to mitigate the specific risk—albeit a small one—of humans losing control of AI systems such that they wipe out humanity?

17:13
Lord Kamall (Con)

My Lords, I refer noble Lords to my interests as set out in the register. I also thank the committee staff for their work during the inquiry and in writing the report, all the witnesses who offered us a range of views on this fascinating topic, as well as our incredibly able chairperson, the noble Baroness, Lady Stowell, and my committee colleagues.

I am someone who studied engineering for my first degree, so I will go back to first principles. Large language models work by learning relationships between pieces of data contained in large datasets. They use that to predict sequences, which then enables them to generate natural language text, such as articles, student essays, or even politicians’ speeches—but not this one. Finance companies use LLMs to predict market trends based on past data; marketing agencies use LLMs to analyse consumer behaviour in developing marketing campaigns; and, in health, LLMs have been used to analyse patient records and clinical notes to help diagnosis and develop treatment plans.
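
The principle the noble Lord describes—learning relationships in data and using them to predict sequences—can be shown with a deliberately tiny sketch. The bigram counter below is the simplest possible “language model”, orders of magnitude removed from a real LLM and trained on an invented corpus, but the mechanism of predicting each next token from learned statistics is the same in kind.

```python
# Toy illustration of next-word prediction: learn which word follows which
# in a tiny corpus, then generate text by repeatedly sampling a successor.
# This is a bigram counter, not a real LLM; the corpus is invented.
import random
from collections import defaultdict

corpus = ("the committee heard evidence . the committee published a report . "
          "the government read the report .").split()

following = defaultdict(list)       # learned relationships between words
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)  # record each observed successor

word, output = "the", ["the"]
for _ in range(8):                  # predict a sequence, one word at a time
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))
```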

While this is welcome, LLMs also hallucinate. They produce a result that seems plausible but is in fact false, because the source data from which the LLM calculated the probability of that information being correct was itself incorrect. An AI expert told me that all output from an LLM, whether accurate or not, could be considered an hallucination, since the LLM itself possesses no real knowledge or intelligence. In fact, so much of what we call artificial intelligence at the moment is not yet intelligent and can be better described as machine learning. Given this, we should be careful not to put too much trust in LLMs and AI, especially in automated decision-making.

In other debates, I have shared my terrible customer experiences with an airline and a fintech company, both of which seemed to use automated decision-making, but I will not repeat them now. While they got away with it, poor automated decision-making in healthcare could be dangerous and even catastrophic, leading to deaths. We need to proceed with caution when using LLMs and other AI systems for automated decision-making, something that will be raised in debate on the Data (Use and Access) Bill. We also need to consider safeguards and, possibly, an immediate human back-up on site if something goes wrong.

These examples about the good and the bad highlight the two key principles in technology legislation and regulation. You have, on the one hand, the precautionary principle and, on the other, the innovation principle. Witnesses tended to portray the US approach, certainly at the federal level, as driven mostly by the large US tech companies, while the European Union’s AI Act was described as “prescriptive”, overly precautionary and “stifling innovation”. Witnesses saw this as an opportunity for the UK to continue to host world-leading companies but, as other noble Lords have said, we cannot delay. Indeed, the report calls for the UK Government and industry to prepare now to take advantage of the opportunities, as the noble Baroness, Lady Wheatcroft, said.

At the same time, the Government should guard against regulatory capture or rent seeking by the big players, who may lobby for regulations benefiting them at a cost to other companies. We also considered the range of, and trade-offs between, open and closed models. While open models may offer greater access and competition, they may make it harder to control the proliferation of dangerous capabilities. While closed models may offer more control and security, they may give too much power to the big tech companies. What is the Government’s current thinking on the range of closed and open models and those in between? Who are they consulting to inform this thinking?

The previous Government’s AI Safety Summit was welcomed by many, and we heard calls to address the immediate risks from LLMs, since malicious activities, such as fake pictures and fake news, become easier and cheaper with LLMs, as the noble Baroness, Lady Healy, said. As the noble Lords, Lord Knight and Lord Strasburger, said, witnesses told us about the catastrophic risks, which are defined as about 1,000 UK deaths and tens of billions in financial damages. They believed that these were unlikely in the next few years, but not impossible, as next-generation capabilities come online. Witnesses suggested mandatory safety tests for high-risk, high-impact models. Can the Minister explain the current Government’s thinking on mandatory safety tests, especially for the high-risk, high-impact models?

At the same time, witnesses warned against a narrative of AI being mostly a safety issue. They wanted the Government to speak more about innovation and opportunity, and to focus on the three pillars of AI. The first is data for training and evaluation; the second is about algorithms and the talent to write, and perhaps to rewrite, them; and the third is computing power. As other noble Lords have said, there is criticism of the current Government’s decision to scrap the exascale supercomputer announced by the previous Government. Can the Minister explain where he sees the UK in relation to each of the three pillars, especially on computing power?

As the noble Baroness, Lady Featherstone, and others have said, one of the trickiest issues we discussed was copyright. Rights holders want the power to check whether their data is used without their permission. At least one witness questioned whether this was technically possible. Some rights holders asked for investment in high-quality training datasets to encourage LLMs to use licensed material. In contrast, AI companies distinguished between inputs and outputs. For example, an input would be if you listened to lots of music to learn to play the blues guitar. AI companies argue that this is analogous to using copyrighted data for training. For them, an output would be if a musician plays a specific song, such as “Red House” by Jimi Hendrix. The rights holders would then be compensated, even though poor Jimi is long dead. However, rights holders criticised this distinction, arguing that it undermines the UK’s thriving creative sector, so you can see the challenge that we have. Can the Minister share the Government’s thinking on copyright material as training data?

For the overall regulatory framework, the Government have been urged to empower sector regulators to regulate proportionally, considering the careful balance and sometimes trade-off between innovation and precaution. Most witnesses recommended that the UK forge its own path on AI regulation—fewer big corporations than the US but more flexible than the EU. We should also be aware that technology is usually ahead of regulation. If you try too much a priori legislation, you risk stifling innovation. At the same time, no matter how libertarian you may be, when things go wrong voters expect politicians and regulators to act.

To end, can the Minister tell the House whether the Government plan to align with the US’s more corporate approach or the EU’s less innovative AI regulation, or to forge an independent path so that the UK can be a home for world-leading LLM and AI companies?

17:20
Lord Ranger of Northwood (Con)

My Lords, it is truly an honour to follow my noble friend Lord Kamall. I begin by acknowledging the excellent work of the House of Lords Communications and Digital Committee, led with great dedication by my noble friend Lady Stowell, in producing this thorough report on large language models and generative AI.

I note my entry in the register of interests, especially my last role at Atos, where over six years ago I led a thought leadership campaign on the digital vision for AI. Six years is an exceptionally long time in the world of tech. Since then, we have accelerated into the early adoption and application era of AI. Now, as a Member of your Lordships’ House, I am delighted to be vice-chair of the AI APPG.

The pace of both development and adoption in the last 24 months has been breathtaking, and a key moment for the AI industry was obviously the launch of ChatGPT on 30 November 2022. That date will no doubt go down in history, not just for technologists but because of how it transformed the awareness and accessibility of LLM-based gen AI services to the public. It was the AI users’ big bang moment. It is because of the pace of commercial and technological change that I have been meeting with businesses and AI organisations during the past six months to hear at first hand what they see as the main issues and opportunities, as well as taking part in the evidence sessions that the AI APPG has held.

It has become clear to me that the UK’s AI market and particularly native AI businesses—those that develop and directly deliver AI capabilities and services—are seeing their growth turbocharged. They are recruiting, expanding and developing a skilled workforce, receiving investment and harnessing opportunities locally and internationally faster than they can think. This is an exciting time for our AI industry.

What do they want from Government? It is a case not of what we can do for them but of what these native artificial intelligence businesses can do for us. They want to be able to inform, influence and raise awareness of the key factors impacting them and their industry: how they are witnessing at first hand the adoption and implementation of AI systems and services; the investment landscape and growth opportunities that are available and how government policy can further support them; and the need to support investment in industry skills and academic research to ensure medium to long-term sustainability of their workforce and capabilities. As part of the development programme for the much-anticipated government AI action plan, what engagement has there been with the industry on these specific topics?

There are also various macro factors that the industry is clear on and that must be part of any AI plan or growth strategy for the UK. First, the availability of large datasets, as has been mentioned in this debate, is critical to the development of LLMs. Secondly, the advancement of generative AI will be directly proportional to the national availability of compute power. Thirdly, energy requirements to support compute must be considered as part of the investment landscape and even as part of national critical infrastructure. That is why there was such disappointment at the decision by this Government to cancel the investment into delivering the exascale computer in Edinburgh. I echo the words of the noble Baroness, Lady Wheatcroft, and ask the Minister how the Government will mitigate the impact of the loss of this compute power in the UK.

There is one other major consideration that has been mentioned already, and that businesses all raised—regulation. The AI industry is desperately keen to input into any development of regulatory considerations and is looking for signals from this Government as to their approach.

In July the Secretary of State for DSIT, Mr Kyle, said in a Written Statement that in line with the Labour Party’s manifesto, some AI companies will be regulated. Legislation would be

“highly targeted and will support growth and innovation by ending regulatory uncertainty”.—[Official Report, Commons, 26/7/24; col. 34WS.]

Four months later, on 6 November at the Future of AI Summit, the Secretary of State said that legislation would be introduced “next year”—that is a large 12-month window. In August, Mr Kyle said that legislation would focus on the most advanced LLMs and would not regulate the entire industry. It feels a bit like a trail of breadcrumbs being laid, with the industry not knowing when or where the next crumb indicating a sense of regulatory direction will be found.

As I mentioned, every AI business and sector partner I have met has requested both clarity and the opportunity to input into regulatory development, but has felt uncertain about how the Government are developing their thinking. The industry has been clear on the need for any regulation to be proportionate, to avoid duplication with existing technology-neutral rules and to minimise regulatory fragmentation. For example, this is key to the UK financial services industry’s international competitiveness and its role as an enabler of economic growth across the UK. For a leading tech-enabled industry that has been using advanced technologies safely and effectively for years, disproportionate legislation would create unnecessary regulatory burdens and stifle operations and trade, slowing innovation and productivity, and putting our firms at a global competitive disadvantage. Have the Government established a series of clear principles that will be used in the development of targeted and proportionate regulation?

As my noble friend Lady Stowell highlighted, I am also aware, through discussions with major investors, that the development of the regulatory environment in the UK is being closely watched as a gauge of how attractive our industry is and how much international investment may flow into it. Global investors clearly see an opportunity for the UK to learn from the US light-touch approach but also from what appears to be the vice-like grip the EU has taken with the development of its landmark AI Act.

By the way, I do not take this view on EU regulation as my own without input from others. Notably, I attended the AI CogX summit in London at the beginning of October, where an MEP who had worked on the Act stated that he believed the EU had

“created a barrier to entry”

and had established a law with such high compliance costs that it was creating problems for firms. There is a sense that the EU AI Act has taken a wrong turn and is already diminishing both innovation and the flow of investment into the region. What assessment are the Government making of the Act, and has its early impact on the region been discussed with EU counterparts?

To conclude, I have a few quick-fire questions. The previous Government had committed to a pro-innovation regulatory approach—will this Government too? Will the Government’s AI action plan include a suggested regulatory approach for the UK? When will it be published?

17:28
Lord Tarassenko (CB)

My Lords, I draw the House’s attention to my registered interests as a director of Oxehealth, a University of Oxford spin-out that uses AI for healthcare applications.

It is a great pleasure to follow the noble Lord, Lord Ranger, in this debate. Like him, in the time available I will speak mostly about the opportunities for the UK, more specifically in one sector. I congratulate the noble Baroness, Lady Stowell, on her excellent report, which she would probably like to know has been discussed positively in the common rooms in Oxford with the inquiry’s expert, Professor Mike Wooldridge.

It is not even 10 months since the report was published, but we already have new data points on the likely trajectories of large language models. The focus is shifting away from the pre-training of LLMs on ever-bigger datasets to what happens at inference time, when these models are used to generate answers to users’ queries. We are seeing the introduction of chain-of-thought reasoning techniques, for example in OpenAI’s o1, to encourage models to reason in a structured, logical and interpretable way. This approach may help users to understand the LLM’s reasoning process and increase their trust in the answers.
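That contrast between one-shot answering and prompted step-by-step reasoning can be sketched in a few lines. In the sketch below, the ask_model() stub, the prompt wording and the canned reply are all illustrative assumptions; this is not OpenAI’s o1 implementation, whose reasoning machinery is proprietary.

```python
# A minimal sketch of inference-time, step-by-step reasoning via
# prompting. ask_model() is a stand-in stub, not a real API client.

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a hosted LLM; returns a canned reply here."""
    return f"[model reply to: {prompt[:40]}...]"

def direct_answer(question: str) -> str:
    # One-shot prompting: the model commits to an answer immediately.
    return ask_model(f"Question: {question}\nAnswer:")

def reasoned_answer(question: str) -> str:
    # The model is asked to lay out numbered intermediate steps before
    # the final answer, so the user can inspect how it got there.
    prompt = (
        f"Question: {question}\n"
        "Think step by step. Number each step, then give the final "
        "answer on a line starting 'Answer:'."
    )
    return ask_model(prompt)

print(direct_answer("What is 17 x 24?"))
print(reasoned_answer("What is 17 x 24?"))
```

The numbered steps give the user something to inspect alongside the answer, which is where the claimed gain in interpretability and trust comes from.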

We are also seeing the emphasis shifting from text-only inputs into LLMs to multimodal inputs. Models are now being trained with images, videos and audio content; in fact, we should no longer call them large language models but large multimodal models—LMMs.

We are still awaiting the report on the AI opportunities action plan, written by Matt Clifford, the chair of ARIA, but we already know that the UK has some extraordinary datasets and a strong tradition of trusted governance, which together represent a unique opportunity for the application of generative AI.

The Sudlow review, Uniting the UK’s Health Data: A Huge Opportunity for Society, published two weeks ago tomorrow, hints at what could be achieved through linking multiple NHS data sources. The review stresses the need to recognise that national health data is part of the critical national infrastructure; we should go beyond this by identifying it as a sovereign data asset for the UK. As 98% of the 67 million UK citizens receive most of their healthcare from the NHS, this data is the most comprehensive large-scale healthcare dataset worldwide.

Generative AI has the potential to extract the full value from this unique, multimodal dataset and deliver a step change in disease prevention, diagnosis and treatment. To unlock insights from the UK’s health data, we need to build a sovereign AI capability that is pre-trained on the linked NHS datasets and does not rely on closed, proprietary models such as OpenAI’s GPT-4 or Google’s Gemini, which are pre-trained on the entire content of the internet. This sovereign AI capability will be a suite of medium-scale sovereign LMMs, or HealthGPTs if you will, applied to different combinations of de-identified vital-sign data, laboratory tests, diagnostic tests, CT scans, MR scans, pathology images, discharge summaries and outcomes data, all available from the secure data environments—SDEs—currently being assembled within the NHS.

Linked datasets enable the learning of new knowledge within a large multimodal model; for example, an LMM pre-trained on linked digital pathology data and CT scans will be able to learn how different pathologies appear on those CT scans. Of course, very few patients will have a complete dataset, but generative AI algorithms can naturally handle the variability of each linked record. In addition, each LMM dataset can be augmented by text from medical textbooks, research papers and content from trusted websites such as those maintained by, for example, NHS England or Diabetes UK.
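A minimal sketch may make this concrete. The field names below are invented for illustration—real NHS secure data environments define their own schemas—but they show how a linked record with optional modalities naturally accommodates incomplete data:

```python
# A sketch of a linked, de-identified multimodal record. Every field
# name is hypothetical. The point is that each modality is optional,
# so an incomplete record still contributes whatever it does have.
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class LinkedRecord:
    pseudonym: str                            # de-identified patient key
    vital_signs: Optional[list] = None        # time series of observations
    lab_tests: Optional[dict] = None          # test name -> result
    ct_scan_ref: Optional[str] = None         # pointer into an imaging store
    pathology_ref: Optional[str] = None       # pointer to slide images
    discharge_summary: Optional[str] = None   # free text
    outcome: Optional[str] = None             # coded outcome

def available_modalities(record: LinkedRecord) -> list:
    """List the modalities present; training code can mask the rest."""
    return [f.name for f in fields(record)
            if f.name != "pseudonym" and getattr(record, f.name) is not None]

# A record with only lab tests and a discharge summary is still usable.
r = LinkedRecord("p-0001", lab_tests={"HbA1c": 48},
                 discharge_summary="Discharged on metformin.")
print(available_modalities(r))  # ['lab_tests', 'discharge_summary']
```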

A simple example of such an LMM will help to illustrate the power of this approach for decision support—not decision-making—in healthcare. Imagine a patient turning up at her GP practice with a hard-to-diagnose autoimmune disease. With a description of her symptoms, together with the results of lab tests and her electronic patient record—EPR—data, DrugGPT, which is currently under development, will not only suggest a diagnosis to the GP but also recommend the right drugs and the appropriate dosage for that patient. It will also highlight any possible drug-drug interactions from knowledge of the patient’s existing medications in her EPR.

Of course, to build this sovereign LMM capability, the suite of HealthGPTs such as DrugGPT will require initial investment, but within five years such a capability should be self-funding. Access to any HealthGPT from an NHS log-in would be free; academic researchers, UK SMEs and multinationals would pay to access it through a suitable API, with a variable tariff according to the type of user. This income could be used to fund the HealthGPT lab, as well as the data wrangling and data curation activities of the teams maintaining the NHS’s secure data environments, with the surplus going to NHS trusts and GP practices in proportion to the amount of de-identified data which they will have supplied to pre-train the HealthGPTs.
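Purely by way of illustration—the user classes come from the proposal above, while the figures are invented placeholders rather than suggested prices—the variable tariff could be expressed as simply as this:

```python
# An illustrative sketch of the variable-tariff funding model. The
# prices are invented for the example, not proposed figures.
TARIFF_PER_1K_QUERIES = {
    "nhs": 0.0,            # free at the point of use via an NHS log-in
    "academic": 10.0,      # nominal rate for researchers
    "uk_sme": 50.0,
    "multinational": 250.0,
}

def api_charge(user_class: str, queries: int) -> float:
    """Charge for API usage under the sketched tariff, in pounds."""
    return TARIFF_PER_1K_QUERIES[user_class] * queries / 1000

print(api_charge("academic", 120_000))       # 1200.0
print(api_charge("multinational", 120_000))  # 30000.0
```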

For the general public to understand the value of the insights generated by these sovereign LMMs, a quarterly report on key insights would be sent out through the NHS app to all its 34 million users—three-quarters of the adult population in the UK—except to those who have opted out of having their data used for research.

The time is right to build a sovereign AI capability for health based on a suite of large multimodal models, which will improve the health of the nation, delivering more accurate diagnoses and better-targeted treatments while maximising the value of our NHS sovereign data asset. I hope the Minister will agree with me that this is an opportunity which the UK cannot afford to miss.

17:36
Lord Griffiths of Burry Port (Lab)

My Lords, it is with even greater apprehension that I stand after hearing that learned disquisition, which I shall re-read in Hansard to make sure that a person whose intellectual background is in medieval literature, theology and Caribbean history might have a chance to get hold of some of the key concepts. I am most grateful that the noble Lord has exemplified the progress being made in science to place tools at our disposal that might greatly enhance many aspects of contemporary living.

I say my own word of thanks to the noble Baroness, Lady Stowell, for her man and woman management of the committee. I am not an easy man to manage at any time and am capable of eruptions, which she gave me some scope for. I also share her commendation for Dan, who has now moved on to higher things.

The noble Baroness will remember that, at the outset of the committee’s work, Professor Wooldridge from Oxford, a colleague of the noble Lord, Lord Tarassenko, urged us to take a positive look at this subject—granting that there were both positives and negatives, but urging us not to dwell on the negatives and to maintain a focus on the positives. The noble Baroness, and other members of the committee, will remember that I expressed a dissentient view at the time. I was very worried by some of the implications of the science we were looking at and the developments in the field under study. It was rather nice to hear the noble Lord, Lord Strasburger, make his contribution and strike the same tone on this subject as I would want to strike.

I bored the committee more than once by explaining how, 30 years ago, I met a man called Joseph Rotblat. Professor Rotblat was a nuclear physicist who had been recruited for the Manhattan Project, giving his help because it offered the Allied cause the possibility of replying to, or deterring, the use of atomic weapons. When the Germans ceased their operations and research—so that the need which had created the project had gone—he withdrew from it, because he did not want his science to be used in that military way and for those purposes. He went on to set up a series of conferences—the Pugwash conferences—that took place regularly down the years and for which he was awarded the Nobel Peace Prize in 1995.

My conversations with Professor Rotblat, when I was just a callow youth and knew little about these things, have educated me in one principle: I can only admire the findings of science and the work of scientists, but I recognise that, when monetised or militarised, those findings can carry the very brilliance that has been unearthed in potentially very dangerous directions. It took someone who knew about that from the inside when it came to nuclear developments to warn the world and to keep alive the flame of understanding of the dangers of such uses.

I mention that in this debate because my point in offering that memory was the conviction that we need to hear, from the very engine room of those developing the present scientific advances, the voices that are going to help us—because what else can we do? Legislation, as we have heard, is already behind the curve. Elon Musk, with his spaceships and the rest of it, is taking something in a direction nobody can calculate. I wonder whether his middle name is Icarus. We will see, will we not?

For all that, this very day the American newspapers are announcing that Brendan Carr has been appointed by Donald Trump to lead the Federal Communications Commission—a man with a long record of deregulation, of taking all the constraints off what he calls Americans’ constitutional freedoms of speech and intellectual activity. So we may be overregulated now, but soon we are going to be quite heavily underregulated, and once the world is in that cauldron of competing and keeping up with each other or outrunning each other, we will be in a dangerous place.

I know this makes me seem like Eeyore, the depressed donkey in Winnie-the-Pooh. Indeed, if I took the time, I could go around the committee—their faces are in my head—and give equivalents for them. I can certainly see the honey searcher over there, but there are also Rabbit, Kanga, Roo, Tigger and Piglet. They were all sitting there discussing large language models. We now know that we can put an extra M in, and no doubt by the time we have a debate next year there may be yet another M to put in, because things are advancing fast.

I urge Members of this House and our sector of British society to try to encourage people like Geoffrey Hinton, who has been mentioned, who know what they are talking about, are at the front edge of it all and see the pluses and the minuses, to help the rest of us, the Eeyores, the depressed donkeys of this world, to have a better grasp of things and to feel that we are genuinely safe in the world that we are living in.

17:42
Viscount Camrose (Con)

My Lords, what a pleasure it is to address this compelling, balanced and, in my opinion, excellent report on large language models and generative AI. I thank not just my noble friend Lady Stowell but all noble Lords who were involved in its creation. Indeed, it was my pleasure at one point to appear before the committee in my former ministerial role. As ever, we are having an excellent debate today. I note the view of the noble Lord, Lord Knight, that it tends to be the usual suspects in these things, but very good they are too.

We have heard, particularly from my noble friend Lady Stowell and the noble Baroness, Lady Featherstone, about the need to foster competition. We have also heard about the copyright issue from a number of noble Lords, including the noble Baronesses, Lady Featherstone, Lady Wheatcroft and Lady Healy, and I will devote some more specific remarks to that shortly.

A number of speakers, and I agree with them, regretted the cancellation of the exascale project and got more deeply into the matter of compute and the investment and energy required for it. I hope the Minister will address that without rehearsing all the arguments about the black hole, which we can all probably recite for ourselves.

We had a very good corrective from the noble Lords, Lord Strasburger and Lord Griffiths of Burry Port, and my noble friend Lord Kamall, that the risks are far-reaching and too serious to treat lightly. In particular, I note the risk of deliberate misuse by powers beyond our control. We heard from my noble friend Lord Ranger about the need, going forward, for greater clarity, if possible, about regulatory plans, and for comparisons with the EU AI Act. I very much enjoyed and responded to the remarks by the noble Lord, Lord Tarassenko, about data as a sovereign asset for the UK, whether in healthcare or anything else.

These points and all the points raised in the report underscore the immense potential of AI to revolutionise key sectors of our economy and our society, while also highlighting critical risks that must be addressed. I think we all recognise at heart the essential trade-off in AI policy. How do we foster the extraordinary innovation and growth that AI promises while ensuring it is deployed in ways that keep us safe?

However, today I shall focus more deeply on two areas. The first is copyright offshoring and the second is regulation strategy overall.

The issue of copyright and AI is deeply complex for many reasons. Many of them were very ably set out by my noble friend Lord Kamall. I am concerned that any solution that does not address the offshoring problem is not very far from pointless. Put simply, we could create between us the most exquisitely balanced, perfectly formed and simply explained AI regulation, but any AI lab that did not like it could, in many cases, scrape the same copyrighted content in another jurisdiction with regulations more to its liking. The EU’s AI Act addresses this problem by forbidding the use in the EU of AI tools that have infringed copyright during their training.

Even if this is workable in the EU—frankly, I have my doubts about that—there is a key ingredient missing that would make it workable anywhere. That ingredient is an internationally recognised technical standard to indicate copyright status, ownership and licence terms. Such a standard would allow content owners to watermark copyrighted materials. Whether the correct answer is an opt-in or an opt-out approach to text and data mining (TDM) is a topic for another day, but a standard would at least enable that to go forward technically. Crucially, it would allow national regulators to identify copyright infringements globally. Will the Minister say whether he accepts this premise and, if so, what progress he is aware of towards the development of an international technical standard of this kind?
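To make the premise concrete: no such standard exists yet, so every field in the sketch below is an assumption, but a machine-readable rights declaration of the kind being described might look something like this, with one uniform check that any crawler or regulator could apply:

```python
# A hypothetical sketch of a machine-readable rights declaration.
# All field names are invented; the absence of an agreed standard is
# precisely the gap being described.
from dataclasses import dataclass

@dataclass(frozen=True)
class RightsDeclaration:
    work_id: str       # stable identifier for the work
    owner: str         # rights holder
    licence: str       # licence URL, or "all-rights-reserved"
    tdm_opt_out: bool  # has the owner opted out of text and data mining?
    signature: str     # cryptographic signature binding the fields above

def may_train_on(decl: RightsDeclaration) -> bool:
    """A crawler or regulator anywhere could apply one uniform check."""
    return not decl.tdm_opt_out

decl = RightsDeclaration(work_id="work-0001", owner="Example Press",
                         licence="all-rights-reserved", tdm_opt_out=True,
                         signature="sig-placeholder")
print(may_train_on(decl))  # False
```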

I turn now to the topic of AI regulation strategy. I shall make two brief points. First, as a number of noble Lords put it very well, AI regulation has to adapt to fast-moving technology changes. That means that it has to target principles rather than specific use cases, where possible. Prescriptive regulation of technology does not just face early obsolescence but relies fatally on necessarily rigid definitions of highly dynamic concepts.

Secondly, the application of AI is completely different across sectors. That means that the bulk of regulatory heavy lifting needs to be done by existing sector regulators. As set out in the previous Government’s White Paper, this work needs to be supported by central functions. Those include horizon scanning for future developments, co-ordination where AI cuts across sectors, supporting AI skills development, the provision of regulatory sandboxes and the development of data and other standards such as the ATRS. If these and other functions were to end up as the work of a single AI regulatory body, then so much the better, but I do not believe that such an incorporation is mission critical at this stage.

I was pleased that the committee’s report was generally supportive of this position and, indeed, refined it to great effect. Do the Government remain broadly aligned to this approach? If not, where will the differences lie?

While many of us may disagree to one degree or another on AI policy, I do not believe there is really any disagreement about what we are trying to achieve. We must seize this moment to champion a forward-looking AI strategy—one that places the UK at the forefront of global innovation while preserving our values of fairness, security, and opportunity for all.

Like the committee—or as we have heard from the noble Lord, Lord Griffiths, like many members of the committee—I remain at heart deeply optimistic. We can together ensure that AI serves as a tool to enhance lives, strengthen our economy, and secure our national interests. This is a hugely important policy area, so let me close by asking the Minister if he can update this House as regularly and frequently as possible on the regulation of AI and LLMs.

17:51
The Minister of State, Department for Science, Innovation and Technology (Lord Vallance of Balham) (Lab)

We have heard really wonderful insights and thoughtful contributions from across your Lordships’ House this afternoon, and I am really grateful to the noble Baroness, Lady Stowell, for organising this engaging debate on such an important topic. It is probably the only debate I will take part in that has LLMs, SLMs, exaflops, Eeyore and Tigger in the same sitting.

The excellent report from the Communications and Digital Committee was clear that AI presents an opportunity, and it is one that this Government wish to seize. Although the report specified LLMs and generative AI, as has been pointed out by many, including the noble Lord, Lord Knight, AI is of course broader than just that. It represents a route to stronger economic growth and a safer, healthier and more prosperous society, as the noble Viscount, Lord Camrose, has just said, and we must harness it—it is incredibly important for the country.

Breakthroughs in general-purpose technologies are rare—the steam engine, electricity and the internet—and AI is set to be one such technology. The economic opportunities are already impressive. The AI market contributed £5.8 billion in GVA to our economy in 2023; it employs over 60,000 people and is predicted to grow rapidly in size and value over the next decade. Investing in technology has always been important for growth, and investing in AI is no exception.

Today, already, a new generation of UK-founded companies is ensuring that we are at the forefront of many of these approaches, and leading AI companies have their European headquarters in London. We have attracted significant investment from global tech giants—AWS, Microsoft, CoreWeave and Google—amounting to over £10 billion. This has bolstered our AI infrastructure, supported thousands of jobs and enhanced capacity for innovation.

The investment summit last month resulted in commitments of £63 billion, of which £24.3 billion was directly related to AI investment. The UK currently ranks third globally in several key areas: elite AI talent, the number of AI start-ups, inward investment into AI, and readiness for AI adoption. But we need to go further. In July, DSIT’s Secretary of State asked Matt Clifford to develop an ambitious AI opportunities action plan. This will be published very soon and will set out the actions for government to grow the UK’s AI sector, drive adoption of AI across the economy, which will boost growth and improve products and services, and harness AI’s power to enhance the quality and efficiency of public services. Of course, as was raised early in this debate, this also has to be about creating spin-outs and start-ups and allowing them to grow.

One of the largest near-term economic benefits of AI is the adoption of existing tools to transform businesses and improve the quality of work—a point raised very clearly by the noble Lord, Lord Ranger. AI tools are already being used to optimise complex rotas, reduce administrative burdens and support analytical capabilities and information gathering, and in healthcare to interpret medical scans, giving back more time for exchanges that truly need a human touch. Government will continue to support organisations to strengthen the foundations required to adopt AI; this includes knowledge, data, skills, talent, intellectual property protections and assurance measures. I shall return to some of those points.

In the public sector, AI could unlock a faster, more efficient and more personalised offer to its citizens, at better value to the taxpayer. In an NHS fit for the future—the noble Lord, Lord Tarassenko, made these points very eloquently—AI technology could transform diagnostics and reduce simpler things, such as administrative burdens, improving knowledge and information flows within and between institutions. It could accelerate the discovery and development of new treatments—and valuable datasets, such as the UK Biobank, will be absolutely essential.

The noble Lord, Lord Tarassenko, rightly identified the importance of building large multimodal models on trusted data and the opportunity that that presents for the UK—a point that the noble Lord, Lord Knight, also raised. Several NHS trusts are already running trials on the use of automated transcription software. The NHS and DHSC are developing guidance to ensure responsible use of these tools and how they can be rolled out more widely.

The noble Lord, Lord Kamall, rightly pointed out the role of the human in the loop as we start to move these things into the healthcare sector. The Government can and should act as an influential customer to the UK AI sector by stimulating demand and providing procurement opportunities. That procurement pool will be increasingly important as companies scale.

DSIT, as the new digital centre of government, is working to identify promising AI use cases and rapidly scale them, and is supporting businesses across the UK to be able to do the same. The new Incubator for Artificial Intelligence is one example.

The Government recently announced that they intend to develop an AI assurance platform, which should help simplify the complex AI assurance and governance landscape for businesses, so that many more businesses can start with some confidence.

Many noble Lords touched on trust, and AI does require trust; it is a prerequisite for adopting AI. That is why we have committed to introducing new, binding requirements on the handful of companies developing the most advanced AI models, as we move towards the potential of true artificial general intelligence. We are not there yet, as has been pointed out. This legislation will build on the voluntary commitments secured at the Seoul and Bletchley Park AI safety summits and will strengthen the role of the AI Safety Institute, putting it on a statutory footing.

We want to avoid creating new rules for those using AI tools in specific sectors—a point that the noble Viscount, Lord Camrose, raised—and will instead deal with that in the usual way, through existing expert regulators. For example, the Office for Nuclear Regulation and the Environment Agency ran a joint AI sandbox last year, looking at AI and the nuclear industry. The Medicines and Healthcare Products Regulatory Agency, or MHRA, launched one on AI medical devices. We have also launched the Regulatory Innovation Office to try to streamline the regulatory approach, which will be particularly important for AI, ensuring that we have the skills necessary for regulators to be able to undertake this new work. That point was raised by several people, including the noble Baroness, Lady Healy.

New legislation will instead apply to the small number of developers of the most far-reaching AI models, with a focus on those systems that are coming tomorrow, not the ones we have today. It will build on the important work that the AI Safety Institute has undertaken to date. Several people asked whether that approach is closer to the USA or the EU. It is closer to the US approach, because we are doing it for new technologies. We are not proposing specific regulation in the individual sectors, which will be looked after by the existing regulators. The noble Lords, Lord Knight and Lord Kamall, raised those points.

It is important—everyone has raised this—that we do not introduce measures that restrict responsible innovation. At the recent investment summit, leaders in the field were clear: some guidelines are important. They create some clarity for companies. Companies currently do not have enough certainty and cannot progress. Getting that balance right will be essential and that is why, as part of this AI Bill, we will be launching an extensive consultation, leading to input, I hope, from experts from industry, academia and, of course, from this House, where many people have indicated today the very insightful points they have to make.

I was asked by the noble Lord, Lord Ranger, whether pro-innovation regulation would be the theme. That was a topic of a review that I undertook in my last role and that will certainly be a theme of what we wish to do. We will continue to lead the development of international standards through the AI Standards Hub—a partnership between the Alan Turing Institute, the British Standards Institution and the National Physical Laboratory—and by working with international bodies. Indeed, I went to speak to one of the international standards bodies on this topic a few weeks ago.

I turn to some other specific points that were raised during the debate. The AI Safety Institute’s core goal is to make frontier AI safer. It works in partnership with businesses, Governments and academia to develop research on the safety of AI and to evaluate the most capable models. It has secured privileged access to top AI models from leading companies, including testing models pre-deployment and post-deployment with OpenAI, Google DeepMind and Anthropic, among others. The institute has worked very closely with the US to launch the international network of AI safety institutes, enabling the development and adoption of interoperable principles, policies and best practice. That meeting has taken place in California this week. The noble Baroness, Lady Wheatcroft, asked for an update, and I think we will have it when the readout of that meeting is known. Just this week the AI Safety Institute shared a detailed report outlining its pre-deployment testing of Anthropic’s upgraded Claude 3.5 Sonnet model. This will help advance the development of shared scientific benchmarks and best practice in safety testing, and it is an important step because it begins to show exactly how these things can also be made public.

I was asked about mandatory safety testing. I think this model, which has been a voluntary one and has engaged big companies so that they want to come to the AI Safety Institute, is the correct one. I have also noted that there are some other suggestions as to how people may report safety issues. That is an important thing to consider for the future.

To respond to the points raised by the noble Lords, Lord Strasburger and Lord Griffiths, the question of the existential threat is hotly debated among experts. Meta’s chief AI scientist, Yann LeCun, states that fears that AI will pose a threat to humanity are “preposterously ridiculous”. In contrast, Geoffrey Hinton has said it is time to confront the existential dangers of artificial intelligence. Another British Nobel prize winner, Demis Hassabis, the CEO of DeepMind, one of the most important AI companies in the world, suggests a balanced view. He has expressed optimism about AI, with its potential to revolutionise many fields, but emphasises the need to find a middle way for managing the technology.

To better understand these challenges, the Government have established a central AI risk function which brings together policymakers and AI experts with a mission to continuously monitor, identify, assess and prepare for AI-associated risks. That must include in the long term the question of whether what I will call “autonomous harm” is a feature that will emerge and, if so, over what time and what the impact of that might be.

I turn to data, the very feedstock for AI. First, data protection law applies to any processing of personal data, regardless of the technology, and we are committed to maintaining the UK’s strong data protection framework. The national data library will be the key to unlocking public data in a safe and secure way, and many speakers this afternoon have indicated how important it will be to have the data to ensure that we get training of the models. There is a huge opportunity, particularly, as has been indicated, in relation to areas such as the NHS.

The Information Commissioner’s Office has published guidance that outlines how organisations developing and using AI can ensure that AI systems that process personal data do so in ways that are accountable, transparent and fair.

On copyright, I will not list the numerous noble Lords who have made comments on copyright. It is a crucial area, and the application of copyright law to AI is as disputed globally as it is in the UK. Addressing uncertainty about the UK’s copyright framework for AI is a priority for DSIT and DCMS. We are determined to continue to enable growth in our AI and creative industries, and it is worth noting that those two are related. It is not that the creative industries are on one side and AI on the other; many creative individuals are using AI for their work. Let me say up front that the Government are committed to supporting the power of human-centred creativity as well as the potential of AI to unlock new horizons.

As the noble Baroness, Lady Featherstone, has rightly pointed out, rights holders of copyright material have called for greater control over their content and remuneration where it is used to train AI models, as well as for greater transparency. At the same time, AI developers see access to high-quality material as a prerequisite to being able to train world-leading models in the UK. Developing an approach that addresses these concerns is not straightforward, and there are issues of both the input to models and the assessment of the output from models, including the possibility of watermarking. The Government intend to engage widely, and I can confirm today that we will shortly launch a formal consultation to get input from all stakeholders and experts. I hope that this starts to address the questions that have been raised, including at the beginning by the noble Baroness, Lady Stowell, as well as the comments by the noble Baroness, Lady Healy.

On the important points that the noble Viscount, Lord Camrose, raises about offshoring and the need for international standards, I completely agree that this is a crucial area to look at. International co-operation will be crucial and we are working with partners.

We have talked about the need for innovation, which requires fair and open competition. The Digital Markets, Competition and Consumers Act received Royal Assent in May, and the Government are working closely with the Competition and Markets Authority to ensure that the measures in the Act commence by January 2025. It equips the CMA with more tools to tackle competition concerns in the digital and AI markets. The CMA itself undertook work last year that identified issues in some of the models that need to be looked at.

Demand for computing resource is growing very quickly. It is not just a matter of size but of configuration and systems architecture. Two compute clusters are being delivered as part of the AI research resource in Bristol and Cambridge. They will be fully operational next year and will expand the UK’s capacity thirtyfold. Isambard-AI is made up of more than 5,500 Nvidia GPUs and will be the UK’s most powerful public AI compute facility once it is fully operational next year. The AI opportunities action plan will set out further requirements for compute, which we will take forward as part of the multiyear spending review. I just say in passing that it is quite important not to conflate exascale with AI compute; they are different forms of computing, both of which are very important and need to be looked at, but it is the AI compute infrastructure that is most relevant to this.

The noble Lord, Lord Tarassenko, and the noble Baroness, Lady Wheatcroft, asked about sovereign LLMs and highlighted the opportunity to build new models based on really specific trusted data sources in the UK. This point was also raised in the committee report and is a crucial one.

I have tried to answer all the questions. I hope that I have but, if I have not, I will try to do so afterwards. This is a really crucial area and I am happy to come back and update as this goes on, as the noble Viscount, Lord Camrose, asked me to. We know that this is about opportunity, but we also know that people are concerned, rightly, about socioeconomic risks, labour market rights and infringement of rights.

There are several other points I would make. Those concerns are why we have signed the Council of Europe’s convention on AI and human rights, why we are funding the Fairness Innovation Challenge to develop solutions to AI bias, why the algorithmic transparency recording standard is being rolled out across all departments, why the Online Safety Act has powers to protect against illegal content and specifically to prevent harms to children, and why the central AI risk function is working with the AI Safety Institute to identify and reduce the broader risks. The Government will drive private and public sector AI development, deployment and adoption in a safe, responsible and trustworthy way including, of course, with international partners.

I thank noble Lords for their comments today. It is with great urgency that we start to rebuild Britain, using the technology we have today, and prepare for the technologies of tomorrow. We are determined, as the noble Viscount, Lord Camrose, said, that everyone in society should benefit from this revolutionary technology. I look forward very much to continuing engagement on this important topic with what I hope is an increasing number of noble Lords who may find this rather relevant to everyday life.

18:10
Baroness Stowell of Beeston (Con)

My Lords, I am very grateful to all noble Lords who have contributed to this debate. I thank all noble Lords who made such very kind and generous remarks about me and the committee’s work. I will pick up on just a couple of things because I do not want to take up much time.

On the things that the Minister has just said, I thank him very much for his comprehensive response. I will read it properly in Hansard tomorrow because there was obviously a lot there. I was pleased to hear that the action plan is coming very soon. He emphasised “very” before he said “soon”, so that was encouraging. Clearly, we should learn some more from that about plans for computing power, as he said.

As was mentioned by most noble Lords contributing today, we know that computing power is essential. I understand the point that the Minister made about exascale being different from AI-specific computing power. What the Government are doing on the latter is where it really matters in the context of this debate. It is important none the less not to forget that, when commitments to compute—which people see as a signal of the country’s ambition—get cancelled, that sends a rather confusing and mixed message. That was the point we were trying to emphasise.

On regulation, I hear the Minister in that there will be extensive consultation about the AI Bill. As he has heard me say already, it is clearly important in my view and that of the committee that we do not rush to regulation. It is better to get it right.

I will say a couple of things about copyright, which many noble Lords have mentioned today and which gets such emphasis in this debate, perhaps sometimes to the surprise of the large tech businesses that are US companies. I think it does so because it is a reflection of our very mixed economy over here. We are not in a position where we can put all our bets on tech, know that that is where we will see growth and not worry about anything else. As the Minister said—and this will give some people in the content-creating community comfort—this technology cannot develop without content. Hearing that that is well understood by the Government is important to content creators. However, as much as it will be good to have a serious consultation on whatever proposals the Government come forward with, it is none the less essential that we get a move on, because a lot of businesses are starting to feel very threatened.

The only other thing I would add is on the question of risks. In this debate, there has been broad consensus about the opportunity of this technology and its importance. The noble Lords, Lord Strasburger and Lord Griffiths, talked about existential threat, and that was mentioned by the Minister in his reply.

Risk was not something that we treated with any lack of seriousness at all when we carried out our inquiry. The noble Lord, Lord Griffiths, is right: it is important that we listen to the range of voices that have knowledge and experience in this area. However, it is important to recognise—we took this seriously as a committee when conducting our inquiry—that this technology has been subject, and probably is continuing to be subject, to quite a significant power struggle. At the start of our inquiry, the debate about existential threat was very live. One thing that we learned throughout our inquiry was about the divide within the tech world on this debate, and how you had to be really quite tuned in to where these different threats and messages were coming from, so that we did not get sucked down a path which ended up allowing a concentration of power that was also not what many noble Lords wanted.

Overall, our view and certainly my personal view is that, with something as important as this new general-purpose technology—as the Minister said, these things come along very rarely—we make sure that we are driven by the opportunities, mitigating the risks as we go along, and are not driven by the risks, missing out on the opportunities that this technology will provide. All of us today have been able to agree on that, and it is a most important conclusion to take away from this debate. I certainly look forward to studying again the proposal of the noble Lord, Lord Tarassenko, which sounded interesting, for a sovereign LLM concentrating on health issues. I am very grateful to noble Lords.

Motion agreed.
House adjourned at 6.17 pm.