Large Language Models and Generative AI (Communications and Digital Committee Report) Debate
Viscount Camrose (Conservative - Excepted Hereditary)
Lords Chamber

My Lords, what a pleasure it is to address this compelling, balanced and, in my opinion, excellent report on large language models and generative AI. I thank not just my noble friend Lady Stowell but all noble Lords who were involved in its creation. Indeed, it was my pleasure at one point to appear before the committee in my former ministerial role. As ever, we are having an excellent debate today. I note the view of the noble Lord, Lord Knight, that it tends to be the usual suspects in these things, but very good they are too.
We have heard, particularly from my noble friend Lady Stowell and the noble Baroness, Lady Featherstone, about the need to foster competition. We have also heard about the copyright issue from a number of noble Lords, including the noble Baronesses, Lady Featherstone, Lady Wheatcroft and Lady Healy, and I will devote some more specific remarks to that shortly.
A number of speakers, and I agree with them, regretted the cancellation of the exascale project and got more deeply into the matter of compute and the investment and energy required for it. I hope the Minister will address that without rehearsing all the arguments about the black hole, which we can all probably recite for ourselves.
We had a very good corrective from the noble Lords, Lord Strasburger and Lord Griffiths of Bury Port, and my noble friend Lord Kamall, that the risks are far-reaching and too serious to treat lightly. In particular, I note the risk of deliberate misuse by powers beyond our control. My noble friend Lord Ranger spoke of the need, going forward, for greater clarity, where possible, about regulatory plans and how they compare with the EU AI Act. I very much enjoyed, and responded to, the remarks by the noble Lord, Lord Tarassenko, about data as a sovereign asset for the UK, whether in healthcare or anything else.
These points and all the points raised in the report underscore the immense potential of AI to revolutionise key sectors of our economy and our society, while also highlighting critical risks that must be addressed. I think we all recognise at heart the essential trade-off in AI policy. How do we foster the extraordinary innovation and growth that AI promises while ensuring it is deployed in ways that keep us safe?
However, today I shall focus more deeply on two areas. The first is copyright offshoring and the second is regulation strategy overall.
The issue of copyright and AI is deeply complex for many reasons, many of which were very ably set out by my noble friend Lord Kamall. I am concerned that any solution that does not address the offshoring problem is not very far from pointless. Put simply, we could create between us the most exquisitely balanced, perfectly formed and simply explained AI regulation, but any AI lab that did not like it could, in many cases, scrape the same copyrighted content in another jurisdiction with regulations more to its liking. The EU’s AI Act tackles this problem by requiring models placed on the EU market to comply with EU copyright law, wherever their training took place.
Even if this is workable in the EU—frankly, I have my doubts about that—there is a key ingredient missing that would make it workable anywhere. That ingredient is an internationally recognised technical standard for indicating copyright status, ownership and licence terms. Such a standard would allow content owners to watermark copyrighted materials. Whether the correct answer is an opt-in or an opt-out regime for text and data mining is a topic for another day, but such a standard would at least make either approach technically workable. Crucially, it would allow national regulators to identify copyright infringements globally. Will the Minister say whether he accepts this premise and, if so, what progress he is aware of towards the development of an international technical standard of this kind?
I turn now to the topic of AI regulation strategy, and I shall make two brief points. First, as a number of noble Lords put it very well, AI regulation has to adapt to fast-moving technological change. That means it has to target principles rather than specific use cases, wherever possible. Prescriptive regulation of technology does not just face early obsolescence; it relies, fatally, on necessarily rigid definitions of highly dynamic concepts.
Secondly, the application of AI differs completely across sectors. That means the bulk of the regulatory heavy lifting needs to be done by existing sector regulators. As set out in the previous Government’s White Paper, this work needs to be supported by central functions: horizon scanning for future developments, co-ordination where AI cuts across sectors, support for AI skills development, the provision of regulatory sandboxes and the development of data and other standards, such as the Algorithmic Transparency Recording Standard. If these and other functions were eventually to become the work of a single AI regulatory body, then so much the better, but I do not believe that such consolidation is mission critical at this stage.
I was pleased that the committee’s report was generally supportive of this position and, indeed, refined it to great effect. Do the Government remain broadly aligned to this approach? If not, where will the differences lie?
While many of us may disagree to one degree or another on AI policy, I do not believe there is really any disagreement about what we are trying to achieve. We must seize this moment to champion a forward-looking AI strategy—one that places the UK at the forefront of global innovation while preserving our values of fairness, security, and opportunity for all.
Like the committee—or, as we have heard from the noble Lord, Lord Griffiths, like many members of the committee—I remain at heart deeply optimistic. We can together ensure that AI serves as a tool to enhance lives, strengthen our economy and secure our national interests. This is a hugely important policy area, so let me close by asking the Minister whether he can update this House as regularly and frequently as possible on the regulation of AI and LLMs.