Large Language Models and Generative AI (Communications and Digital Committee Report) Debate

Department: Department for Science, Innovation & Technology

Baroness Healy of Primrose Hill Excerpts
Thursday 21st November 2024

Lords Chamber
Baroness Healy of Primrose Hill (Lab)

My Lords, it is a pleasure to follow the noble Baroness, Lady Wheatcroft. I welcome the new Government’s determination that artificial intelligence can kickstart an era of economic growth, transform the delivery of public services and boost living standards for working people. Therefore, I hope they will welcome the recommendations in our Select Committee report, Large Language Models and Generative AI, which clearly sets out the opportunities and risks associated with this epoch-defining technology.

I, too, served on this committee under the admirable leadership of the noble Baroness, Lady Stowell of Beeston, who has set out the findings of our report so well. We are fortunate to have such a knowledgeable Minister in my noble friend Lord Vallance replying to this debate. His Pro-innovation Regulation of Technologies Review, undertaken when he was the Chief Scientific Adviser in a former life, raises important questions on copyright and regulation, both of which feature in our report.

Last week, the Minister, my noble friend Lady Jones of Whitchurch, explained to the House the difficulties in finding the right balance between fostering innovation in AI and ensuring protection for creators and the creative industries. Her acknowledgement that this must be resolved soon was welcome, but our report made clear the urgency surrounding this vexed question of copyright. Can my noble friend the Minister give any update on possible progress?

The News Media Association warns that, without incentivising a dynamic licensing market, the creative industries will be unable to invest in new content, generative AI firms will have less and less high-quality data with which to train their LLMs, and AI innovation will stall. Our report recognised the complexity of the issue but stated that

“the principles remain clear. The point of copyright is to reward creators for their efforts, prevent others from using works without permission, and incentivise innovation. The current legal framework is failing to ensure these outcomes occur and the Government has a duty to act”.

This was directed at the previous Government but applies equally to the present one.

Another matter of concern to the committee is the powers and abilities of regulators to ensure the safety and innovation of these LLMs. The new Labour Government clearly recognise the importance of the AI sector as a key part of their industrial strategy and I welcome the announcement of the AI action plan, which is expected to be published this month. Can my noble friend confirm that this is still the expected timescale, and when the AI opportunities unit will be set up in DSIT to implement the plan?

More details would be welcome on how AI will enhance growth and productivity and support the Government’s five stated missions, including breaking down barriers to opportunity, and on how AI will transform citizens’ experiences of interacting with the state and boost take-up in all parts of the public sector and the wider economy. But, before this is possible, regulatory structures to ensure responsible innovation need to be strengthened, as our report found significant variation in technical expertise across regulators’ staffing. There is a pressing need for support from the Government’s central functions in providing cross-regulator co-ordination. Relying on existing regulators to ensure good outcomes from AI will work only if they are properly resourced and empowered. As our report said:

“The Government should introduce standardised powers for the main regulators who are expected to lead on AI oversight to ensure they can gather information relating to AI processes and conduct technical, empirical and governance audits. It should also ensure there are meaningful sanctions to provide credible deterrents against egregious wrongdoing”.

Can my noble friend clarify how the new Bill will support growth and innovation by ending current regulatory uncertainty for AI developers, strengthening public trust and boosting business confidence? Is this part of the new regulatory innovation office’s role?

I welcome the Secretary of State’s commitment that the promised legislation will focus on the most advanced LLMs and not seek to regulate the entire industry, but rather make existing agreements between technology companies and the Government legally binding and turn the AI Safety Institute from a directorate of DSIT into an arm’s-length body, which he has said

“will give it a level of independence and a long-term future, because safety is so important”.

In conclusion, how confident is my noble friend that the recommendations of his March 2023 review will be implemented? It recognised that:

“Regulator behaviour and culture is a major determinant of whether innovators can effectively navigate adapting regulatory frameworks … the challenge for government is to keep pace with the speed of technological change: unlocking the enormous benefits of digital technologies, while minimising the risks they present both now and in the future”.

I wish the new Government well in this daunting task. As our report said:

“Capturing the benefits will require addressing risks. Many are formidable, including credible threats to public safety, societal values, open market competition and UK economic competitiveness. Farsighted, nuanced and speedy action is therefore needed to catalyse innovation responsibly and mitigate risks proportionately”.

Mitigation is essential, and I welcome the Government’s announcement of research grants to commence the important work of

“boosting society’s resilience against AI risks such as deepfakes, misinformation and cyberattacks”,

and

“the threat of AI systems failing unexpectedly, for example in the financial sector”.

Our report outlined the wide-ranging nature of these risks:

“The most immediate security concerns from LLMs come from making existing malicious activities easier, rather than qualitatively new risks. The Government should work with industry at pace to scale existing mitigations in the areas of cyber security (including systems vulnerable to voice cloning), child sexual abuse material, counter-terror, and counter-disinformation”.


I trust that the new AI strategy can be truly effective in countering risk and encouraging developments of real benefit to society.