Large Language Models and Generative AI (Communications and Digital Committee Report) Debate

Department: Department for Science, Innovation & Technology


Lord Knight of Weymouth
Thursday 21st November 2024


Lords Chamber
Lord Knight of Weymouth (Lab)

My Lords, while I have interests in the register relating to AI, none is directly involved with LLM development. However, the Good Future Foundation and Educate Ventures Research are both advising others on AI deployment, while Century-Tech and Goodnotes enhance some of their products using LLM and generative AI technology.

I joined the Communications and Digital Committee after this report was concluded, so I am not marking my own homework when I say that this is an interesting and informative report that I would highly recommend to a wider group of Members of your Lordships’ House than those in their places today. I congratulate the noble Baroness, Lady Stowell, on her speech and the way in which she has introduced this, and the rest of the committee on the report.

We have a big problem of conscious incompetence in the House, with the vast majority leaving tech regulation and debate to a small group of usual suspects. We need a wider diversity of opinion. Given the high probability that this technology will impact almost all sectors, we need expertise in those sectors applied to AI policy, and I recommend reading this report as an opportunity for Members to improve their AI literacy. That is not to say we, the usual suspects, have all the answers; we simply have the confidence to be curious around our own conscious incompetence.

The report reminds us of the core ingredients needed to develop frontier AI: large markets, massive computing power and therefore access to significant energy sources, affordable high-end skills and a lot of high-quality data. All this needs a lot of money and a relatively benign regulatory environment. The report also reminds us that, if we do not get this right, we risk losing out on the huge opportunity that AI gives us for economic growth, and we risk otherwise drifting once more into a reliance on just a few tech companies. As Ben Brooks of Stability AI told the committee, currently the world relies on one search company, two social media companies and three cloud providers.

It is worth asking whether we have already missed the boat on the frontier of LLMs. Much of this activity lies in the US, and it is fair to ask whether we are better off building on existing open or closed foundation models at the application layer, using our unique datasets and great skills to develop models for public service outcomes in particular—in health, culture, education and elsewhere—that we and the world can trust and enjoy. Such an approach would acknowledge the limited market access that we have post Brexit, the limited compute and energy resources, and the limited amounts of investment.

However, those limitations should not constrain our ambition around other large models. This report is just about large language models, but others will come and it can help inform attitudes to frontier AI more generally. The coming together of robotics or biotechnology with generative AI and the development of quantum computing are yet to be fully realised, and we should ensure that as a nation we have capacity in some of these future frontiers. It is worth reminding noble Lords that if they thought generative AI was disruptive, some of these next frontiers will be much more so. The UK must prepare for a period of heightened technological turbulence while seeking to take advantage of the great opportunities.

As I said on Tuesday in our debate on the data Bill, we urgently need a White Paper or similar from the Government that paints the whole picture in this area of great technological opportunity and risk. The report finds that the previous Government’s attention was shifting too far towards a narrow view of high-stakes AI safety and that there is a need for a more balanced approach to drive widespread responsible innovation. I agree that the Government should set out a more positive vision for LLMs while also reflecting on risk and safety. Perhaps the Minister could set out in his wind-up when we are likely to get the wider vision that I think we need.

I agree with much of the report’s findings, such as that the Government should explore options for securing a sovereign LLM capability, particularly for public sector applications. The report also covered the likelihood of AI-triggered catastrophic events. While I agree that this is not an imminent risk, it needs a regulatory response. AI could pose an extinction risk to humanity, as recognised by world leaders, AI scientists and leading AI company CEOs. AI systems’ capabilities are growing rapidly. Superintelligent AI systems with intellectual capabilities beyond those of humans would present far greater risks than any AI system that exists today. In other areas, such as medicine or defence, we put guard-rails around development to protect us from risks to humanity. Is this something the Minister agrees should be addressed with the flexible, risk-based rules referenced by the noble Baroness, Lady Stowell?

To conclude, this issue is urgent. We need to balance the desire to grow the economy by harnessing the potential of AI with the need to protect our critical cultural industries, as the noble Baroness referenced. They are a special feature of the British economy, and regulation is needed to protect them. On this I commend the Australian parliamentary joint committee on social media and traditional news. It calls for a number of things, including a must-carry framework to ensure the prominence of journalism across search and social media, and a 15% levy on platform online advertising, including revenues technically billed offshore, to be distributed by an independent body. Estimates suggest the proposal would generate roughly 1 billion Australian dollars, or £500 million, in revenue—two to three times what licensing currently delivers in that country. That is a bold set of proposals, and I share it to raise our sense of ambition about what we can do to balance regulation and the desire for growth. These are difficult choices, but the Government need to act urgently and I strongly commend this report to the House.