Large Language Models and Generative AI (Communications and Digital Committee Report) Debate

Department: Department for Science, Innovation & Technology

Thursday 21st November 2024


Lords Chamber
Lord Ranger of Northwood (Con)

My Lords, it is truly an honour to follow my noble friend Lord Kamall. I begin by acknowledging the excellent work of the House of Lords Communications and Digital Committee, led with great dedication by my noble friend Lady Stowell, in producing this thorough report on large language models and generative AI.

I note my entry in the register of interests, especially my last role at Atos, where over six years ago I led a thought leadership campaign on the digital vision for AI. Six years is an exceptionally long time in the world of tech. Since then, we have accelerated into the early adoption and application era of AI. Now, as a Member of your Lordships’ House, I am delighted to be vice-chair of the AI APPG.

The pace of both development and adoption in the last 24 months has been breathtaking, and a key moment for the AI industry was obviously the launch of ChatGPT on 30 November 2022. That date will no doubt go down in history, not just for technologists but for how it transformed public awareness of, and access to, LLM-based generative AI services. It was the AI users’ big bang moment. It is because of this pace of commercial and technological change that I have been meeting businesses and AI organisations over the past six months to hear at first hand what they see as the main issues and opportunities, as well as taking part in the evidence sessions that the AI APPG has held.

It has become clear to me that the UK’s AI market and particularly native AI businesses—those that develop and directly deliver AI capabilities and services—are seeing their growth turbocharged. They are recruiting, expanding and developing a skilled workforce, receiving investment and harnessing opportunities locally and internationally faster than they can think. This is an exciting time for our AI industry.

What do they want from Government? It is a case not of what we can do for them but of what these native artificial intelligence businesses can do for us. They want to be able to inform, influence and raise awareness of the key factors impacting them and their industry: how they are witnessing at first hand the adoption and implementation of AI systems and services; the investment landscape and growth opportunities that are available and how government policy can further support them; and the need to support investment in industry skills and academic research to ensure medium to long-term sustainability of their workforce and capabilities. As part of the development programme for the much-anticipated government AI action plan, what engagement has there been with the industry on these specific topics?

There are also various macro factors that the industry is clear on and that must be part of any AI plan or growth strategy for the UK. First, the availability of large datasets, as has been mentioned in this debate, is critical to the development of LLMs. Secondly, the advancement of generative AI will be directly proportional to the national availability of compute power. Thirdly, the energy requirements to support that compute must be considered as part of the investment landscape and even as part of national critical infrastructure. That is why there was such disappointment at the decision by this Government to cancel the investment in delivering the exascale computer in Edinburgh. I echo the words of the noble Baroness, Lady Wheatcroft, and ask the Minister how the Government will mitigate the impact of the loss of this compute power in the UK.

There is one other major consideration that has been mentioned already, and that businesses all raised—regulation. The AI industry is desperately keen to input into any development of regulatory considerations and is looking for signals from this Government as to their approach.

In July the Secretary of State for DSIT, Mr Kyle, said in a Written Statement that, in line with the Labour Party’s manifesto, some AI companies would be regulated. Legislation would be

“highly targeted and will support growth and innovation by ending regulatory uncertainty”.—[Official Report, Commons, 26/7/24; col. 34WS.]

Four months later, on 6 November at the Future of AI Summit, the Secretary of State said that legislation would be introduced “next year”—that is a large 12-month window. In August, Mr Kyle said that legislation would focus on the most advanced LLMs and would not regulate the entire industry. It feels a bit like a trail of breadcrumbs being laid, with the industry not knowing when or where the next crumb indicating a sense of regulatory direction will be found.

As I mentioned, every AI business and sector partner I have met has requested both clarity and the opportunity to input into regulatory development, but has felt uncertain about how the Government are developing their thinking. The industry has been clear on the need for any regulation to be proportionate, to avoid duplication with existing technology-neutral rules and to minimise regulatory fragmentation. For example, this is key to the UK financial services industry’s international competitiveness and its role as an enabler of economic growth across the UK. For a leading tech-enabled industry that has been using advanced technologies safely and effectively for years, disproportionate legislation would create unnecessary regulatory burdens and stifle operations and trade, slowing innovation and productivity, and putting our firms at a global competitive disadvantage. Have the Government established a series of clear principles that will be used in the development of targeted and proportionate regulation?

As my noble friend Lady Stowell highlighted, I am also aware, through discussions with major investors, that the development of the regulatory environment in the UK is being closely watched to assess how attractive our industry is and how much international investment may flow into it. Global investors clearly see an opportunity for the UK to learn from the US light-touch approach, but also from what appears to be the vice-like grip the EU has taken with the development of its landmark AI Act.

By the way, I do not take this view on EU regulation as my own without input from others. Notably, I attended the AI CogX summit in London at the beginning of October, where an MEP who had worked on the Act stated that he believed the EU had

“created a barrier to entry”

and established a law with such high compliance costs that it was already creating problems. There appears to be a sense that the EU AI Act has taken a wrong turn and is already diminishing both innovation and the flow of investment into the region. What assessment are the Government making of the Act, and has its early impact on the region been discussed with EU counterparts?

To conclude, I have a few quick-fire questions. The previous Government had committed to a pro-innovation regulatory approach—will this Government too? Will the Government’s AI action plan include a suggested regulatory approach for the UK? When will it be published?