Large Language Models and Generative AI (Communications and Digital Committee Report)

Thursday 21st November 2024

Lords Chamber

The Minister of State, Department for Science, Innovation and Technology (Lord Vallance of Balham) (Lab)

We have heard really wonderful insights and thoughtful contributions from across your Lordships’ House this afternoon and I am really grateful to the noble Baroness, Lady Stowell, for organising this engaging debate on such an important topic. It is probably the only debate I will take part in that has LLMs, SLMs, exaflops, Eeyore and Tigger in the same sitting.

The excellent report from the Communications and Digital Committee was clear that AI presents an opportunity, and it is one that this Government wish to seize. Although the report specified LLMs and generative AI, as has been pointed out by many, including the noble Lord, Lord Knight, AI is of course broader than just that. It represents a route to stronger economic growth and a safer, healthier and more prosperous society, as the noble Viscount, Lord Camrose, has just said, and we must harness it—it is incredibly important for the country.

Breakthroughs in general-purpose technologies are rare—the steam engine, electricity and the internet—and AI is set to be one such technology. The economic opportunities are already impressive. The AI market contributed £5.8 billion in GVA to our economy in 2023; it employs over 60,000 people and is predicted to grow rapidly in size and value over the next decade. Investing in technology has always been important for growth, and investing in AI is no exception.

Today, already, a new generation of UK-founded companies is ensuring that we are at the forefront of many of these approaches, and leading AI companies have their European headquarters in London. We have attracted significant investment from global tech giants—AWS, Microsoft, CoreWeave and Google—amounting to over £10 billion. This has bolstered our AI infrastructure, supported thousands of jobs and enhanced capacity for innovation.

The investment summit last month resulted in commitments of £63 billion, of which £24.3 billion was directly related to AI investment. The UK currently ranks third globally in several key areas: elite AI talent, the number of AI start-ups, inward investment into AI, and readiness for AI adoption. But we need to go further. In July, DSIT’s Secretary of State asked Matt Clifford to develop an ambitious AI opportunities action plan. This will be published very soon and will set out actions for government to grow the UK’s AI sector; to drive adoption of AI across the economy, boosting growth and improving products and services; and to harness AI’s power to enhance the quality and efficiency of public services. Of course, as was raised early in this debate, this also has to be about creating spin-outs and start-ups and allowing them to grow.

One of the largest near-term economic benefits of AI is the adoption of existing tools to transform businesses and improve the quality of work—a point raised very clearly by the noble Lord, Lord Ranger. AI tools are already being used to optimise complex rotas, reduce administrative burdens and support analytical capabilities and information gathering, and in healthcare to interpret medical scans, giving back more time for exchanges that truly need a human touch. Government will continue to support organisations to strengthen the foundations required to adopt AI; this includes knowledge, data, skills, talent, intellectual property protections and assurance measures. I shall return to some of those points.

In the public sector, AI could unlock a faster, more efficient and more personalised offer to citizens, at better value to the taxpayer. In an NHS fit for the future—the noble Lord, Lord Tarassenko, made these points very eloquently—AI technology could transform diagnostics and reduce simpler burdens, such as administration, improving knowledge and information flows within and between institutions. It could accelerate the discovery and development of new treatments—and valuable datasets, such as the UK Biobank, will be absolutely essential.

The noble Lord, Lord Tarassenko, rightly identified the importance of building large multimodal models on trusted data and the opportunity that that presents for the UK—a point that the noble Lord, Lord Knight, also raised. Several NHS trusts are already running trials of automated transcription software. The NHS and DHSC are developing guidance on the responsible use of these tools and on how they can be rolled out more widely.

The noble Lord, Lord Kamall, rightly pointed out the role of the human in the loop, as we start to move these things into the healthcare sector. The Government can and should act as an influential customer to the UK AI sector by stimulating demand and providing procurement opportunities. That procurement pool will be increasingly important as companies scale.

DSIT, as the new digital centre of government, is working to identify promising AI use cases and rapidly scale them, and is supporting businesses across the UK to be able to do the same. The new Incubator for Artificial Intelligence is one example.

The Government recently announced that they intend to develop an AI assurance platform, which should help simplify the complex AI assurance and governance landscape for businesses, so that many more businesses can adopt AI with some confidence.

Many noble Lords touched on trust, which is a prerequisite for adopting AI. That is why we have committed to introducing new, binding requirements on the handful of companies developing the most advanced AI models, as we move towards the potential of true artificial general intelligence. We are not there yet, as has been pointed out. This legislation will build on the voluntary commitments secured at the Seoul and Bletchley Park AI safety summits and will strengthen the role of the AI Safety Institute, putting it on a statutory footing.

We want to avoid creating new rules for those using AI tools in specific sectors—a point that the noble Viscount, Lord Camrose, raised—and will instead deal with that in the usual way, through existing expert regulators. For example, the Office for Nuclear Regulation and the Environment Agency ran a joint AI sandbox last year, looking at AI and the nuclear industry. The Medicines and Healthcare products Regulatory Agency, or MHRA, launched one on AI medical devices. We have also launched the Regulatory Innovation Office to try to streamline the regulatory approach, which will be particularly important for AI, ensuring that regulators have the skills necessary to undertake this new work. That point was raised by several people, including the noble Baroness, Lady Healy.

New legislation will instead apply to the small number of developers of the most far-reaching AI models, with a focus on the systems coming tomorrow, not the ones we have today. It will build on the important work that the AI Safety Institute has undertaken to date. Several people asked whether that approach is closer to the USA or the EU. It is closer to the US approach, because we are legislating only for the new frontier technologies; we are not proposing specific regulation in the individual sectors, which will be looked after by the existing regulators. The noble Lords, Lord Knight and Lord Kamall, raised those points.

It is important—everyone has raised this—that we do not introduce measures that restrict responsible innovation. At the recent investment summit, leaders in the field were clear that some guidelines are important: they create clarity for companies, which currently do not have enough certainty to progress. Getting that balance right will be essential, and that is why, as part of this AI Bill, we will be launching an extensive consultation, which I hope will draw input from experts in industry and academia and, of course, from this House, where many noble Lords have today shown the insight they can bring.

I was asked by the noble Lord, Lord Ranger, whether pro-innovation regulation would be the theme. That was the topic of a review I undertook in my last role, and it will certainly be a theme of what we wish to do. We will continue to lead the development of international standards through the AI Standards Hub—a partnership between the Alan Turing Institute, the British Standards Institution and the National Physical Laboratory—and by working with international bodies. Indeed, I spoke to one of the international standards bodies on this topic a few weeks ago.

I turn to some other specific points that were raised during the debate. The AI Safety Institute’s core goal is to make frontier AI safer. It works in partnership with businesses, Governments and academia to develop research on the safety of AI and to evaluate the most capable models. It has secured privileged access to top AI models from leading companies, and has tested models pre-deployment and post-deployment with OpenAI, Google DeepMind and Anthropic, among others. The institute has worked very closely with the US to launch the international network of AI safety institutes, enabling the development and adoption of interoperable principles, policies and best practice. The network met in California this week. The noble Baroness, Lady Wheatcroft, asked for an update, and I think we will have one when the readout of that meeting is known. Just this week, the AI Safety Institute shared a detailed report outlining its pre-deployment testing of Anthropic’s upgraded Claude 3.5 Sonnet model. This will help advance the development of shared scientific benchmarks and best practices for safety testing, and it is an important step because it begins to show exactly how these evaluations can also be made public.

I was asked about mandatory safety testing. I think the voluntary model, which has engaged the big companies so that they want to come to the AI Safety Institute, is the correct one. I have also noted that there are other suggestions as to how people might report safety issues. That is an important thing to consider for the future.

To respond to the points raised by the noble Lords, Lord Strasburger and Lord Griffiths, the question of the existential threat is hotly debated among experts. Meta’s chief AI scientist, Yann LeCun, states that fears that AI will pose a threat to humanity are “preposterously ridiculous”. In contrast, Geoffrey Hinton has said it is time to confront the existential dangers of artificial intelligence. Another British Nobel prize winner, Demis Hassabis, the CEO of DeepMind, one of the most important AI companies in the world, offers a balanced view. He has expressed optimism about AI and its potential to revolutionise many fields, but emphasises the need to find a middle way for managing the technology.

To better understand these challenges, the Government have established a central AI risk function, which brings together policymakers and AI experts with a mission to continuously monitor, identify, assess and prepare for AI-associated risks. That must include, in the long term, the question of whether what I will call “autonomous harm” will emerge and, if so, over what timescale and with what impact.

I turn to data, the very feedstock for AI. First, data protection law applies to any processing of personal data, regardless of the technology, and we are committed to maintaining the UK’s strong data protection framework. The national data library will be the key to unlocking public data in a safe and secure way, and many speakers this afternoon have indicated how important it will be to have the data needed to train these models. There is a huge opportunity, particularly, as has been indicated, in areas such as the NHS.

The Information Commissioner’s Office has published guidance that outlines how organisations developing and using AI can ensure that AI systems that process personal data do so in ways that are accountable, transparent and fair.

On copyright, I will not list the numerous noble Lords who have made comments on copyright. It is a crucial area, and the application of copyright law to AI is as disputed globally as it is in the UK. Addressing uncertainty about the UK’s copyright framework for AI is a priority for DSIT and DCMS. We are determined to continue to enable growth in our AI and creative industries, and it is worth noting that those two are related. It is not that the creative industries are on one side and AI on the other; many creative individuals are using AI for their work. Let me say up front that the Government are committed to supporting the power of human-centred creativity as well as the potential of AI to unlock new horizons.

As the noble Baroness, Lady Featherstone, has rightly pointed out, rights holders of copyright material have called for greater control over their content and remuneration where it is used to train AI models, as well as for greater transparency. At the same time, AI developers see access to high-quality material as a prerequisite to being able to train world-leading models in the UK. Developing an approach that addresses these concerns is not straightforward, and there are issues of both the input to models and the assessment of the output from models, including the possibility of watermarking. The Government intend to engage widely, and I can confirm today that we will shortly launch a formal consultation to get input from all stakeholders and experts. I hope that this starts to address the questions that have been raised, including at the beginning by the noble Baroness, Lady Stowell, as well as the comments by the noble Baroness, Lady Healy.

On the important points that the noble Viscount, Lord Camrose, raised about offshoring and the need for international standards, I completely agree that this is a crucial area. International co-operation will be essential, and we are working with partners on it.

We have talked about the need for innovation, which requires fair and open competition. The Digital Markets, Competition and Consumers Act received Royal Assent in May, and the Government are working closely with the Competition and Markets Authority to ensure that the measures in the Act commence by January 2025. It equips the CMA with more tools to tackle competition concerns in the digital and AI markets. The CMA itself undertook work last year that identified issues in some of these models that need to be looked at.

Demand for computing resource is growing very quickly. It is not just a matter of size but of configuration and systems architecture. Two compute clusters are being delivered in Bristol and Cambridge as part of the AI Research Resource. They will be fully operational next year and will expand the UK’s capacity thirtyfold. Isambard-AI is made up of more than 5,500 Nvidia GPUs and will be the UK’s most powerful public AI compute facility once it is fully operational next year. The AI opportunities action plan will set out further requirements for compute, which we will take forward as part of the multiyear spending review. I say in passing that it is quite important not to conflate exascale with AI compute; they are different forms of computing, both of which are very important and need to be looked at, but it is the AI compute infrastructure that is most relevant here.

The noble Lord, Lord Tarassenko, and the noble Baroness, Lady Wheatcroft, asked about sovereign LLMs and highlighted the opportunity to build new models on specific, trusted data sources in the UK. This point was also raised in the committee report and is a crucial one.

I have tried to answer all the questions. I hope that I have but, if I have not, I will try to do so afterwards. This is a really crucial area, and I am happy to come back and update the House as this work goes on, as the noble Viscount, Lord Camrose, asked me to. We know that this is about opportunity, but we also know that people are concerned, rightly, about socioeconomic risks, labour market impacts and the infringement of rights.

Those concerns are why we have signed the Council of Europe’s convention on AI and human rights, why we are funding the Fairness Innovation Challenge to develop solutions to AI bias, why the algorithmic transparency recording standard is being rolled out across all departments, why the Online Safety Act has powers to protect against illegal content and specifically to prevent harms to children, and why the central AI risk function is working with the AI Safety Institute to identify and reduce the broader risks. The Government will drive private and public sector AI development, deployment and adoption in a safe, responsible and trustworthy way, including, of course, with international partners.

I thank noble Lords for their comments today. It is with great urgency that we start to rebuild Britain, using the technology we have today, and prepare for the technologies of tomorrow. We are determined, as the noble Viscount, Lord Camrose, said, that everyone in society should benefit from this revolutionary technology. I look forward very much to continuing engagement on this important topic with what I hope is an increasing number of noble Lords who may find this rather relevant to everyday life.