Monday 24th July 2023


Lords Chamber
Moved by
Lord Ravensdale

That this House takes note of the ongoing development of advanced artificial intelligence, associated risks and potential approaches to regulation within the UK and internationally.

Lord Ravensdale (CB)

My Lords, I first declare my interest as a project director working for Atkins and by noting that this is not just my debate: a number of noble Lords right across the Cross Benches put forward submissions on this topic, including the noble Baroness, Lady Kidron, the noble and right reverend Lord, Lord Harries, and the noble Lord, Lord Patel.

There are a couple of reasons why I was keen to put this forward. First, as we have seen recently, the rapid advancement of AI has been brought into sharp relief by the ongoing development of large language models, some of which are likely to be able to pass the famous Turing test for machine intelligence, which has been something of a benchmark for it over the past 50 years. The questions of existential risk of that technology have also resurfaced. AI is the most important technology of our generation. It will cause significant upheaval right across the economy, good and bad. We in Parliament need to be thinking about this area and talking about it a lot more than we are. It should be right at the top of our agenda. Secondly, there is the matter of timing. The Government released their White Paper earlier this year but, in some respects, this has been overtaken by events and the Government appear to be rethinking aspects of their strategy. Therefore, this is the perfect time for this House to express its views on the issues before us, to help inform forthcoming government responses.

In my work as an engineering consultant, I have worked on and seen the continued advancement of these technologies over the years. Several years ago, one of my projects involved the transfer of large amounts of engineering data—the complete design definition of a nuclear reactor, hundreds of thousands of documents—from an old to a new computer system. A proposal was developed to get eight graduate engineers sitting at desks and manually transferring the data, which would have been a terrible waste of talent. We sat down with our brightest young engineers, and they investigated and developed a smart algorithm which worked in the graphical user interface, just as a human would. It was effectively a software robot to undertake this work and replace human workers. This was able to crunch through the entire task in minimal time—a matter of months—saving hundreds of thousands of pounds and thousands of hours of engineering effort.

Across the industry, we are starting to see automation coupled with AI and machine learning to learn how to resolve discrepancies in data from past experience, continuing that process of freeing up humans in clerical jobs for more value-added work. This is one example of the huge benefits that the AI revolution is having and will continue to have on society. Anyone who has read Adam Smith’s The Wealth of Nations and his description of the pin factory sees the logic and economic benefits of increasing specialisation—but also the drudgery of work that can result from that. Among many other benefits, AI will continue that process of freeing people up from repetitive, inane tasks and on to more value-added work, increasing human happiness along with it.

However, then we have the flip side, the concerns around risks, all the way up to existential risks. We live in a radically uncertain world, a term coined by the noble Lord, Lord King of Lothbury, and John Kay. There has been much hyperbole around AI risk in recent months, but we need to take those risks seriously. Just as Martin Weitzman put forward his very elegant argument around the rationale for investing large amounts of money today on climate change, based on the tail risks of worst-case scenarios, so too we should consider the tail risks of where massive increases in digital compute and the potential emergence of a superintelligence—something that far exceeds human intellectual capabilities in every area—will take us, and invest appropriately based on that.

There are no historical parallels for the technological singularity that AI could unleash. Perhaps one instructive episode would be the fate of the Aztec civilisation. The Aztecs had existed in the world as they knew it for many thousands of years, with no significant change from one year to the next. Then one day in 1519, the white sails of the fleet of Cortés appeared on the horizon, and nothing was ever the same again. Within months, Cortés and his few hundred men had conquered the vast Aztec empire, with its population of millions, one of the most remarkable and tragic feats in human history. To avoid perhaps one day in the coming decades seeing a version of the white sails of Cortés on our own horizon, we must carefully consider our approaches now to this rapidly developing technology and manage the risks. That means regulation. It just will not do to let the private sector get on with it and hope for the best.

What should this mean for regulation and legislation development? The key point for me is that the Government cannot effectively regulate something that they do not adequately understand. I may be wrong, but I do not think that the Government, or any noble Lord here today, will have a fully thought-through plan of action for the regulation of AI. We are in a highly unpredictable situation. To this end, the first thing that we need to think about is how we can implement a sovereign research capability in AI which will develop regulation in parallel.

Research and regulation are different sides of the same coin in this instance. We need to learn by doing, we need agencies that can attract top-class people and we need new models of governance that enable the agility and flexibility that will be required for public investment into AI research and regulation. Any attempt to fold this effort into a government department or a traditional public research organisation is simply not going to work.

So how should we go about this? It was a great privilege a few years back to help shape the Advanced Research and Invention Agency Act, and I am very pleased to see ARIA moving forward at this albeit early stage. There are a number of things we can draw from it regarding how we approach AI capability. AI capability is exactly the sort of high-risk, high-reward technology we would expect ARIA to be investing in. But if we agree that AI needs a research focus, we could perhaps set up a different organisation in the same way as ARIA, but with a specific focus on AI, and call it ARIA-AI; or we could even increase funding and provide that focus to an existing part of ARIA’s organisational set-up.

In Committee on the ARIA Bill, we debated extensively the potential to give ARIA a focus or aim similar to that of the United States’ Defence Advanced Research Projects Agency, and ARPA-E. The Government wanted ARIA to maintain the freedom to choose its own goals, but there is an opportunity now to look at this again and use the strengths of the ARIA concept—its set-up, governance structures and freedom of action—to help the UK move forward in this area.

This is similar in some respects to the “national laboratory” for AI proposed by Tony Blair and the noble Lord, Lord Hague, in their recent report. This research organisation, along with the AI task force, if set up in the right way, would advance research alongside regulation, enable a unique competitive advantage for the UK in this area and begin the process of solving AI safety problems.

This will all need to be backed up by the right levels of government support. This is one of those areas where we should fully commit as a nation to this effort, or not press on with it at all. I can think of a number of such examples. The Government’s aspiration to build up to exascale levels of computing by 2026 is very welcome but would give the entire British state the ability to train only one GPT-4 scale model four years after OpenAI did. In addition, before DeepMind was acquired by Google, it had an annual budget of approximately £1 billion a year, which gives a view of the scale of investment required. Can the Minister in summing up say what plans there are to scale up the Government’s ambitions in this area?

Finally, the Government’s recent White Paper outlines a pretty sensible approach in balancing management of the risks and opportunities of the technology, but as I said at the start, there are areas where it has perhaps been overtaken by events, in a field that is moving at breakneck speed—and that speed is the problem here. Unlike climate change, the full effects of which will not manifest for decades, or even centuries, AI is developing at an incredible pace. We therefore need to start thinking immediately about the initial regulatory frameworks. The Government could consider as a minimum putting the five principles in their White Paper on a statutory footing in the near term to provide regulators with enhanced powers to address the risks of AI.

Here are the bones of an AI Bill: legislating to set up a new research organisation, providing regulators with the right initial powers and securing the funding to sit behind all of this, which would at the same time build upon the world-leading AI development capabilities we now have in the UK. I beg to move.

--- Later in debate ---
Lord Ravensdale (CB)

My Lords, I thank the Minister for his reply, and thank all noble Lords who have taken part in what has been a most illuminating debate. The debate has achieved exactly what I hoped it would: in no other organisation would we get such a diverse range of knowledge and expertise applied to this question, as the noble Baroness, Lady Primarolo, said. We have touched on: ethics; the effects on society across a whole range of areas including business, healthcare and defence; risks; and regulation, where the noble Lord, Lord Clement-Jones, reminded us that regulation can be key to stimulating innovation and not just a barrier to innovation.

As we went through the debate and the subject of risks came up—particularly the point made by the noble Lord, Lord Fairfax—I was reminded of something my eight-year-old child said to me on the subject of AI. After a short discussion we had on the current state of play—the positives and negatives—he said, “We should stop inventing it, Daddy. I think we would be all right”. I think that, sometimes, we should listen to the wisdom of our children and reflect upon it.

Another key aspect was brought up by the noble Lord, Lord Rees, and others, on international collaboration and the need for an enforceable regulatory system with global range. As the noble Lord, Lord Watson, noted, we are presently stuck in something of a prisoner’s dilemma between nations. How do we break that down and find common interests between nations to resolve that? I would go back to the 1955 Russell-Einstein Manifesto. In the early days of the nuclear age, when we were thinking about the existential risks of nuclear weapons, that manifesto brought together scientists and policymakers to solve that issue, and a key quote from it was:

“Remember your humanity, and forget the rest”.
I look forward to the autumn conference the Government have organised. I also look forward to the King’s Speech, where I hope to see an AI Bill or at least some concrete steps forward on the topics discussed today. Regardless, going forward, we need to see a much greater input from Parliament on these questions.

Motion agreed.