Lord Fairfax of Cameron: 3 debates involving the Department for Science, Innovation & Technology

King’s Speech (4th Day)

Monday 22nd July 2024

Lords Chamber

Lord Fairfax of Cameron (Con)

My Lords, I too congratulate the two main speakers on their impressive—or, in the case of the noble Lord, Lord Petitgas, perhaps it is better to say “formidable”—maiden speeches. I also declare an interest as the co-editor of a forthcoming book, to be published by Springer Nature, on the future of artificial intelligence.

In the King’s Speech, the new Government said that they will

“seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.

That is more than the last Government were willing to do, but it is still not an AI Bill. Given that the leading tech companies continue to compete fiercely against each other to be the first to achieve artificial general intelligence, for the sake of the world I sincerely hope that this proves to be the right approach, especially as the Government’s safety institutes and tech ministries are still playing catch-up on AI and, in particular, its frontier models.

However, what I really want to address very briefly this evening is the future of jobs and work as the AI wave starts to wash over the world, and to express my hope that this new Government will devote the necessary time and resources to thinking about and planning for the potentially massive employment, economic and societal consequences of that wave.

Discussions about the possible need for a universal basic income have been around for years. McKinsey wrote about the future of work in the new world of AI eight years ago and has just done so again, finding that by 2030 up to 30% of hours worked worldwide could be automated. Goldman Sachs wrote last year about the possible loss of 300 million jobs worldwide to AI and automation, and just last week my noble friend Lady Moyo wrote:

“many fear that AI will contribute to long-term structural unemployment, creating a jobless class that will include both skilled and unskilled”

labour. At the extreme, Elon Musk has expressed the opinion that in future AI will take all human jobs, because it will be able to do them more cheaply and efficiently than humans can.

The societal impact of any of these scenarios is potentially seismic for this country and the world. I emphasise that I am not overlooking the many new jobs that AI will create that did not previously exist—head of AI positions and prompt engineers, to name but two. However, many well-informed people consider that this time is different. Because the societal and employment impacts of this AI wave will be so seismic, I hope that this new Government will start to think seriously and deeply about planning for the new world of AI, which we and the rest of the world are just entering.

Artificial Intelligence (Regulation) Bill [HL]

Lord Fairfax of Cameron (Con)

My Lords, I too congratulate my noble friend Lord Holmes on bringing forward this AI regulation Bill, in the context of the Government’s continuing failure to legislate in this area. At the same time, I declare my interest as a long-term investor in at least one fund that invests in AI and tech companies.

A year ago, one of the so-called godfathers of AI, Geoffrey Hinton, cried “fire” about where AI was going and, more importantly, when. Just last week, following the International Dialogue on AI Safety in Beijing, a joint statement was issued by leading western and Chinese figures in the field, including the Chinese Turing Award winner Andrew Yao, Yoshua Bengio and Stuart Russell. Among other things, that statement said:

“Unsafe development, deployment, or use of AI systems may pose catastrophic or even existential risks to humanity within our lifetimes … We should immediately implement domestic registration for AI models and training runs above certain compute or capability thresholds”.

Of course, we are talking about not only extinction risks but other very concerning risks, some of which have been mentioned by my noble friend Lord Holmes: extreme concentration of power, deepfakes and disinformation, wholesale copyright infringement and data-scraping, military abuse of AI in the nuclear area, the risk of bioterrorism, and the opacity and unreliability of some AI decision-making, to say nothing of the risk of mass unemployment. Ian Hogarth, the head of the UK AI Safety Institute, has written in the past about some of these concerns and risks.

Nevertheless, despite signing the Center for AI Safety statement and publicly admitting many of these serious concerns, the leading tech companies continue to race against each other towards the holy grail of artificial general intelligence. Why is this? Well, as they say, “It’s the money, stupid”. It is estimated that, between 2020 and 2022, $600 billion in total was invested in AI development, and much more has been invested since. That compares with the pitifully small sums invested by the AI industry in AI safety, and with the £10 million that this Government have so far provided. These factors have led many people to ask how they have accidentally outsourced their entire futures to a few tech companies and their leaders. Ordinary people have a pervading sense of powerlessness in the face of AI development.

These facts also raise the question of why the Government continue to delay putting in place proper and properly funded regulatory frameworks. Others, such as the EU, US, Italy, Canada and Brazil, are taking steps towards regulation, while, as noble Lords have heard, China has already regulated and India plans to regulate this summer. Here, the shadow IT Minister has indicated that, if elected, a new Labour Government would regulate AI. Given that a Government’s primary duty is to keep their country safe, as we have so often heard recently in relation to the defence budget, this continuing delay is both strange and concerning.

Why is this? There is a strong suspicion in some quarters that the Prime Minister, having told the public immediately before the Bletchley conference that AI brings national security risks that could end our way of life, and that AI could pose an extinction risk to humanity, has since succumbed to regulatory capture. Some also think that the Government do not want to jeopardise relations with leading tech companies while the AI Safety Institute is gaining access to their frontier models. Indeed, the Government proudly state that they

“will not rush to legislate”,

reinforcing the concern that the Prime Minister may have gone native on this issue. In my view, this deliberate delay on the part of the Government is seriously misconceived and very dangerous.

What have the Government done to date? To their credit, they organised and hosted Bletchley and, importantly, got China to attend too. Since then, they have narrowed the gap between themselves and the tech companies—but the big issues remain, particularly the critical issue of regulation versus self-regulation. Importantly, and to their credit, the Government have also set up the UK AI Safety Institute, with some impressive senior hires. However, no one should be in any doubt that this body is not a regulator. On the continuing absence of a dedicated unitary AI regulator, it is simply not good enough for the Government to say that the various relevant government bodies will co-operate on oversight of AI. It is obvious to almost everyone, apart from the Government themselves, that a dedicated, unitary, high-expertise and very well-funded UK AI regulator is required now.

The recent Gladstone AI report, commissioned by the US Government, has highlighted similar risks to US national security from advanced AI development. Against this concerning background, I strongly applaud my noble friend Lord Holmes for bringing forward the Bill. It can no doubt be improved, but its overall intention and thrust are absolutely right.

Advanced Artificial Intelligence

Monday 24th July 2023

Lords Chamber

Lord Fairfax of Cameron (Con)

My Lords, it is a great pleasure to follow the right reverend Prelate. I declare my interest as a member of the AI in Weapon Systems Committee. I very much thank the noble Lord, Lord Ravensdale, for choosing for debate a subject that arguably now trumps all others in the world in importance. I also owe a debt of gratitude to a brilliant young researcher at Cambridge who works on AI risk and impacts.

I could, but do not have the time to, discuss the enormous benefits that AI may bring and some of the well-known risks: possible extreme concentration of power; mass surveillance; disinformation and manipulation, for example of elections; and the military misuse of AI—to say nothing of the possible loss, as estimated by Goldman Sachs, of 300 million jobs globally to AI. Rather, in my five minutes I will focus on the existential risks that may flow from humans creating an entity that is more intelligent than we are. Five minutes is not long to discuss the possible extinction of humanity, but I will do my best.

Forty years ago, if you said some of the things I am about to say, you were called a fruitcake and a Luddite, but no longer. What has changed in that time? The change results mainly from the enormous development of machine learning over the last 10 years and its very broad applicability—for example, to distinguishing images and audio, learning patterns in language and simulating the folding of proteins—provided you have the enormous financial resources necessary to do it.

Where is all this going? Richard Ngo, a researcher at OpenAI and previously at DeepMind, has publicly said that there is a 50:50 chance that by 2025 neural nets will, among other things, be able to understand that they are machines and how their actions interface with the world, and to autonomously design, code and distribute all but the most complex apps. Of course, the world knows all about ChatGPT.

At the extreme, artificial systems could solve strategic-level problems better than human institutions, disempower humanity and lead to catastrophic loss of life and value. Godfathers of AI such as Geoffrey Hinton and Yoshua Bengio now predict that such things may become possible within the next five to 20 years. Despite two decades of concentrated effort, there has been no significant progress on the problems of alignment and control, nor any consensus among AI researchers about credible proposals for solving them. This recently led many senior AI academics—including, I emphasise, some prominent Chinese ones—as well as the leaderships of Microsoft, Google, OpenAI, DeepMind and Anthropic, among others, to sign a short public statement, hosted by the Center for AI Safety:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

In other words, they are shouting “Fire!” and the escape window may be closing fast.

As a result, many Governments and international institutions, such as the UN and the EU, are suddenly waking up to the threats posed by advanced AI. The Government here are to host a global AI safety summit this autumn, apparently, but Governments, as some have said, are starting miles behind the start line. It will be critical for that summit to get the right people in the room and in particular not to allow the tech giants to regulate themselves. As Nick Bostrom wrote:

“The best path towards the development of beneficial superintelligence is one where AI developers and AI safety researchers are on the same side”.

What might AI regulation look like? Among other things, as the noble Lord, Lord Ravensdale, said, Governments need to significantly increase the information they have about the technological frontiers. The public interest far outweighs commercial secrecy. This challenge is global; AI, like a pandemic, knows no boundaries.

Regulation should document the well-known and anticipated harms of societal-scale AI and incentivise developers to address these harms. Best practice for the trustworthy development of advanced AI systems should include regular risk assessments, red teaming, third-party audits, mandatory public consultation, post-deployment monitoring, incident reporting and redress.

There are those who say that this is all unwarranted scaremongering—a view some have touched on this afternoon—and that “there is nothing to see here”. But that is not convincing, because those people, and they know who they are, are transparently talking their own commercial and corporate book. I also pray in aid the following well-known question: would you board an aeroplane if the engineers who designed it said that it had a 5% chance of crashing? Some, such as Eliezer Yudkowsky, say that we are already too late and that, as with Oppenheimer, the genie is out of the bottle; all humanity can do is die with dignity in the face of superhuman AI.

Nevertheless, there are some very recent possible causes for hope, such as the just-announced White House voluntary commitments by the large tech companies and the Prime Minister’s appointment of Ian Hogarth as the chair of the UK Government’s AI Foundation Model Taskforce.

For the sake of humanity, I end with the words of Dylan Thomas:

“Do not go gentle into that good night …

Rage, rage against the dying of the light”.

Given that we are starting miles behind the start line, I refer to Churchill’s well-known exhortation: “Action this day”.