Artificial Intelligence (Regulation) Bill [HL]

Lord Vaizey of Didcot Excerpts
2nd reading
Friday 22nd March 2024


Lords Chamber
Lord Vaizey of Didcot (Con)

My Lords, I thought that this would be one of the rare debates where I did not have an interest to declare, but then I heard the noble Lord, Lord Young, talking about AI and education and realised that I am a paid adviser to Common Sense Media, a large US not-for-profit that campaigns for internet safety and has published the first ever ratings of AI applications used in schools. I refer the noble Lord to its excellent work in this area.

It is a pleasure to speak in the debate on this Bill, so ably put forward by the noble Lord, Lord Holmes. It is pretty clear from the reaction to his speech how much he is admired in this House for his work on this issue and so many others to do with media and technology, where he is one of the leading voices in public affairs. Let me say how humbling it is for me to follow the noble Baronesses, Lady Stowell and Lady Kidron, both of whom are experts in this area and have done so much to advance public policy.

I am a regulator and in favour of regulation. I strongly supported the Online Safety Act, despite the narrow byways and culs-de-sac it ended up in, because I believe that platforms and technology need to be accountable in some way. I do not support people who say that the job is too big to be attempted—we must attempt it. What I always say about the Online Safety Act is that the legislation itself is irrelevant; what is relevant is the number of staff and amount of expertise that Ofcom now has, which will make it one of the world’s leaders in this space.

We talk about AI now because it has come to the forefront of consumers’ minds through applications such as ChatGPT, but large language models and the use of AI have been around for many years. As AI becomes ubiquitous, it is right that we now consider how we could or should regulate it. Indeed, with the approaching elections, not just here in the UK but in the United States and other areas around the world, we will see the abuse of artificial intelligence, and many people will wring their hands about how on earth to cope with the plethora of disinformation that is likely to emerge.

I am often asked at technology events, which I attend assiduously, what the Government’s policy is on artificial intelligence. To a certain extent I have to make it up, but to a certain extent I think that, broadly speaking, I have it right. On the one hand, there is an important focus on safety for artificial intelligence to make it as safe as possible for consumers, which in itself raises the question of whether that is possible; on the other, there is a need to ensure that the UK remains a wonderful place for AI innovation. We are rightly proud that DeepMind, although owned by Google, wishes to stay in the UK. Indeed, in a tweet yesterday the Chancellor himself bigged up Mustafa Suleyman for taking on the role of leading AI at Microsoft. It is true that the UK remains a second-tier nation in AI after China and the US, but it is the leading second-tier nation.

The question now is: what do we mean by regulation? I do not necessarily believe that now is the moment to create an AI safety regulator. I was interested to hear the contribution of the noble and learned Lord, Lord Thomas, who referred to the 19th century. I refer him to the late 20th century and the early 21st century: the internet itself has long been self-regulated, at least in terms of the technology and the global standards that exist, so it is possible for AI to proceed largely on the basis of self-regulation.

The Government’s approach to regulation is the right one. We have, for example, the Digital Regulation Cooperation Forum, which brings together all the regulators that either obviously, such as Ofcom, or indirectly, such as the FCA, have skin in the game when it comes to digital. My specific request to the Minister is to bring the House up to date on the work of that forum and how he sees it developing.

I was surprised by the creation of the AI Safety Institute as a stand-alone body with such generous funding. It seems to me that the Government do not need legislation to examine the plethora of bodies that have sprung up over the last 10 or 15 years. Many of them do excellent work, but where their responsibilities begin and end is confusing. They include the Ada Lovelace Institute, the Alan Turing Institute, the AI Safety Institute, Ofcom and DSIT, but how do they all fit together into a clear narrative? That is the essential task that the Government must now undertake.

I will pick up on one remark that the noble Baroness, Lady Stowell, made. While we look at the flashy stuff, if you like, such as disinformation and copyright, she is quite right to say that we have to look at the picks and shovels as AI becomes more prevalent and as the UK seeks to maintain our lead. Boring but absolutely essential things such as power networks for data centres will be important, so they must also be part of the Government’s task.