Artificial Intelligence (Regulation) Bill [HL] Debate
Lord Leong (Labour - Life peer), Department for Science, Innovation & Technology
Lords Chamber
My Lords, I declare my technology interests as adviser to Boston Ltd. I thank all the organisations and individuals that took the trouble to meet with me ahead of the Second Reading of my Bill, shared their expertise and insight, and added to the positive, almost unified, voice that we had at Second Reading in March. I thank colleagues around the House and in the other place for their support, and particularly thank the Labour and Liberal Democrat Front Benches for their support on all the principles set out in the Bill. I also thank my noble friend the Minister for the time he took to meet with me at all stages of the Bill.
It is clear that, when it comes to artificial intelligence, it is time to legislate—it is time to lead. We know what we need to do, and we know what we need to know, in order to legislate. We know the impact that AI is already having on our creatives, on our IP and on our copyright, across all of that important part of our economy. We know the impact of having no labelling on IP products. Crucially, we know the areas where there is no competent legislation or regulator when it comes to AI decisions. Thus, there is no right of redress for consumers, individuals and citizens.
Similarly, it is time to legislate to end the illogicality that grew out of the Bletchley summit—successful in itself, but strange in that only a voluntary code, rather than something statutory, was put in place as a result of that summit. It was strange also to have stood up such a successful summit and then not to have sought to legislate for all the other areas of artificial intelligence already impacting people’s lives, oftentimes without them even knowing that AI is involved.
It is time to bring forward good legislation and the positive powers of right-size regulation. What this always brings is clarity, certainty, consistency, security and safety. When it comes to artificial intelligence, we do not currently have that in the United Kingdom. Clarity and certainty, craved by consumers and businesses, are drivers of innovation, inward investment, pro-consumer protection and pro-citizen rights. If we do not legislate, the most likely, and certainly unintended, consequence is that businesses and organisations looking for a life raft will understandably, but unfortunately, align with the EU AI Act. That is not the optimal outcome that we could secure.
It is clear that, when it comes to AI legislation and regulation, things are moving internationally, across our Parliament and—dare I say—in No. 10. With sincere thanks again to all those who have helped so much to get the Bill to this stage, I say again that it is time to legislate—it is time to lead #OurAIFutures.
My Lords, I regret that I was unable to speak at Second Reading of the Bill. I am grateful to the government Benches for allowing my noble friend Lady Twycross to speak on my behalf on that occasion. However, I am pleased to be able to return to your Lordships’ House with a clean bill of health, to speak at Third Reading of this important Bill. I congratulate the noble Lord, Lord Holmes of Richmond, on the progress of his Private Member’s Bill.
Having read the whole debate in Hansard, I think it is clear that there is consensus about the need for some kind of AI regulation. The purpose, form and extent of this regulation will, of course, require further debate. AI has the potential to transform the world and deliver life-changing benefits for working people: whether delivering relief through earlier cancer diagnosis or relieving traffic congestion for more efficient deliveries, AI can be a force for good. However, the most powerful AI models could, if left unchecked, spread misinformation, undermine elections and help terrorists to build weapons.
A Labour Government would urgently introduce binding regulation and establish a new regulatory innovation office for AI. This would make Britain the best place in the world to innovate, by speeding up decisions and providing clear direction based on our modern industrial strategy. We believe this will enable us to harness the enormous power of AI, while limiting potential damage and malicious use, so that it can contribute to our plans to get the economy growing and give Britain its future back.
The Bill sends an important message about the Government’s responsibility to acknowledge and address how AI affects people’s jobs, lives, data and privacy, in the rapidly changing technological environment in which we live. Once again, I thank the noble Lord, Lord Holmes of Richmond, for bringing it forward, and I urge His Majesty’s Government to give proper consideration to the issues raised. As ever, I am grateful to noble Lords across the House for their contributions. We support and welcome the principles behind the Bill, and we wish it well as it goes to the other place.
My Lords, I too sincerely thank my noble friend Lord Holmes for bringing forward the Bill. Indeed, I thank all noble Lords who have participated in what has been, in my opinion, a brilliant debate.
I want to reassure noble Lords that, since Second Reading of the Bill in March, the Government have continued to make progress in their regulatory approach to artificial intelligence. I will take this opportunity to provide an update on just a few developments in this space, some of which speak to the measures proposed by the Bill.
First, the Government want to build public visibility of what regulators are doing to implement our pro-innovation approach to AI. Noble Lords may recall that we wrote to key regulators in February asking them for an update on this. Regulators have now published their updates, which include an analysis of AI-related opportunities and risks in the areas that they regulate, and the actions that they are taking to address these. On 1 May, we published a GOV.UK page where people can access each regulator’s update.
We have taken steps to establish a multidisciplinary risk-monitoring function within the Department for Science, Innovation and Technology, bringing together expertise in risk, regulation and AI. This expertise will provide continuous examination of cross-cutting AI risks, including evaluating the effectiveness of interventions by government and regulators.