Artificial Intelligence (Regulation) Bill [HL] Debate
Viscount Camrose (Conservative - Excepted Hereditary)
Department for Science, Innovation & Technology
(9 months ago)
Lords Chamber

I join other noble Lords in thanking my noble friend Lord Holmes for bringing forward this Bill. I thank all noble Lords who have taken part in this absolutely fascinating debate of the highest standard. We have covered a wide range of topics today. I will do my best to respond, hopefully directly, to as many points as possible, given the time available.
The Government recognise the intent of the Bill and the differing views on how we should go about regulating artificial intelligence. For reasons I will now set out, the Government would like to express reservations about my noble friend’s Bill.
First, with the publication of our AI White Paper in March 2023, we set out proposals for a regulatory framework that is proportionate, adaptable and pro-innovation. Rather than designing a new regulatory system from scratch, the White Paper proposed five cross-sectoral principles, which include safety, transparency and fairness, for our existing regulators to apply within their remits. The principles-based approach will enable regulators to keep pace with the rapid technological change of AI.
The strength of this approach is that regulators can act now on AI within their own remits. This common-sense, pragmatic approach has won endorsement from leading voices across civil society, academia and business, as well as many of the companies right at the cutting edge of frontier AI development. Last month we published an update through the Government’s response to the consultation on the AI White Paper. The White Paper response outlines a range of measures to support existing regulators to deliver against the AI regulatory framework. This includes providing further support to regulators to deliver the regulatory framework through a boost of more than £100 million to upskill regulators and help unlock new AI research and innovation.
As part of this, we announced a £10 million package to jump-start regulators’ AI capabilities, preparing and upskilling regulators to address the risks and to harness the opportunities of this defining technology. It also includes publishing new guidance to support the coherent implementation of the principles. To ensure robust implementation of the framework, we will continue our work to establish the central function.
Let me reassure noble Lords that the Government take mitigating AI risks extremely seriously. That is why several aspects of the central function have already been established, such as the central AI risk function, which will shortly be consulting on its cross-economy AI risk register. Let me reassure the noble Lord, Lord Empey, that the AI risk function will maintain a holistic view of risks across the AI ecosystem, including misuse risks, such as where AI capabilities may be leveraged to undermine cybersecurity.
Specifically on criminality, the Government recognise that the use of AI in criminal activity is a very important issue. We are working with a range of stakeholders, including regulators, and a range of legal experts to explore ways in which liability, including criminal liability, is currently allocated through the AI value chain.
In the coming months we will set up a new steering committee, which will support and guide the activities of a formal regulator co-ordination structure within government. We also wrote to key regulators, requesting that they publish their AI plans by 30 April, setting out how they are considering, preparing for and addressing AI risks and opportunities in their domain.
As for the next steps for ongoing policy development, we are developing our thinking on the regulation of highly capable general-purpose models. Our White Paper consultation response sets out key policy questions related to possible future binding measures, which we are exploring with experts and our international partners. We plan to publish findings from this expert engagement and an update on our thinking later this year.
We also confirmed in the White Paper response that we believe legislative action will be required in every country once the understanding of risks from the most capable AI systems has matured. However, legislating too soon could easily result in measures that are ineffective against the risks, are disproportionate or quickly become out of date.
Finally, we make clear that our approach is adaptable and iterative. We will continue to work collaboratively with the US, the EU and others across the international landscape to both influence and learn from international development.
I turn to key proposals in the Bill that the noble Lord has tabled. On the proposal to establish a new AI authority, it is crucial that we put in place agile and effective mechanisms that will support the coherent and consistent implementation of the AI regulatory framework and principles. We believe that a non-statutory central function is the most appropriate and proportionate mechanism for delivering this at present, as we observe a period of non-statutory implementation across our regulators and conduct our review of regulator powers and remits.
In the longer term, we recognise that there may be a case for reviewing how and where the central function has delivered, once its functions have become more clearly defined and established, including whether the function is housed within central government or in a different form. However, the Government feel that this would not be appropriate for the first stage of implementation. To that end, as I mentioned earlier, we are delivering the central function within DSIT, to bring coherence to the regulatory framework. The work of the central function will provide clarity and ensure that the framework is working as intended and that joined-up and proportionate action can be taken if there are gaps in our approach.
We recognise the need to assess the existing powers and remits of the UK’s regulators to ensure they are equipped to address AI risks and opportunities in their domains and to implement the principles consistently and comprehensively. We anticipate having to introduce a statutory duty on regulators requiring them to have due regard to the principles after an initial period of non-statutory implementation. For now, however, we want to test and iterate our approach. We believe this approach offers critical adaptability, but we will keep it under review; for example, by assessing the updates on strategic approaches to AI that several key regulators will publish by the end of April. We will also work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits.
Like many noble Lords, we see approaches such as regulatory sandboxes as a crucial way of helping businesses navigate the AI regulatory landscape. That is why we have funded the four regulators in the Digital Regulation Cooperation Forum to pilot a new, multi-agency advisory service known as the AI and digital hub. We expect the hub to launch in mid-May, and we will provide further details in the coming weeks on when this service will be open for applications from innovators.
One of the principles at the heart of the AI regulatory framework is accountability and governance. We said in the White Paper that a key part of implementation of this principle is to ensure effective oversight of the design and use of AI systems. We have recognised that additional binding measures may be required for developers of the most capable AI systems and that such measures could include requirements related to accountability. However, it would be too soon to mandate measures such as AI-responsible officers, even for these most capable systems, until we understand more about the risks and the effectiveness of potential mitigations. This could quickly become burdensome in a way that is disproportionate to risk for most uses of AI.
Let me reassure my noble friend Lord Holmes that we continue to work across government to ensure that we are ready to respond to the risks to democracy posed by deep fakes; for example, through the Defending Democracy Taskforce, as well as through existing criminal offences that protect our democratic processes. However, we should remember that AI labelling and identification technology is still at an early stage. No specific technology has yet been proven to be both technically and organisationally feasible at scale. It would not be right to mandate labelling in law until the potential benefits and risks are better understood.
Noble Lords raised the importance of protecting intellectual property, a profoundly important subject. In the AI White Paper consultation response, the Government committed to provide an update on their approach to AI and copyright issues soon. I am confident that, when we do so, it will address many of the issues that noble Lords have raised today.
In summary, our approach, combining a principles-based framework, international leadership and voluntary measures on developers, is right for today, as it allows us to keep pace with rapid and uncertain advances in AI. The UK has successfully positioned itself as a global leader on AI, in recognition of the fact that AI knows no borders and that its complexity demands nuanced international governance. In addition to spearheading thought leadership through the AI Safety Summit, the UK has supported effective action through the G7, the Council of Europe, the OECD, the G5, the G20 and the UN, among other bodies. We look forward to continuing to engage with all noble Lords on these critical issues as we continue to develop our regulatory approach.
Artificial Intelligence (Regulation) Bill [HL] Debate
(7 months, 1 week ago)
Lords Chamber

My Lords, I regret that I was unable to speak at Second Reading of the Bill. I am grateful to the government Benches for allowing my noble friend Lady Twycross to speak on my behalf on that occasion. However, I am pleased to be able to return to your Lordships’ House with a clean bill of health, to speak at Third Reading of this important Bill. I congratulate the noble Lord, Lord Holmes of Richmond, on the progress of his Private Member’s Bill.
Having read the whole debate in Hansard, I think it is clear that there is consensus about the need for some kind of AI regulation. The purpose, form and extent of this regulation will, of course, require further debate. AI has the potential to transform the world and deliver life-changing benefits for working people: whether delivering relief through earlier cancer diagnosis or relieving traffic congestion for more efficient deliveries, AI can be a force for good. However, the most powerful AI models could, if left unchecked, spread misinformation, undermine elections and help terrorists to build weapons.
A Labour Government would urgently introduce binding regulation and establish a new regulatory innovation office for AI. This would make Britain the best place in the world to innovate, by speeding up decisions and providing clear direction based on our modern industrial strategy. We believe this will enable us to harness the enormous power of AI, while limiting potential damage and malicious use, so that it can contribute to our plans to get the economy growing and give Britain its future back.
The Bill sends an important message about the Government’s responsibility to acknowledge and address how AI affects people’s jobs, lives, data and privacy, in the rapidly changing technological environment in which we live. Once again, I thank the noble Lord, Lord Holmes of Richmond, for bringing it forward, and I urge His Majesty’s Government to give proper consideration to the issues raised. As ever, I am grateful to noble Lords across the House for their contributions. We support and welcome the principles behind the Bill, and we wish it well as it goes to the other place.
My Lords, I too sincerely thank my noble friend Lord Holmes for bringing forward the Bill. Indeed, I thank all noble Lords who have participated in what has been, in my opinion, a brilliant debate.
I want to reassure noble Lords that, since Second Reading of the Bill in March, the Government have continued to make progress in their regulatory approach to artificial intelligence. I will take this opportunity to provide an update on just a few developments in this space, some of which speak to the measures proposed by the Bill.
First, the Government want to build public visibility of what regulators are doing to implement our pro-innovation approach to AI. Noble Lords may recall that we wrote to key regulators in February asking them for an update on this. Regulators have now published their updates, which include an analysis of AI-related opportunities and risks in the areas that they regulate, and the actions that they are taking to address these. On 1 May, we published a GOV.UK page where people can access each regulator’s update.
We have taken steps to establish a multidisciplinary risk-monitoring function within the Department for Science, Innovation and Technology, bringing together expertise in risk, regulation and AI. This expertise will provide continuous examination of cross-cutting AI risks, including evaluating the effectiveness of interventions by government and regulators.
Before the noble Viscount sits down, he listed a whole series of activities that are very welcome, but I said at Second Reading that I felt the Government were losing momentum, because the Prime Minister had set an international lead: the United Kingdom was going to lead the world and would be an example to everybody. It seems, with the Minister’s statement, that we have slipped back now. The European Union has set out its stall. If we are not going to have a legislative framework, we need to know that. I just hope the Government will reflect that the position the Prime Minister adopted at the beginning of this process was innovative, positive and good for the United Kingdom as a whole, but I fear that the loss of momentum means we will be slipping back down at a very rapid rate.
I thank the noble Lord for his comments. I am not sure I accept the characterisation of a loss of momentum. We are, after all, co-hosting the AI safety summit along with our Korean friends in a couple of weeks. On moving very quickly to legislation, it has always been the Government’s position that it is better to have a deeper understanding of the specific risks of AI across each sector and all sectors before legislating too narrowly, and that there is a real advantage in waiting for the right moment to bring forward judicious legislation that addresses specific risks, rather than blanket legislation that attempts to cover all of them.