(2 months ago)
We have signed the convention and will bring it forward in the usual way—it will not happen overnight—providing a chance for wide consultation and consideration in Committee as it is laid before Parliament. The AI Bill itself is of course a different proposition.
My Lords, Article 3 of the framework convention, at the insistence of the United States, is discretionary in nature, offering signatory states a choice as to how to apply the convention’s principles to private actors, including those operating at the state’s behest. Given this and the somewhat vague nature of the enforcement procedures contained in Article 23, how does my noble friend the Minister envisage this convention affecting the operations of private firms contracted to supply, for example, facial recognition software—much flawed—to the Home Office and police forces?
(7 months, 1 week ago)
Well, some of the enforcement measures under the Online Safety Act do allow for very significant moves against social media platforms that misuse their scale and presence to malign ends in this way, but of course the noble Lord is absolutely right and we will continue to look closely at the moves by the Biden Administration to see what we can learn from them for our approach.
My Lords, I pay tribute to Andy Street for the way he responded to the circumstances in what was an incredibly close race. He must have been hugely disappointed. Sadly, another candidate in that race has since made false accusations of racism against a Labour volunteer, posting the volunteer’s name, picture and social media account, with the result that the volunteer subsequently received death threats in both calls and emails. Will the Minister join all noble Lords in condemning this kind of behaviour and confirm that, in his view, attacking party volunteers falls fully within the range of threats to the democratic process?
First, let me absolutely endorse the noble Lord’s sentiment: this is a deplorable way to behave that should not be tolerated. From hearing the noble Lord speak of the actions, my assumption is that they would fall foul of the false communications offence under Section 179 of the Online Safety Act. As I say, these actions are absolutely unacceptable.
(10 months ago)
I start by acknowledging that the creation of intimate image deepfakes using AI or other means is abusive, deeply distressing to anyone concerned and very disturbing to all of us. The Law Commission consulted widely on this, looking at the process of taking, making, possessing and sharing deepfakes, and its conclusion was that the focus of legislative effort ought to be on sharing, which it now is. That said, this is a fast-moving space. The capabilities of these tools are growing rapidly and, sadly, so is the number of users, so we will continue to monitor that.
My Lords, the applications referred to in the excellent Question put by the noble Baroness, Lady Owen, represent a dangerous and overwhelmingly misogynistic trend of non-consensual deepfake pornography. They can be developed and distributed only because of advances in AI, and they sit alongside the use of deepfakes for political disinformation and fraud. Polling suggests public ambivalence towards AI but near unanimity around deepfakes, with 80% of people supporting a ban, according to a recent YouGov survey. Cloud computing and services hosting AI models are essential for deepfake creation, and the fact that all major cloud suppliers have a presence in the UK uniquely empowers our Government to enforce best practice. Does the Minister agree that our regulatory system should not merely ban deepfakes but go further, imposing on developers a duty to show how they are applying existing techniques and restrictions that could prevent their creation in the first place?
An outright ban on the creation of any deepfake material presents a number of challenges, but obviously I applaud the sentiment behind the question. With respect particularly to deepfakes involved in intimate image abuse, we are clearly putting in place the offence of sharing, whether as part of the new intimate image abuse offences in the Online Safety Act that commenced two weeks ago, as part of the Criminal Justice Bill shortly to come before your Lordships’ House, or indeed under the existing child sexual exploitation and abuse offences. There are severe penalties for the sharing of intimate image abuse deepfakes, but it is a fast-moving space and we have to continue to monitor it.
(1 year ago)
To ask His Majesty’s Government, further to the Bletchley Declaration, what timescale they believe is appropriate for the introduction of further UK legislation to regulate artificial intelligence.
Regulators have existing powers that enable them to regulate AI within their remits and are already actively doing so. For example, the CMA has now published its initial review of foundation models. The AI regulation White Paper set out our adaptive, evidence-based regulatory framework, which allows government to respond to new risks as AI develops. We will be setting out an update on our regulatory approach through the White Paper consultation response shortly.
My Lords, two weeks ago, France, Germany and Italy published a joint paper on AI regulation, executive orders have already committed the US to specific regulatory guardrails, and the debate about the EU’s AI Act is ongoing. By contrast, we appear to have adopted a policy that may generously be described as masterly inactivity. Apart from waiting for Professor Bengio’s report, what steps are the Government taking to give the AI sector and the wider public some idea of the approach the UK will take to mitigate and regulate risk in AI? I hope the Minister can answer this: in the meantime, what is the legal basis for the use of AI in sensitive areas of the public sector?
I think I would regret a characterisation of AI regulation in this country as non-existent. All regulators and their sponsoring government departments are empowered to act on AI and are actively doing so. They are supported and co-ordinated in this activity by new and existing central AI functions: the central AI risk function, the CDEI, the AI standards hub and others. That work is ongoing. It is an adaptive model which, so far as I am aware, puts us behind no one in regulating AI; as evidence emerges we will adapt it further, allowing us to maintain the balance between AI safety and innovation. With respect to the noble Lord’s second question, I will happily write to him.
(1 year, 1 month ago)
I am pleased to reassure the noble Lord that I am not embarrassed in the slightest. Perhaps I can come back with a quotation from Yann LeCun, one of the three godfathers of AI, who said in an interview the other week that regulating AI now would be like regulating commercial air travel in 1925. We can more or less theoretically grasp what it might do, but we simply do not have the grounding to regulate properly because we lack the evidence. Our path to the safety of AI is to search for the evidence and, based on the evidence, to regulate accordingly.
My Lords, an absence of regulation in an area that holds such enormous repercussions for the whole of society will not spur innovation but may impede it. The US executive order and the EU’s AI Act gave AI innovators and companies in both these substantial markets greater certainty. Will it not be the case that innovators and companies in this country will comply with that regulation because they will want to trade in that market, and we will then be left with external regulation and none of our own? Why are the Government not doing something about this?
I think there are two things. First, we are extremely keen, and have set this out in the White Paper, that the regulation of AI in this country should be highly interoperable with international regulation—I think all countries regulating would agree on that. Secondly, I take some issue with the characterisation of AI in this country as unregulated. We have very large areas of law and regulation to which all AI is subject. That includes data protection, human rights legislation, competition law, equalities law and many other laws. On top of that, we have the recently created central AI risk function, whose role is to identify risks appearing on the horizon, or indeed cross-cutting AI risks, and to take that work forward. On top of that, we have the most concentrated and advanced thinking on AI safety anywhere in the world to take us forward on the pathway towards safe, trustworthy AI that drives innovation.
(1 year, 4 months ago)
My Lords, it is a distinct pleasure to follow the noble Baroness, Lady Stowell of Beeston. I associate myself with her words of commendation and congratulation to the noble Lord, Lord Ravensdale; it is entirely appropriate that this debate be led by someone with the lived experience of an engineer, and in the noble Lord we have found that person.
Mindful of time, I will limit myself to asking a few diagnostic questions with which I hope the Minister will engage. I believe that they raise issues which are essential to harnessing the benefits of AI, if that is to be done in a manner which is both sustainable and enjoys public consent and trust. Using the incremental process of legislation to govern a technology characterised by continual exponential leaps is not easy. Though tempting, the answer is not to oscillate between the poles of a false dichotomy, with regulatory rigour on one side and innovation on the other. Like climate change, AI is a potentially existential risk, one charted by ever deepening scientific understanding, emerging opportunities and demonstrable risks.
It is not always true that an absence of government action means liberation for business or innovation, especially when business and innovation know that more comprehensive regulation is on the horizon. Clear signals from the Government about the direction of regulation, even in advance of legislation, will not inhibit the AI sector but will give it greater confidence in planning, resourcing and pursuing technological advances. My first question is: given that the Prime Minister last month announced his intention for the UK to play a role in leading the world in AI regulation, how does he plan to shape an international legal framework when our own is still largely hypothetical? When do the Government plan to devote parliamentary time to bringing forward some instrument or statement which will deal squarely with the future direction of domestic AI regulation? The President of the United States is already doing this, to ensure, in his own words, that
“innovation doesn’t come at the expense of Americans’ rights and safety”.
I am mindful too of machinery of government issues. Like climate change, AI cuts across apparently discrete areas and will have consequences for all areas of government policy-making. Of course, as a member of the AI in Weapons Systems Select Committee of your Lordships’ House, I am conscious that the ethical implications of AI for national defence are sparking great concern. But, as the Government’s White Paper made clear, we envisage a role for AI in everything, from health and energy policy to law enforcement and intelligence gathering. It is therefore imperative that the Government establish clear lines of accountability within Whitehall so that these intersections between discrete areas of policy-making are monitored and only appropriate innovation is encouraged.
Briefings we all received in anticipation of this debate highlight growing concern over the lack of transparency and accountability about the existing use of AI in areas such as policing and justice, with particular emphasis on pursuing alleged benefit fraud. The Dutch example, in which an algorithmic system for detecting childcare benefit fraud wrongly accused thousands of families, should be a lesson to us all.
I should be grateful if the Minister would describe how the current formal structures interact, as well as the degree to which No. 10 provides a central co-ordinating role. As the AI Council recedes from view, and as the Centre for Data Ethics and Innovation’s newly appointed executive director and apparently refreshed board get to grips with how to support the delivery of priorities set out in the Government’s National Data Strategy, my second question to the Minister is whether he feels that the recommendations in the recent joint Blair-Hague report should be under active consideration, especially that of having the Foundation Model Taskforce report directly to the Prime Minister. That may be a useful step towards achieving better co-ordination on AI across government. If not, why not?
In preparing for today, I had the pleasure of tracking the Government’s publications on this issue over the past three years or so. In each of them, they quite rightly emphasise the importance of public trust and consent. From my experience as a member of the AI in Weapons Systems Select Committee, I note that, in the first section of the executive summary of the Defence Artificial Intelligence Strategy, the Government’s vision is to be “the world’s most … trusted” organisation for AI in defence. An essential element of that trust, we are told, is that the use of AI-enabled weapons systems will be restricted to the extent of the tolerance of the UK public. The maintenance of public trust and support will be a constant qualification of the principles that inform the use of AI-enabled systems. As there has never been any public consultation on the defence AI strategy, how will the Government, on our behalf, determine the limits of the public’s tolerance? My own research has revealed that this is very difficult to measure unless that tolerance or opinion is informed, and the Centre for Data Ethics and Innovation’s polling corroborates this. What steps are the Government taking to educate the public, so that they have an informed basis on which to decide their individual or collective tolerance of, or level of trust in, the use of AI for any purpose, never mind defence?