(6 months, 2 weeks ago)
Well, some of the enforcement measures under the Online Safety Act do allow for very significant moves against social media platforms that misuse their scale and presence to malign ends in this way, but of course the noble Lord is absolutely right and we will continue to look closely at the moves by the Biden Administration to see what we can learn from them for our approach.
My Lords, I pay tribute to Andy Street for the way he responded to the circumstances in what was an incredibly close race. He must have been hugely disappointed. Sadly, another candidate in that race has since made false accusations of racism against a Labour volunteer, posting the volunteer’s name, picture and social media account, with the result that the volunteer subsequently received death threats by both telephone and email. Will the Minister join all noble Lords in condemning this kind of behaviour and confirm that, in his view, attacking party volunteers falls fully within the range of threats to the democratic process?
First, let me absolutely endorse the noble Lord’s sentiment: this is a deplorable way to behave and it should not be tolerated. From the noble Lord’s description of these actions, my assumption is that they would fall foul of the false communications offence under Section 179 of the Online Safety Act. As I say, these actions are absolutely unacceptable.
(9 months, 1 week ago)
I start by acknowledging that the creation of intimate image deepfakes using AI or other means is abusive and deeply distressing to anyone concerned and very disturbing to all of us. The Law Commission consulted widely on this, looking at the process of taking, making, possessing and sharing deepfakes, and its conclusion was that the focus of legislative effort ought to be on sharing, which it now is. That said, this is a fast-moving space. The capabilities of these tools are growing rapidly and, sadly, the number of users is growing rapidly, so we will continue to monitor that.
My Lords, the applications referred to in the excellent Question put by the noble Baroness, Lady Owen, represent a dangerous and overwhelmingly misogynistic trend of non-consensual deepfake pornography. They can be developed and distributed only because of advances in AI, and they sit alongside the use of deepfakes for political disinformation and fraud. Polling suggests public ambivalence towards AI but near unanimity on deepfakes: 80% of people support a ban, according to a recent YouGov survey. Cloud computing and the services hosting AI models are essential for deepfake creation, and the fact that all the major cloud suppliers have a presence in the UK uniquely empowers our Government to enforce best practice. Does the Minister agree that our regulatory system should not merely ban deepfakes but go further, imposing on developers a duty to show how they are applying existing techniques and restrictions that could prevent deepfake creation in the first place?
An outright ban on the creation of any deepfake material presents a number of challenges, but obviously I applaud the sentiment behind the question. With respect particularly to deepfakes involved in intimate image abuse, we are clearly putting in place the offence of sharing, whether as part of the new intimate image abuse offences in the Online Safety Act that commenced two weeks ago, as part of the Criminal Justice Bill shortly to come before your Lordships’ House, or indeed under the existing child sexual exploitation and abuse offences. There are severe penalties for the sharing of intimate image abuse deepfakes, but it is a fast-moving space and we have to continue to monitor it.
(11 months, 3 weeks ago)
To ask His Majesty’s Government, further to the Bletchley Declaration, what timescale they believe is appropriate for the introduction of further UK legislation to regulate artificial intelligence.
Regulators have existing powers that enable them to regulate AI within their remits and are already actively doing so. For example, the CMA has now published its initial review of foundation models. The AI regulation White Paper set out our adaptive, evidence-based regulatory framework, which allows government to respond to new risks as AI develops. We will be setting out an update on our regulatory approach through the White Paper consultation response shortly.
My Lords, two weeks ago, France, Germany and Italy published a joint paper on AI regulation, executive orders have already committed the US to specific regulatory guardrails, and the debate about the EU’s AI Act is ongoing. By contrast, we appear to have adopted a policy that may generously be described as masterly inactivity. Apart from waiting for Professor Bengio’s report, what steps are the Government taking to give the AI sector and the wider public some idea of the approach the UK will take to mitigate and regulate risk in AI? I hope the Minister can answer this: in the meantime, what is the legal basis for the use of AI in sensitive areas of the public sector?
I think I would regret a characterisation of AI regulation in this country as non-existent. All regulators and their sponsoring government departments are empowered to act on AI and are actively doing so. They are supported and co-ordinated in this activity by new and existing central AI functions: the central AI risk function, the CDEI, the AI standards hub and others. That work is ongoing. It is an adaptive model which, as far as I am aware, puts us behind no one in regulating AI; as evidence emerges we will adapt it further, which will allow us to maintain the balance between AI safety and innovation. With respect to the noble Lord’s second question, I will happily write to him.
(1 year ago)
I am pleased to reassure the noble Lord that I am not embarrassed in the slightest. Perhaps I can come back with a quotation from Yann LeCun, one of the three godfathers of AI, who said in an interview the other week that regulating AI now would be like regulating commercial air travel in 1925. We can more or less grasp theoretically what it might do, but we simply do not have the grounding to regulate properly because we lack the evidence. Our path to AI safety is to search for the evidence and, based on that evidence, to regulate accordingly.
My Lords, an absence of regulation in an area with such enormous repercussions for the whole of society will not spur innovation but may impede it. The US executive order and the EU’s AI Act have given AI innovators and companies in both of those substantial markets greater certainty. Will it not be the case that innovators and companies in this country will comply with that regulation because they will want to trade in those markets, and that we will then be left with external regulation and none of our own? Why are the Government not doing something about this?
I think there are two things. First, we are extremely keen, and have set this out in the White Paper, that the regulation of AI in this country should be highly interoperable with international regulation; I think all countries regulating AI would agree on that. Secondly, I take some issue with the characterisation of AI in this country as unregulated. We have very large areas of law and regulation to which all AI is subject, including data protection, human rights legislation, competition law, equalities law and many other laws. On top of that, we have the recently created central AI risk function, whose role is to identify emerging and cross-cutting AI risks and to take them forward. On top of that, we have the most concentrated and advanced thinking on AI safety anywhere in the world, to take us forward on the pathway towards safe, trustworthy AI that drives innovation.