Monday 4th December 2023

Lords Chamber
Question
14:54
Asked by
Lord Browne of Ladyton

To ask His Majesty’s Government, further to the Bletchley Declaration, what timescale they believe is appropriate for the introduction of further UK legislation to regulate artificial intelligence.

The Parliamentary Under-Secretary of State, Department for Science, Innovation and Technology (Viscount Camrose) (Con)

Regulators have existing powers that enable them to regulate AI within their remits and are already actively doing so. For example, the CMA has now published its initial review of foundation models. The AI regulation White Paper set out our adaptive, evidence-based regulatory framework, which allows government to respond to new risks as AI develops. We will be setting out an update on our regulatory approach through the White Paper consultation response shortly.

Lord Browne of Ladyton (Lab)

My Lords, two weeks ago, France, Germany and Italy published a joint paper on AI regulation, executive orders have already committed the US to specific regulatory guardrails, and the debate about the EU’s AI Act is ongoing. By contrast, we appear to have adopted a policy that may generously be described as masterly inactivity. Apart from waiting for Professor Bengio’s report, what steps are the Government taking to give the AI sector and the wider public some idea of the approach the UK will take to mitigate and regulate risk in AI? I hope the Minister can answer this: in the meantime, what is the legal basis for the use of AI in sensitive areas of the public sector?

Viscount Camrose (Con)

I think I would regret a characterisation of AI regulation in this country as non-existent. All regulators and their sponsoring government departments are empowered to act on AI and are actively doing so. They are supported and co-ordinated in this activity by new and existing central AI functions: the central AI risk function, the CDEI, the AI standards hub and others. That is ongoing, and it is an adaptive model which, as far as I am aware, puts us behind no one in regulating AI; as evidence emerges we will adapt it further, which will allow us to maintain the balance of AI safety and innovation. With respect to the noble Lord’s second question, I will happily write to him.

Lord Clement-Jones (LD)

My Lords, the Government have just conducted a whole summit about the risks of AI, so why in the new data protection Bill are they weakening the already limited legal safeguards that currently exist to protect individuals from AI systems making automated decisions about them in ways that could lead to discrimination or disadvantage? Is this not perverse even by this Government’s standards?

Viscount Camrose (Con)

I do not think “perverse” is justified. GDPR Article 22 addresses automated individual decision-making but, as I am sure the noble Lord knows, the DPDI Bill recasts Article 22 as a right to specific safeguards rather than a general prohibition on automated decision-making, so that data subjects must be informed about such decisions and can seek a human review of them. It also defines meaningful human involvement.

Viscount Colville of Culross (CB)

When I asked the Minister in October why deepfakes could not be banned, he replied that he could not see a pathway to do so, as they can be developed anywhere in the world. Under the Online Safety Act, tech companies all over the world are now required not to disseminate content that is harmful to children. Why can the harms of deepfakes not be similarly proscribed?

Viscount Camrose (Con)

I remember the question. It is indeed very important. There are two pieces to preventing deepfakes being presented to British users: one is where they are created and the second is how they are presented to those users. They are created to a great extent overseas, and we can do very little about that. As the noble Viscount said, the Online Safety Act creates a great many barriers to the dissemination and presentation of deepfakes to a British audience.

Baroness Goldie (Con)

My Lords, the MoD published its AI strategy in June 2022. Among other priorities, the MoD aspired to be, on AI, the world’s most effective defence organisation for its size, through the delivery of battle-winning capability and supporting functions and its ability to collaborate and partner with the UK’s allies and AI ecosystems. Can my noble friend the Minister confirm to me that nothing in current discussions will compromise the commendable and critical delivery of that objective?

Viscount Camrose (Con)

I thank my noble friend for that question on the important area of AI usage in defence. As she will recall, AI in defence is principally conducted within the remit of the Ministry of Defence itself. My role has very little oversight of that, but I will take steps with government colleagues to confirm an answer for my noble friend.

Lord Bassam of Brighton (Lab)

My Lords, the Minister referred earlier to new risks. Sadly, the rapid development of AI has given rise to deepfake video and audio of political leaders, most recently the London Mayor, Sadiq Khan. We debated such issues during the passage of the Online Safety Act, but many were left feeling that the challenges that AI poses to our democratic processes were not sufficiently addressed. With a general election on the horizon (who knows when), what steps are the Minister and his ministerial colleagues taking to protect our proud democratic traditions from bad actors and their exploitation of these new technologies? This is urgent.

Viscount Camrose (Con)

I thank the noble Lord for raising this; it is extremely urgent. In my view, few things could be more catastrophic than the loss of faith in our electoral process. In addition to the protections that will be in place through the Online Safety Act, the Government have set up the Defending Democracy Taskforce under the chairmanship of the Minister for Security, with a range of ministerial and official activities around it. That task force will engage closely, both nationally, with Parliament and other groups and stakeholders, and internationally, to learn from allies who are also facing elections over the same period.

Baroness Stuart of Edgbaston (CB)

My Lords, I declare an interest as the First Civil Service Commissioner. If we want to regulate and to introduce legislation, the Government themselves will require a set of skills that they currently may not have. Can the Minister assure the House that we will have within government the skills to regulate artificial intelligence?

Viscount Camrose (Con)

When the then frontier model task force was set up, we had in senior officialdom a total of three years of PhD-level experience in AI safety. I am pleased to say that that number is now 150. We have probably the greatest concentration of AI safety researchers and scientists of any nation working currently for the United Kingdom Government on this crucial issue of AI safety.

Lord Vaizey of Didcot (Con)

My Lords, I congratulate my noble friend the Minister on the recent AI safety summit. It is interesting that the EU is currently debating an AI regulation and tying itself up in knots about whether to regulate large language models or the application of AI. Can the Minister give an indication, first, of which direction the Government are heading, and, secondly, what discussions he has had with our colleagues in Brussels on the future of AI regulation?

Viscount Camrose (Con)

I thank my noble friend for his congratulations with respect to the AI safety summit. We continue to engage internationally, not just with the larger international AI fora but very regularly with our colleagues in the US and the EU, both at ministerial and official level. The eventual landing zone of international interoperable AI regulations needs to be very harmonious between nations; we are pursuing that goal avidly. I may say that we are at this point more closely aligned to the US approach, which closely mirrors our own.

Viscount Stansgate (Lab)

My Lords, in answer to my noble friend Lord Bassam on the Front Bench a moment ago, the Minister referred to the Defending Democracy Taskforce. When you consider that the National Cyber Security Centre, which is part of GCHQ, has recently publicly warned that in the next general election we will be subjected to a great many deepfakes along the lines indicated—we have seen them in action already—will the Minister agree to bring to the House, at an early stage, evidence of what the Defending Democracy Taskforce is doing? There is a sense of urgency here. As everyone knows, there will probably be a general election next year. On behalf of the electorate, we want to know that they will be able to understand what is real and what is not.

Viscount Camrose (Con)

Indeed. I should point out that the NCSC and other cyber actors are also involved in the Defending Democracy Taskforce. I will liaise with the task force to understand what exactly the communications and engagement arrangements are with Parliament and elsewhere. I will take steps to make that happen.