Artificial Intelligence: Regulation Debate
Lord Clement-Jones (Liberal Democrat - Life peer)
Department for Science, Innovation & Technology
Lords Chamber

I think I would regret a characterisation of AI regulation in this country as non-existent. All regulators and their sponsoring government departments are empowered to act on AI and are actively doing so. They are supported and co-ordinated in this activity by new and existing central AI functions: the central AI risk function, the CDEI, the AI standards hub and others. That work is ongoing. It is an adaptive model which, as far as I am aware, puts us behind no one in regulating AI, and as evidence emerges we will adapt it further, which will allow us to maintain the balance between AI safety and innovation. With respect to the noble Lord's second question, I will happily write to him.
My Lords, the Government have just conducted a whole summit about the risks of AI, so why in the new data protection Bill are they weakening the already limited legal safeguards that currently exist to protect individuals from AI systems making automated decisions about them in ways that could lead to discrimination or disadvantage? Is this not perverse even by this Government’s standards?
I do not think "perverse" is justified. GDPR Article 22 addresses automated individual decision-making, but, as I am sure the noble Lord knows, the DPDI Bill recasts Article 22 as a right to specific safeguards rather than a general prohibition on automated decision-making, so that data subjects have to be informed about it and can seek a human review of decisions. It also defines what counts as meaningful human involvement.
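[Editorial note: the distinction the Minister draws, between prohibiting solely automated decisions and permitting them subject to specific safeguards, can be illustrated with a minimal sketch. The Python below is purely hypothetical; the names (DecisionService, request_human_review and so on) are invented for illustration and are not drawn from the DPDI Bill, GDPR guidance, or any real system. It simply shows the two safeguards mentioned in the reply: the data subject is informed that a decision was made by automated means, and can later obtain a human review.]

# Illustrative sketch only: hypothetical names, not any real system or legal text.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                      # e.g. "refused"
    explanation: str                  # information given to the data subject
    human_reviewed: bool = False
    review_outcome: Optional[str] = None


class DecisionService:
    """Toy pipeline showing safeguard hooks rather than a blanket prohibition."""

    def __init__(self) -> None:
        self.decisions: dict[str, AutomatedDecision] = {}

    def decide(self, subject_id: str, score: float) -> AutomatedDecision:
        # The decision itself is made solely by automated means.
        outcome = "approved" if score >= 0.5 else "refused"
        decision = AutomatedDecision(
            subject_id=subject_id,
            outcome=outcome,
            explanation="Decision made by automated means; you may request a human review.",
        )
        self.decisions[subject_id] = decision
        self.notify_subject(decision)          # safeguard: the subject is informed
        return decision

    def notify_subject(self, decision: AutomatedDecision) -> None:
        print(f"Notice to {decision.subject_id}: {decision.outcome}. {decision.explanation}")

    def request_human_review(self, subject_id: str, reviewer_decision: str) -> AutomatedDecision:
        # Safeguard: the subject can obtain human involvement after the fact.
        decision = self.decisions[subject_id]
        decision.human_reviewed = True
        decision.review_outcome = reviewer_decision
        return decision


if __name__ == "__main__":
    service = DecisionService()
    service.decide("subject-42", score=0.31)
    service.request_human_review("subject-42", reviewer_decision="approved on appeal")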