Artificial Intelligence (Regulation) Bill [HL] Debate
Baroness Kidron (Crossbench - Life peer)
(8 months ago)
Lords Chamber

My Lords, I too congratulate the noble Lord, Lord Holmes, on his wonderful speech. I declare my interests as an adviser to the Oxford Institute for Ethics in AI and the UN Secretary-General’s AI Advisory Body.
When I read the Bill, I asked myself three questions. Do we need an AI regulation Bill? Is this the Bill we need? What happens if we do not have a Bill? It is arguable that it would be better to deal with AI sector by sector—in education, the delivery of public services, defence, media, justice and so on—but that would require an enormous legislative push. Like others, I note that we are in the middle of a legislative push, with digital markets legislation, media legislation, data protection legislation and online harms legislation, all of which resolutely ignore both existing and future risk.
The taxpayer has been asked to make a £100 million investment in launching the world’s first AI safety institute, but as the Ada Lovelace Institute says:
“We are concerned that the Government’s approach to AI regulation is ‘all eyes, no hands’”,
with plenty of “horizon scanning” but no
“powers and resources to prevent those risks or even to react to them effectively after the fact”.
So yes, we need an AI regulation Bill.
Is this the Bill we need? Perhaps I should say to the House that I am a fan of the Bill. It covers testing and sandboxes, it considers what the public want, and it deals with a very important specific issue that I have raised a number of times in the House, in the form of creating AI-responsible officers. On that point, the CEO of the International Association of Privacy Professionals came to see me recently and made an enormously compelling case that, globally, we need hundreds of thousands of AI professionals, as the systems become smarter and more ubiquitous, and that those professionals will need standards and norms within which to work. He also made the case that the UK would be very well-placed to create those professionals at scale.
I have a couple of additions, unless the Minister is going to surprise us by announcing that he will take the Bill on in full. Under Clause 2, which sets out regulatory principles, I would like to see consideration of children’s rights and development needs; employment rights, concerning both management by AI and job displacement; a public interest case; and more clarity that material that is an offence—such as creating viruses, CSAM or inciting violence—is also an offence whether created by AI or not, with specific responsibilities that accrue to users, developers and distributors.
The Stanford Internet Observatory recently identified hundreds of known images of child sexual abuse material in an open dataset used to train popular AI text-to-image models, saying:
“It is challenging to clean or stop the distribution of publicly distributed datasets as it has been widely disseminated. Future datasets could use freely available detection tools to prevent the collection of known CSAM”.
The report illustrates that it was entirely possible to remove such images, but that those who compiled the dataset did not bother, and now those images are proliferating at scale.
We need to have rules upon which AI is developed. It is poised to transform healthcare, both diagnosis and treatment. It will take the weight out of some of the public services we can no longer afford, and it will release money to make life better for many. However, it brings forward a range of dangers, from fake images to lethal autonomous weapons and deliberate pandemics. AI is not a case of good or bad; it is a question of uses and abuses.
I recently hosted Geoffrey Hinton, whom many will know as the “godfather of AI”. His address to parliamentarians was as chilling as it was compelling, and he put timescales on the outcomes that leave no time to wait. I will not stray into his points about the nature of human intelligence, but he was utterly clear that the concentration of power, the asymmetry of benefit and the control over resources—energy, water and hardware—needed to run these powerful systems would be, if left until later, in so few hands that they, and not we, would be doing the rule setting.
My final question is: if we have no AI Bill, can the Government please consider putting the content of the AI regulation Bill into the data Bill currently passing through Parliament and deal with it in that way?