Artificial Intelligence (Regulation) Bill [HL]

2nd reading
Friday 22nd March 2024

Lords Chamber

Lord Clement-Jones (LD)

My Lords, I congratulate the noble Lord, Lord Holmes, on his inspiring introduction and on stimulating such an extraordinarily good and interesting debate.

The excellent House of Lords Library guide to the Bill warns us early on:

“The bill would represent a departure from the UK government’s current approach to the regulation of AI”.

Given the timidity of the Government’s pro-innovation AI White Paper and their response, I would have thought that was very much a “#StepInTheRightDirection”, as the noble Lord, Lord Holmes, might say.

There is clearly a fair wind around the House for the Bill, and I very much hope it progresses and we see the Government adopt it, although I am somewhat pessimistic about that. As we have heard in the debate, there are so many areas where AI is and can potentially be hugely beneficial, despite the rather dystopian narratives that the noble Lord, Lord Ranger, so graphically outlined. However, as many noble Lords have emphasised, it also carries risks: not just those of the existential kind, which the Bletchley Park summit seemed to address, but also others mentioned by noble Lords today, such as misinformation, disinformation and child sexual abuse, as well as the whole area of competition mentioned by the noble Lord, Lord Fairfax, and the noble Baroness, Lady Stowell, namely the power and asymmetry of the big tech AI systems and the danger of regulatory capture.

It is disappointing that, after a long gestation of national AI policy-making, which started so well back in 2017 with the Hall-Pesenti review, contributed to by our own House of Lords Artificial Intelligence Committee, the Government have ended up producing a minimalist approach to AI regulation. I liked the phrase used by the noble Lord, Lord Empey, “lost momentum”, because it certainly feels like that after this period of time.

The UK’s National AI Strategy, a 10-year plan for UK investment in and support of AI, was published in September 2021 and accepted that in the UK we needed to prepare for artificial general intelligence. We needed to establish public trust and trustworthy AI, so often mentioned by noble Lords today. The Government had to set an example in their use of AI and to adopt international standards for AI development and use. So far, so good. Then, in the subsequent AI policy paper, AI Action Plan, published in 2022, the Government set out their emerging proposals for regulating AI, in which they committed to develop

“a pro-innovation national position on governing and regulating AI”,

to be set out in a subsequent governance White Paper. The Government proposed several early cross-sectoral and overarching principles that built on the OECD principles on artificial intelligence: ensuring safety, security, transparency, fairness, accountability and the ability to obtain redress.

Again, that is all good, but the subsequent AI governance White Paper in 2023 opted for a “context-specific approach” that distributes responsibility for embedding ethical principles into the regulation of AI systems across several UK sector regulators without giving them any new regulatory powers. I thought the analysis of this by the noble Lord, Lord Young, was interesting. There seemed to be no appreciation that there were gaps between regulators. That approach was confirmed this February in the response to the White Paper consultation.

Although there is an intention to set up a central body of some kind, there is no stated lead regulator, and the various regulators are expected to interpret and apply the principles in their individual sectors in the expectation that they will somehow join the dots between them. There is no recognition that the different forms of AI are technologies that need a comprehensive cross-sectoral approach to ensure that they are transparent, explainable, accurate and free of bias, whether they are deployed in an existing regulated or an unregulated sector. As noble Lords have mentioned, discussing existential risk is one thing, but then failing to regulate is quite another.

Under the current Data Protection and Digital Information Bill, data subject rights regarding automated decision-making—in practice, by AI systems—are being watered down, while our creatives and the creative industries are up in arms about the lack of support from government in asserting their intellectual property rights in the face of the ingestion of their material by generative AI developers. It was a pleasure to hear what the noble Lord, Lord Freyberg, had to say on that.

For me, the cardinal rules are that business needs clarity, certainty and consistency in the regulatory system if it is to develop and adopt AI systems, and we need regulation to mitigate risk to ensure that we have public trust in AI technology. As the noble Viscount, Lord Chandos, said, regulation is not necessarily the enemy of innovation; it can be a stimulus. That is something that we need to take away from this discussion. I was also very taken with the idea of public trust leaving on horseback.

This is where the Bill of the noble Lord, Lord Holmes, is an important stake in the ground, as he has described. It provides for a central AI authority with a duty to look for gaps in regulation; it sets out extremely well the safety and ethical principles to be followed; it provides for regulatory sandboxes, which we should not forget are an innovation invented in the UK; and it provides for AI responsible officers and for public engagement. Importantly, it builds in a duty of transparency regarding data and IP-protected material where they are used for training purposes, and for labelling AI-generated material, as the noble Baroness, Lady Stowell, and her committee have advocated. By itself, that would be a major step forward, so, as the noble Lord knows, we on these Benches wish the Bill very well, as do all those with an interest in protecting intellectual property, as we heard the other day at the round table that he convened.

However, in my view what is needed at the end of the day is the approach that the interim report of the Science, Innovation and Technology Committee recommended towards the end of last year in its inquiry into AI governance: a combination of risk-based cross-sectoral regulation and specific regulation in sectors such as financial services, applying to both developers and adopters, underpinned by common trustworthy standards of risk assessment, audit and monitoring. That should also provide recourse and redress, as the Ada Lovelace Institute, which has done so much work in the area, asserts, and as the noble Lord, Lord Kirkhope, mentioned.

That should include the private sector, where there is no effective regulator for the workplace, as the noble Lord, Lord Davies, mentioned, and the public sector, where there is no central or local government compliance mechanism; no transparency yet in the form of a public register of the use of automated decision-making, despite the promised adoption of the algorithmic transparency recording standard; and no recognition by the Government that explicit legislation and/or regulation is needed for intrusive AI technologies used in the public sector, such as live facial recognition and other biometric capture. Then, of course, we need to meet the IP challenge. We need to introduce personality rights to protect our artists, writers and performers. We need the labelling of AI-generated material alongside the kinds of transparency duties contained in the noble Lord’s Bill.

Then there is another challenge, which is more international. This was mentioned by the noble Lords, Lord Kirkhope and Lord Young, the noble and learned Lord, Lord Thomas of Cwmgiedd, and the noble Earl, Lord Erroll. We have world-beating AI researchers and developers. How can we ensure that, despite differing regulatory regimes—for instance, between ourselves and the EU or the US—developers are able to commercialise their products on a global basis and adopters can have the necessary confidence that the AI product meets ethical standards?

The answer, in my view, lies in international agreement on common standards, such as those for risk and impact assessment, testing, audit, ethical design for AI systems and consumer assurance, which incorporate what have become commonly accepted international AI ethics principles. A harmonised approach to standards would help provide both the certainty that business needs to develop and invest in the UK more readily, irrespective of the level of obligation to adopt those standards in different jurisdictions, and the necessary public trust. In this respect, the UK has the opportunity to play a much more positive role through the Alan Turing Institute’s AI Standards Hub and the British Standards Institution. The OECD.AI group of experts is heavily involved in a project to find common ground between the various standards.

We need a combination of proportionate but effective regulation in the UK and the development of international standards, so, in the words of the noble Lord, Lord Holmes, why are we not legislating? His Bill is a really good start; let us build on it.