Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL] Debate
Viscount Camrose (Conservative - Excepted Hereditary)
Lords Chamber

My Lords, of course I must start by joining others in thanking the noble Lord, Lord Clement-Jones, for bringing forward this timely and important Bill, with whose aims we on these Benches strongly agree. As public bodies take ever more advantage of new technological possibilities, surely nothing is more critical than ensuring that they do so in a way that adheres to principles of fairness, transparency and accountability.
It was also particularly helpful to hear from the noble Lord the wide range of very specific examples of the problems caused by what I will call AADM for brevity. I felt that they really brought it to life. I also take on board the point made by the noble Lord, Lord Knight, about hiring and firing by AADM. The way this is done is incredibly damaging and, frankly, if I may say so, too often simply boneheaded.
The point made by the noble Baroness, Lady Lane-Fox, about procurement is absolutely well founded: I could not agree more strongly that this is a crucial area for improvement. That point was well supported by the noble Baroness, Lady Freeman of Steventon, as well. I thought that the argument, powerful as ever, from the noble Lord, Lord Tarassenko, for sovereign AI capabilities was also particularly useful, and I hope that the Government will consider how to take that forward. Finally, I really welcomed the point made so eloquently by the noble Baroness, Lady Hamwee, in reminding us that just the existence of a human in the loop is a completely insufficient condition for making these things effective.
We strongly support the goal of this Bill: to ensure trustworthy AI that deserves public confidence, fosters innovation and contributes to economic growth. However, the approach proposed raises, for me anyway, several concerns that I worry could hinder its effectiveness.
First, there is a problem of definition. Clause 2(1) refers to “any algorithmic … systems” but, of course, “algorithmic” can have a very broad definition: it can encompass any process, even processes that are unrelated to digital or computational systems. While the exemptions in subsections (2) and (4) are noted, did the noble Lord give consideration to adopting or incorporating the AI White Paper’s definition, based on autonomy and adaptiveness, or perhaps just the definition of AADM used in the DUA Bill, which we will no doubt be discussing much more on Monday? We feel that improving the definition would provide some clarity and better align the scope with the Bill’s purpose.
I also worry that the Bill fails to address the rapid pace of AI development. For instance, I worry that requiring ongoing assessments for every update under Clause 3(3) is impractical, given that systems often change daily. This obligation should be restricted to significant changes, thereby ensuring that resources are spent where they matter most.
I worry, too, about the administrative burden that the Bill may create. For example, Clause 2(1) demands a detailed assessment even before a system is purchased. I feel that that is unrealistic, particularly with pilot projects that may operate in a controlled way but in a production environment, not in a test environment as described in Clause 2(2)(b). Would that potentially risk stifling exploration and innovation, and, indeed, slowing procurement within the public sector?
Another area of concern is communication. It is so important that AI gains public trust and that people come to understand the systems and the safeguards in place around them. I feel that the Bill should place greater emphasis on explaining decisions to the general public in ways that they can understand rapidly, so that we can ensure that transparency is not only achieved but perceived.
Finally, the Bill is very prescriptive in nature, and I worry that such prescriptiveness ends up being ineffective. Would it be a more effective approach, I wonder, to require public bodies to have due regard to the five principles of AI outlined in the White Paper, allowing them the flexibility to determine how best to meet those standards, but in ways that take account of the wildly differing needs, approaches and staffing of the public bodies themselves? Tools such as the ATRS could obviously be made available to assist, but I feel that public bodies should have the agency to find the most effective solutions for their own circumstances.
Let me finish with three questions for the Minister. First, given the rapid pace of tech change, what consideration will be given to ensuring that public authorities can remain agile and responsive, while continuing to meet the Bill’s requirements? Secondly, the five principles of AI set out in the White Paper by the previous Government offer a strong foundation for guiding public bodies. Will the Minister consider whether allowing flexibility in how these principles are observed might achieve the Bill’s goals, while reducing the administrative burdens and encouraging innovation? Thirdly, what measures will be considered to build public trust in AI systems, ensuring that the public understand both the decisions made and the safeguards in place around them?