Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL] Debate

Department: Department for Business and Trade


Baroness Freeman of Steventon (CB)

I would like to pick up on three important aspects of this Bill that perhaps set it apart from what we might discuss in Committee on Monday. The first is that it covers decision support tools, not just fully automated decision-making; the second is that it covers tools being considered for procurement, not just those already in use; and the third, and to my mind the most important, is that it at least hints at the need for some evaluation of the efficacy and usefulness of such tools.

Until recently, I was in charge of the governance and communication around the algorithmic online decision support tools that predict outcomes in prostate and breast cancer, which are used extensively around the world to help patients and doctors make shared decisions about treatment options. Because of that, they come under the Medical Devices Regulations, which meant that we had to provide evidence of their efficacy and show that they were the right tools to be used in these decisions.

Decisions about the financial and judicial aspects of people’s lives are equally important, and I do not think we currently have the legislation to help govern these sorts of decision support tools in those circumstances. These tools are incredibly useful, because they help ensure that the right questions are being asked, so that the decision-maker and the tool have as much of the salient information for that decision as possible. They can then give a range of the outcomes that happened to people with those characteristics in the past, often under different circumstances, allowing the decision-maker to play out “what if?” scenarios. This can be helpful in making a decision, but only if the decision-maker knows certain things about the tool. The Bill is quite right that these things need to be known by the procurer before the system is unleashed in a particular scenario.

I mentioned that these tools can help ensure that the right questions are being asked. If someone feels that a tool does not have all the salient information about them, they will naturally and correctly downgrade their trust in the precision of its output. An experienced doctor who knows that the tool has not asked about the frailty of a patient, for instance, will naturally know that there is uncertainty and that they will need to look at the patient in front of them and adjust the tool’s output using their clinical judgment. However, a decision-maker who is not aware that the tool lacks some important piece of information, perhaps because that information cannot easily be quantified or because there was not enough data to include it in the algorithm, needs to be alerted to this major cause of uncertainty. Because the algorithms are built using data on what has happened to people in the past, users need to know how relevant that data is to their situation. Does it include enough people with characteristics similar to theirs? For some longer-term outcomes, the data might necessarily be quite old: does that add further uncertainty to the outputs? Without knowing the basis for the algorithm, people cannot assess how much weight to put on the tool’s results.

These are questions that can be asked of any algorithmic tool supplier, but any procurer or user should be able to ask about the effectiveness of the tool as used in a real-world scenario as well. How accurate is it in every dimension in your scenario, which might be very different from the situation in which it was developed? How do decision-makers respond to outputs? Do they understand its limitations? Do they overtrust it or undertrust it? These are vital questions to ask, and the answers need to be supplied for any form of decision support tool.

This Bill is the only place I have seen it suggested that those sorts of questions about efficacy and applicability, and about user experience such as training, should be addressed, and made transparent, as stages to be completed before procurement as well as during use. I urge that these aspects be considered.