Algorithms: Public Sector Decision-making

Wednesday 12th February 2020


Lords Chamber
Lord Browne of Ladyton (Lab)

My Lords, it is a pleasure to follow the noble Lord. At the heart of his speech he made a point that I violently agree with: the pace of science and technology is utterly outstripping our ability to develop public policy to engage with it. We are constantly catching up. This is not a specific point for this debate, but it is a general conclusion that I have come to. We need to reform the way in which we make public policy to allow flexibility, within the bounds of what is permitted, for advances to be made while remaining within a regulated framework. But perhaps that is a more general debate for another day.

I am not a graduate of the Artificial Intelligence Select Committee. I wish I had been a member of it. When its significant and widely acclaimed report was debated in your Lordships’ House, I put my name down to speak. I found myself in a very small minority of speakers who had not been members of the committee, but I did it out of interest rather than knowledge. It was an extraordinary experience. I learned an immense amount in a very short time in preparing a speech that I hoped would hold its own among all the people who had spent so much time involved in the subject. I did the same when I saw that the noble Lord, Lord Clement-Jones, had secured this debate, because I knew I was guaranteed to learn something. I did, and I thank him for the consistent tutoring I have received by following his contributions in your Lordships’ House. I am extremely grateful to him for securing this debate, as the House should be.

I was honestly stunned to see the extensive use of artificial intelligence technology in the public services. There is no point in my trying to compete with the list of examples the noble Lord gave in opening the debate so well. It is being used to automate decision processes and to make recommendations and predictions in support of human decisions—or, more likely in many cases, human decisions are required in support of its decisions. A remarkable number of these decisions rely on potentially controversial data usage.

That leads me to my first question for the Minister. To what extent are the Government—who are responsible for all of this public service in accountability terms—aware of the extent to which potentially controversial value judgments are being made by machines? More importantly, to what degree are they certain that there is human oversight of these decisions? Another element of this is transparency, which I will return to in a moment, but in the actual decision-making process, we should not allow automated value judgments where there is no human oversight. We should insist that the humans involved have at least a minimum understanding of what in the data has prompted that value judgment.

I constantly read examples of decisions being made by artificial intelligence and machine learning where the professionals who are following them are unable to explain them to the people whose lives are being affected by them. When they are asked the second question, “Why?”, they are unable to give an explanation, because the machine can see something in the data which they cannot, and they are at a loss to understand what it is. In a medical setting, there are many such black boxes in the decisions that are made, including in the use of drugs. Perhaps we should rely on the outcomes rather than always insisting on understanding. We would probably give people very few drugs if we required to know exactly how they all worked.

So I am not saying that all these decisions are bad, but there should be an overarching rule about these controversial issues. It is the Government’s duty at least to know how many of these decisions are being made. I want to hear an assurance that the Government are aware of where this is happening and are happy about the balanced judgments that are being made, because they will have to be made.

I push unashamedly for increased openness, transparency and accountability in algorithmic decision-making. That is the essence of the speech that opened this debate, and I agree 100% with all noble Lords who made speeches to that effect. I draw on those speeches and ask the Government to ensure that where algorithms are used, this openness and transparency are required and not just permitted, because, unless they are required, people will not know why decisions about them have been made. Most of those people have no idea how to ask for the openness that they should expect.