Technology Rules: The Advent of New Technologies in the Justice System (Justice and Home Affairs Committee Report) Debate

Department: Home Office

Lord Hope of Craighead Excerpts
Monday 28th November 2022

Grand Committee
Lord Hope of Craighead (CB)

My Lords, I congratulate the noble Baroness, Lady Hamwee, on securing this debate. As another non-member of the committee, I join the previous speaker in congratulating her and all members of the committee on such an excellent and informative report. I hope that when the Minister replies, he will be able to remove at least some of the evident disappointment which the noble Baroness felt on reading the Government’s response.

Before I go into any detail, I should explain that my interest in this subject is directed to the use of AI in the courts and the challenges that it faces. However, I confess that I have no technical expertise and have had very little contact with the courts’ use of AI at first hand; nor did I have the advantage the committee members had of listening to the evidence, so I start with a definite disadvantage. I come from a generation which is unable to use its thumbs to operate the mobile phone. We did not have these things when we were at school so I have to jab it, as others of my generation do, with my forefingers. Things have been moving so fast that even the eight years since I retired from my judicial career have seen changes that were barely in prospect when I was still sitting as a judge.

I have struggled with the word “algorithm”, for example—not a word that I was ever accustomed to using. When I looked it up in my copy of the third edition of the Shorter Oxford English Dictionary, which was published in 1964 and which I purchased one year later when I was embarking on my legal career, I was told that “algorithm” is an erroneous version of “algorism”, which is an Arabic system of numbering. No other definition was offered, so I am grateful to the committee for telling me in box 1 of the report what in today’s language it really means. That definition should perhaps have made it clear that the instructions are given by means of numbers, which I believe is the way that AI operates. We owe all this to the Arabic system, which is why the one word derives from the other.

Even so, I struggle to understand how the system works. Where do the instructions come from, and are those who give them the right people? How do we know that the answers it produces are the right ones? Is the system open to cross-examination to test these issues? If so, how can this be done? I share the committee’s concern about where all this is leading. So far as the courts are concerned, AI comes especially into play in two ways. The first is in the provision of evidence in a criminal trial. The other is in its use in dispute resolution in the civil courts. Each of them presents very real challenges.

The report, for the most part, is directed at the use of advanced technologies by police forces. The courts become involved when evidence that has been gathered by this means is led at a criminal trial to secure a conviction. Some years ago—in fact, quite a number of years ago—I presided in a case before the criminal appeal court in which the appellant had been convicted on the basis of a primitive system of facial recognition technology. He insisted that it was a mistake and that its use was unfair because, due to problems with legal aid, he had no access to expert evidence to challenge it. It seemed to us that that amounted to a miscarriage of justice, so we set aside the conviction so that he could face trial again with expert assistance.

In the retrial, the jury—unfortunately, from his point of view—reached the same conclusion as the first jury on the recognition evidence and once again he was convicted. My point is that fairness and transparency, which the noble Baroness, Lady Primarolo, emphasised in her impressive speech, should be at the heart of any criminal trial. That requires that evidence of this kind should be open to challenge. As it happens, there was no suggestion that the evidence in that case had been manipulated; it was just said to be a mistake. The reference to the possibility of manipulation must give rise to real concerns, as shown by the very important selection of paragraphs 23 to 26 in the report, under the heading,

“The right to a fair trial”.

I support the recommendations that are referred to as numbers 1, 2 and 4 in the Government’s response. They are all designed to ensure the safe and ethical use of AI. The Government say they are confident that existing structures and organisations create a sufficient network of checks and balances, but the evidence that is narrated in this report suggests that that confidence may be misplaced. More safeguards than those that are available may be needed in this fast-moving area. I endorse the point made by the noble Lord, Lord Blunkett, which the noble Baroness mentioned: it is far better to do this now than later, when it would be too late and things would have moved on beyond recall.

As for AI’s use in dispute resolution in the civil courts, I pay tribute to the work of the Library and its very helpful briefing on the report. It contains a link to an article referred to by the noble Lord, Lord Hunt of Wirral, headed,

“Technology to become embedded in UK justice system by 2040, senior judge suggests”.
That contained a link to a speech that was given online in March this year by the Master of the Rolls, Sir Geoffrey Vos, about the future for dispute resolution in what he referred to as a “brave new world”.

If one wants to be enlightened about the huge advantages that AI can offer, they can be seen in Sir Geoffrey’s speech. He is an enthusiastic supporter, promoting AI’s use in the civil courts as fast as possible. He focuses particularly on the advantage of speed and simplicity which gathering evidence in this way can produce. I am certainly not one of those who decries the use of AI; it is all a question of how it can best be operated.

According to Sir Geoffrey, factual disputes will themselves become a thing of the past, as so much of what we do will be indelibly recorded by AI. He referred, among other things, to number plate recognition. You cannot really dispute where your car has appeared, because AI no longer leaves any room for dispute about that. He says that we are more and more likely to find this a feature of dispute resolution in the civil courts.

He went on to say that some decisions, admittedly minor decisions, such as those about time limits and other procedural aspects, could be made by this system with no human intervention. Proposals for dispute resolution themselves would be “driven by AI”, as he puts it.

He acknowledged that public confidence is important, and that the public would need to understand what had been decided by a machine and what had not. He also said that, ultimately, there must be the ability to question an AI-driven decision before a human judge. That begs the question whether and how that can be done, and how far we can trust algorithms that are not open to being tested in that way.

I was encouraged by the statement in paragraph 32 of the Government’s response that they will work with the justice system with a view to

“better long term research and evaluation of the different circumstances in which predictive algorithms”

are described and used to support future decision-making. Of course, there is much that the courts themselves can do to control and regulate their use, but the extent of the ability of litigants to question and interrogate the algorithms is not open to control or guidance by the courts. That is why the recommendation in paragraph 155 of the report, which is dealt with in paragraph 18 of the Government’s response, is so important. It is about the need for a requirement on producers to embed explainability within the tools. If that requirement is there, one may be able to open up a system of cross-examination to find out what is going on and see whether what has been produced can be relied on. I fear that the Government’s response in paragraph 35 hardly does justice to this crucial issue.

I hope that when he comes to reply, the Minister will be able to reassure the noble Baroness that the Government will look again at the evidence and recommendations in the committee’s report, to see whether more can be done to regulate and control the way that AI is imposing itself on our lives. I suggest that if the Minister and his team have not already done so, they might like to read Sir Geoffrey’s speech, because it will show the advantages and concerns which surround this whole issue.