Algorithms: Public Sector Decision-making Debate
Lord Stunell (Liberal Democrat - Life peer)
Lords Chamber

My Lords, it is a pleasure to contribute to this debate. Unlike many noble Lords who have spoken, I am not a member of the Select Committee. However, I am a member of the Committee on Standards in Public Life. On Monday, it published its report, Artificial Intelligence and Public Standards. The committee is independent of government. I commend the report to the noble Lord, Lord Browne; he would find many of the questions he posed formulated in it, with recommendations on what should be done next.
The implications of algorithmic decision-making in the public sector for public standards, the committee’s area of oversight, are challenging. We found clear problems in maintaining the Nolan principles of openness, accountability and objectivity when AI is used to deliver public services. The committee, the Law Society and the Bureau of Investigative Journalism have all concluded that it is difficult to find out the extent of AI use in the public sector. There is a key role for the Government, and I hope the Minister picks this point up, in facilitating greater transparency in the use of algorithmic decision-making in the public sector.
The problem outlined by the noble Lord, Lord Browne, and others is what happens when the computer says no. There is a strong temptation for the person operating the computer to say, “The computer made me do it.” So how do decision-making and accountability survive when artificial intelligence is delivering the outcome? The report of the Committee on Standards in Public Life makes it clear that public officials must retain responsibility for any final decision and that senior leaders must be prepared to be held accountable for algorithmic systems. It should never be acceptable to say, “The computer says no and that is it.” There must always be accountability and, if necessary, an appeals system.
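To make that safeguard concrete, here is a minimal sketch of a human-in-the-loop rule that keeps a named official responsible for any adverse automated outcome. Everything here is illustrative: the function names, the 0.8 confidence threshold and the “senior caseworker” reviewer are invented for this example, not drawn from the report.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str       # e.g. "approve", "deny", or a referral
    confidence: float  # model confidence in [0, 1]
    rationale: str     # explanation available to the applicant


def decide(model_outcome: str, confidence: float, reviewer: str) -> Decision:
    """Route automated outcomes through a named, accountable official."""
    # Any denial, and any low-confidence outcome, is referred to a person
    # rather than issued automatically: "the computer says no" is never final.
    if model_outcome == "deny" or confidence < 0.8:
        return Decision(
            outcome=f"referred to {reviewer}",
            confidence=confidence,
            rationale="Adverse or uncertain automated outcome; human review "
                      "and a route of appeal are required.",
        )
    return Decision(outcome=model_outcome, confidence=confidence,
                    rationale="Automated approval; appeal route remains open.")


# An adverse automated outcome never leaves the system unreviewed.
print(decide("deny", 0.95, reviewer="senior caseworker"))
```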
In taking evidence, the committee also discovered that some commercially developed AI systems cannot give explanations for their decisions; they are black-box systems. However, we also found that significant progress on explainability can be made if the public sector, which purchases those systems from private providers, uses its market power to require it.
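As a hedged illustration of what an “explainable by design” procurement requirement could mean in practice, the sketch below fits a transparent linear model whose weights can be read out as reasons for each decision. The feature names and the training data are entirely invented; this is one possible approach, not the committee’s prescription.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical screening features and past decisions (synthetic data).
features = ["years_experience", "qualification_level", "test_score"]
X = np.array([[2, 1, 55], [7, 2, 80], [4, 3, 70],
              [1, 1, 40], [9, 3, 90], [3, 2, 60]])
y = np.array([0, 1, 1, 0, 1, 0])  # past outcomes the model learns from

model = LogisticRegression(max_iter=1000).fit(X, y)

# A linear model's coefficients are human-readable reasons that a procuring
# department could require a supplier to disclose, unlike a black box.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```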
Several previous speakers have mentioned the problems of data bias, which is a serious concern; certainly, our committee saw a number of worrying illustrations of it. It is worth understanding that artificial intelligence develops by looking at the data it is presented with. A system learns to beat everyone in the world at Go by examining the games that have been played and working out what the winning combinations are.
The noble Lord, Lord Taylor, made an important point about facial recognition systems. They are very much better at correctly recognising white faces than black faces, which to the system all look alike, because the system is simply storing the information it has been given and applying it to the future.

The example that came to the attention of the committee was job applications. If you give 100 job applications to an AI system and say, “Can you choose suitable candidates for us to draw up an interview list?”, it will take account of whom you previously appointed. If it works out that you normally appoint men, the shortlist or long list the system delivers will consist mostly of men, because it recognises that if it puts women forward they are not likely to be successful. So you need not only an absence of bias but a clear understanding of what your data will do to the system, and that means knowledge and accountability. That pertains to the point made by my noble friend Lord Addington about people with vulnerabilities: people who are, let us say, outside the norm but still highly employable, yet do not happen to fit the pattern you have.
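A minimal sketch of that mechanism follows, on entirely synthetic data: a shortlisting model trained on past appointments in which men were systematically favoured duly learns gender as a predictor of “success”, and downranks a woman with identical qualifications. The model choice, variable names and numbers are all assumptions made for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_male = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
# Historical decisions: skill mattered, but men were systematically favoured.
hired = ((skill + 1.5 * is_male + rng.normal(0, 0.5, n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([is_male, skill]), hired)

# Two candidates identical in skill who differ only in gender: the model,
# having learned from the biased history, prefers the man.
woman, man = [[0, 1.0]], [[1, 1.0]]
print("P(shortlist | woman):", model.predict_proba(woman)[0, 1].round(2))
print("P(shortlist | man):  ", model.predict_proba(man)[0, 1].round(2))
```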
So one of our key recommendations is new guidance on how the Equality Act applies to algorithmic systems. I am pleased to say that the Equality and Human Rights Commission has offered direct support for our committee’s recommendation. I hope to hear from the Minister that that guidance is in her in-tray for completion.
The question was asked: how will anyone regulate this? Our committee’s solution is to place that responsibility on all the current regulatory bodies. We did not think it would work well to set up a separate, independent AI regulator that tried to sit above the other regulators; the key is sensitising, informing and equipping the existing regulators in each sector to deliver. We also see scope for oversight of the whole process, and we very much support the view that the Centre for Data Ethics and Innovation should be the body that provides it. There is plenty of scope for more debate, but I hope the Minister will grab hold of the recommendations we have made and push forward with implementing them.