Wednesday 12th February 2020

Lords Chamber
Question for Short Debate
18:55
Asked by
Lord Clement-Jones

To ask Her Majesty’s Government what steps they have taken to assess the full implications of decision-making and prediction by algorithm in the public sector.

Lord Clement-Jones (LD)

My Lords, first, a big thank you to all noble Lords who are taking part in the debate this evening.

Over the past few years we have seen a substantial increase in the adoption of algorithmic decision-making—ADM—and prediction across central and local government. An investigation by the Guardian last year showed that some 140 of 408 councils in the UK are using privately developed algorithmic “risk assessment” tools, particularly to determine eligibility for benefits and to calculate entitlements. Data Justice Lab research in late 2018 showed that 53 out of 96 local authorities and about a quarter of police authorities are now using algorithms for prediction, risk assessment and assistance in decision-making. In particular, we have the Harm Assessment Risk Tool—HART—system used by Durham police to predict reoffending, which Big Brother Watch showed to have serious flaws: its use of profiling data introduces bias, discrimination and dubious predictions.

Central government use is more opaque, but HMRC, the Ministry of Justice and the DWP are the highest spenders on digital, data and algorithmic services. A key example of ADM use in central government is the DWP’s much-criticised universal credit system, which was designed to be digital by default from the beginning. The Child Poverty Action Group, in its study, Computer Says “No!”, shows that those accessing their online account are not being given adequate explanation as to how their entitlement is calculated.

The UN special rapporteur on extreme poverty and human rights, Philip Alston, looked at our universal credit system a year ago and said in a statement afterwards:

“Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.”


These issues have been highlighted by Liberty and Big Brother Watch in particular.

Even when not using ADM solely, the impact of an automated decision-making system across an entire population can be immense in terms of potential discrimination, breach of privacy, access to justice and other rights. Last March, the Committee on Standards in Public Life decided to carry out a review of AI in the public sector to understand its implications for the Nolan principles and to examine whether government policy is up to the task of upholding standards as AI is rolled out across our public services. The committee chair, the noble Lord, Lord Evans of Weardale, said on publishing the report this week:

“Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector. Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”


It found that despite the GDPR, the data ethics framework, the OECD principles and the guidelines for using artificial intelligence in the public sector, the Nolan principles of openness, accountability and objectivity are not embedded in AI governance in the public sector, and should be.

The committee’s report presents a number of recommendations to mitigate these risks, including greater transparency by public bodies in the use of algorithms, new guidance to ensure that algorithmic decision-making abides by equalities law, the creation of a single coherent regulatory framework to govern this area, the formation of a body to advise existing regulators on relevant issues, and proper routes of redress for citizens who feel decisions are unfair.

It was clear from the evidence taken by our own AI Select Committee that Article 22 of the GDPR, which deals with automated individual decision-making, including profiling, does not provide sufficient protection for those subject to ADM. It contains a right to explanation provision when an individual has been subject to fully automated decision-making, but few highly significant decisions are fully automated. Often it is used as a decision support; for example, in detecting child abuse. The law should also cover systems where AI is only part of the final decision.

The May 2018 Science and Technology Select Committee report, Algorithms in Decision-Making, made extensive recommendations. It urged the adoption of a legally enforceable right to explanation that would allow citizens to find out how machine learning programs reach decisions that affect them and potentially challenge the results. It also called for algorithms to be added to a ministerial brief and for departments to publicly declare where and how they use them. Subsequently, a report by the Law Society published last June about the use of AI in the criminal justice system expressed concern and recommended measures for oversight, registration and mitigation of risks in the justice system.

Last year, Ministers commissioned the AI adoption review, which was designed to assess the ways that artificial intelligence could be deployed across Whitehall and the wider public sector. Yet the Government are now blocking the full publication of the report and have provided only a heavily redacted version. How, if at all, does the Government’s adoption strategy fit with the guidance for using artificial intelligence in the public sector, published last June by the Government Digital Service and the Office for Artificial Intelligence, and with the further guidance on AI procurement, derived from work by the World Economic Forum, published in October?

We need much greater transparency about current deployment, plans for adoption and compliance mechanisms. In its report last year entitled Decision-making in the Age of the Algorithm, NESTA set out a comprehensive set of principles to inform human/machine interaction for public sector use of algorithmic decision-making which go well beyond the government guidelines. Is it not high time that a Minister was appointed, as was also recommended by the Commons Science and Technology Select Committee, with responsibility for making sure that the Nolan standards are observed for algorithm use in local authorities and the public sector and that those standards are set in terms of design, mandatory bias testing and audit, together with a register for algorithmic systems in use—

Lord Deben (Con)

Could the noble Lord extend what he has just asked for by saying that the Minister should also cover those areas where algorithms defeat government policy and the laws of Parliament? I point by way of example to how dating agencies make sure that Hindus of different castes are never brought together. The algorithms make sure that that does not happen. That is wholly contrary to the rules and regulations we have and it is rather important.

Lord Clement-Jones

My Lords, I take entirely the noble Lord’s point, but there is a big distinction between what the Government can do about the use of algorithms in the public sector and what the private sector should be regulated by. I think that he is calling for regulation in that respect.

All the aspects that I have mentioned are particularly important for algorithms used by the police and the criminal justice system in decision-making processes. The Centre for Data Ethics and Innovation should have an important advisory role in all of this. If we do not act, the Legal Education Foundation advises that we will find ourselves in the same position as the Netherlands, where there was a recent decision that an algorithmic risk assessment tool called SyRI, which was used to detect welfare fraud, breached Article 8 of the European Convention on Human Rights.

There is a problem with double standards here. Government behaviour is in stark contrast to the approach of the ICO’s draft guidance, Explaining Decisions Made with AI, which may meet the point just made by the noble Lord. Last March, when I asked an Oral Question on this subject, the noble Lord, Lord Ashton of Hyde, ended by saying

“Work is going on, but I take the noble Lord’s point that it has to be looked at fairly urgently”.—[Official Report, 14/3/19; col. 1132.]


Where is that urgency? What are we waiting for? Who has to make a decision to act? Where does the accountability lie for getting this right?

19:04
Baroness Rock (Con)

My Lords, I congratulate the noble Lord, Lord Clement-Jones, on securing this important debate. It is a topic that I know is close to his heart. I had the privilege of serving on the Select Committee on Artificial Intelligence which he so elegantly and eloquently chaired.

Algorithmic decision-making has enormous potential benefits in the public sector and it is therefore good that we are seeing growing efforts to make use of this technology. Indeed, only last month, research was published showing how AI may be useful in making screening for breast cancer more efficient. The health sector has many such examples, but algorithmic decision-making is showing potential in other sectors too.

However, the growing use of public sector algorithmic decision-making also brings challenges. When an algorithm is being used to support a decision, it can be unclear who is accountable for the outcome. Who is the front-line decision-maker? Is it the administrator in charge of the introduction of the AI tool, or perhaps the private sector developer? We must make sure that the lines of accountability are always clear. With more complex algorithmic decision-making, it can be unclear why a decision has been made. Indeed, even the public body making the decision may be unable to interrogate the algorithm being used to support it. This threatens to undermine good administration, procedural justice and the right of individuals to redress and challenge. Finally, using past data to drive recommendations and decisions can lead to the replication, entrenchment and even the exacerbation of unfair bias in decision-making against particular groups.

What is at stake? Algorithmic decision-making is a general-purpose technology which can be used in almost every sector. The challenges it brings are diverse and the stakes involved can be very high indeed. At an individual level, algorithms may be used to make decisions about medical diagnosis and treatment, criminal justice, benefits entitlement or immigration. No less important, algorithmic decision-making in the public sector can make a difference to resource allocation and policy decisions, with widespread impacts across society.

I declare an interest as a board member of the Centre for Data Ethics and Innovation. We have spent the last year conducting an in-depth review into the specific issue of bias in algorithmic decision-making. We have looked at this issue in policing and in local government, working with civil society, central government, local authorities and police forces in England and Wales. We found that there is indeed the potential for bias to creep in where algorithmic decision-making is introduced, but we also found a great deal of willingness to identify and address these issues.

The assessment of consequences starts with the public bodies using algorithmic decision-making. They want to use new technology responsibly, but they need the tools and frameworks to do so. The centre developed specific guidance for police forces to help them trial data analytics in a way that considers the potential for bias—as well as other risks—from the outset. The centre is now working with individual forces and the Home Office to refine and trial this guidance, and will be making broader recommendations to the Government at the end of March.

However, self-assessment tools and a focus on algorithmic bias are only part of the answer. There is currently insufficient transparency and centralised knowledge about where high-stakes algorithmic decision-making is taking place across the public sector. This fuels misconceptions, undermines public trust and creates difficulties for central government in setting and implementing standards for the use of data-driven technology, making it more likely that the technology may be used in unethical ways.

The CDEI was pleased to contribute to the recently published report from the Committee on Standards in Public Life’s AI review, which calls for greater openness in the use of algorithmic decision-making in the public sector. It is also right that the report calls for a consistent approach to formal assessment of the consequences of introducing algorithmic decision-making and independent mechanisms of accountability. Developments elsewhere, such as work being done in Canada, show how this may be done.

The CDEI’s new work programme commences on 1 April. It will be proposing a programme of work exploring transparency standards and impact assessment approaches for public sector algorithmic decision-making. This is a complex area. The centre would not recommend new obligations for public bodies lightly. We will work with a range of public bodies to explore possible solutions that will allow us to know where important decisions are being algorithmically supported in the public sector, and consistently and clearly assess the impact of those algorithms.

There is a lot of good work on these issues going on across government. It is important that we all work together to ensure that these efforts deliver the right solutions.

19:10
Lord Giddens (Lab)

My Lords, as we have six minutes, let me also congratulate the noble Lord, Lord Clement-Jones, on having introduced this debate so ably and say what an excellent and, if I might say so, affable chairman he was of the AI committee.

AI and machine learning are on the front line of our lives wherever we look. The Centre for Disease Control in Zhejiang province in China is deploying AI to analyse the genetic composition of the coronavirus. It has shortened a process that used to take many days to 30 minutes. Yet we—human beings—do not know exactly how that outcome was achieved. The same is true of AlphaGo Zero, which famously trained itself to beat the world champion at Go, with no direct human input whatever. That borders on what the noble Baroness, Lady Rock, said. Demis Hassabis, who created the system, said that AlphaGo Zero was so powerful because it was

“no longer constrained by the limits of human knowledge.”

That is a pretty awesome statement.

How, therefore, do we achieve accountability, as the Commons report on algorithms puts it, for systems whose reasoning is opaque to us but that are now massively entwined in our lives? This is a huge dilemma of our times, which goes a long way beyond correcting a few faulty or biased algorithms.

I welcome the Government’s document on AI and the public sector, which recognises the impact of deep learning and the huge issues it raises. California led the world into the digital revolution and looks to be doing the same with regulatory responses. One proposal is for the setting up of public data banks—data utilities—which would set standards for public data and, interestingly, integrate private data accumulated by the digital corporations with public data and create incentives for private companies to transfer private data to public uses. There is an interesting parallel experiment going on in Toronto, with Google’s direct involvement. How far are the Government tracking and seeking to learn from such innovations in different parts of the world? This is a global, ongoing revolution.

Will the Government pay active and detailed attention to the regulation of facial recognition technology and, again, look to what is happening elsewhere? The EU, for example—with which I believe we used to have some connection—is looking with some urgency at ways of imposing clear limits on such technology to protect the privacy of citizens. There is a variety of cases about this where the Information Commissioner, Elizabeth Denham, has expressed deep concern.

On a more parochial level, noble Lords will probably know about the furore around the use of facial recognition at the King’s Cross development. The cameras installed by the developer at the site incorporated facial recognition technology. Although limited in nature, it had apparently been in use for some while.

The surveillance camera code of practice states:

“There must be as much transparency in the use of a surveillance camera system as possible”.


That is not the world’s most earth-shattering statement, but it is important. The code continues by saying that clear justification must be offered. What procedures are in place across the country for that? I suspect that they are pretty minimal, but this is an awesome new technology. If you look across the world, you can see that authoritarian states have an enormous amount of day-to-day data on everybody. We do not want that situation reproduced here.

The new Centre for Data Ethics and Innovation appears to have a pivotal role in the Government’s thinking. However, there seems to be rather little detail about it so far. What is the timetable? How long will the consultation period last? Will it have regulatory powers? That is pretty important. After all, the digital world moves at a massively fast pace. How will we keep up?

Quite a range of bodies are now concerned with the impact of the digital revolution. I congratulate the Government on that, because it is an achievement. The Turing Institute seems well out in front in terms of coherence and international reputation. What is the Minister’s view of its achievements so far and how do the Government see it meshing with this diversity of other bodies that—quite rightly—have been established?

19:16
Lord Addington (LD)

My Lords, I thank my noble friend for bringing this subject to our attention. The noble Lord, Lord Giddens, went for the big picture; I will, rather unashamedly, go back to a very small part of it.

Bias in an algorithm is quite clearly there because it is supposed to be there, from what I can make out. When I first thought about the debate, I suddenly thought of a bit of work I did about three years ago with a group called AchieveAbility. It was about recruitment of people in the neurodiverse categories—that is, those with dyslexia, dyspraxia, autism and other conditions of that nature. These people had problems with recruitment. We went through things and discovered that they were having the most problems with the big recruitment processes and the big employers, because those employers used psychometric tests and computerised screening, which these people did not fit. The fact is that they process information differently; for example, they might not want to do something the moment it comes round. This was especially true of junior-level employment. When asked, “Can you do everything at the drop of a hat at a low level?”, these people, if they are being truthful, might say, “No”, or, “I’ll do it badly or slowly.”

The minute you put that down, you are excluded. There may be somewhere smaller where they could explain it. For instance, when asked, “Can you take notes in a meeting?”, they may say, “Not really, because I use a voice-operated computer and if I talk after you talk, it’s going to get a bit confusing.” But somebody else may say, “Oh no, I’m quite happy doing the tea.” In that case, how often will they have to take notes? Probably never. That was the subtext. The minute you dump this series of things in the way of what the person can do, you exclude them. An algorithm—this sort of artificial learning—does not have that input and will potentially compound this problem.
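
To illustrate the mechanism the noble Lord describes, here is a minimal sketch in Python. The questions, rules and names are hypothetical, invented for illustration rather than drawn from any real recruitment system; the point is that a hard “knockout” filter excludes by default unless a reasonable-adjustment route is deliberately built in:

```python
# A hypothetical sketch of a hard "knockout" screening filter, and of how a
# reasonable-adjustment route can send a candidate to human review instead of
# silently rejecting them. Questions, rules and names are invented for
# illustration, not taken from any real system.

def screen(candidate, knockout_questions, human_review_queue):
    """Return True if the candidate passes automated screening outright."""
    for question in knockout_questions:
        if not candidate["answers"].get(question, False):
            if candidate.get("adjustment_requested"):
                # Reasonable-adjustment path: a human can weigh context such as
                # "I use a voice-operated computer", which the filter cannot.
                human_review_queue.append(candidate)
            # Either way, a single truthful "no" stops the automatic pass.
            return False
    return True

questions = ["can_take_notes_in_meetings", "can_switch_tasks_at_short_notice"]
review_queue = []

applicants = [
    {"name": "A", "answers": {q: True for q in questions}},
    {"name": "B",
     "answers": {"can_take_notes_in_meetings": False,
                 "can_switch_tasks_at_short_notice": True},
     "adjustment_requested": True},
]

passed = [a["name"] for a in applicants if screen(a, questions, review_queue)]
print("auto-passed:", passed)                                      # ['A']
print("sent to human review:", [a["name"] for a in review_queue])  # ['B']
```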

This issue undoubtedly comes under the heading of “reasonable adjustment”, but if people do not know that they have to change the process, they will not do it. People do not know because they do not understand the problem and, probably, do not understand the law. Anybody who has had any form of disability interaction will have, over time, come across this many times. People do it not through wilful acts of discrimination but through ignorance. If you are to use recruitment and selection processes, you have to look at this and build it in. You have to check. What is the Government’s process for so doing? It is a new field and I understand that it is running very fast, but tonight, we are effectively saying, “Put the brakes on. Think about how you use it correctly to achieve the things we have decided we want.”

There is positive stuff here. I am sure that the systems will be clever enough to build in this—or something that addresses this—in future, but not if you do not decide that you have to do it. Since algorithms reinforce themselves, as I understand it, it is quite possible that you will get a barrage of good practice in recruitment that gives you nice answers but does not take this issue into account. You will suddenly have people saying, “Well, we don’t need you for this position, then.” That is 20% of the population you can ignore, or 20% who will have to go round the sides. We really should be looking at this. As we are looking at the public sector here, surely the Government, in their recruitment practices at least, should have something in place to deal with this issue.

I should declare my interests. I am dyslexic. I am the president of the British Dyslexia Association and chairman of a technology company that provides assistive technology, so I have interests here but I also have some knowledge. If you are going to do this and get the best out of it, you do not let it run free. You intervene and you look at things. The noble Lord, Lord Deben, pointed out another area where intervention is needed to stop something happening that we do not want to happen. Surely we can hear about the processes in place that will mean that we do not allow the technology simply to go off and create its own logic through not interfering with it. We have to put the brakes on and create some form of direction on this issue. If we do not, we will probably undo the good work we have done in other fields.

19:21
The Lord Bishop of Oxford

My Lords, I declare an interest as a board member of the CDEI and a member of the Ada Lovelace Institute’s new Rethinking Data project. I am also a graduate of the AI Select Committee. I am grateful to the noble Lord, Lord Clement-Jones, for this important debate.

Almost all those involved in this sector are aware that there is an urgent need for creative regulation that realises the benefits of artificial intelligence while minimising the risks of harm. I was recently struck by a new book by Brad Smith, the president of Microsoft, entitled Tools and Weapons—that says it all in one phrase. His final sentence is a plea for exactly this kind of creative regulation. He writes:

“Technology innovation is not going to slow down. The work to manage it needs to speed up.”


Noble Lords are right to draw attention to the dangers of unregulated and untested algorithms in public sector decision-making. As we have heard, information on how and where algorithms are used in the public sector is relatively scant. We know that their use is being encouraged by government and that such use is increasing. Some practice is exemplary, while some sectors have the feel of the wild west about them: entrepreneurial, unregulated and unaccountable.

The CDEI is the Government’s own advisory body on AI and ethics, and is committed to addressing and advising on these questions. A significant first task has been to develop an approach founded on clear, high-level ethical principles to which we can all subscribe. The Select Committee called for this principle-centred approach in our call for an AI code, and at the time we suggested five clear principles. The Committee on Standards in Public Life has now affirmed the need for this high-level ethical work and has called for greater clarity on these core principles. I support this call. Only a principled approach can ensure consistency across a broad and diverse range of applications. The debate about those principles takes us to the heart of what it means to be human and of human flourishing in the machine age. But which principles should undergird our work?

Last May the UK Government signed up to the OECD principles on artificial intelligence, along with all other member countries. The CDEI has informally adopted these principles in our own work. They are very powerful and, I believe, need to become our reference point in every piece of work. They are: AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being; AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity; AI should be transparent so that people understand AI-based outcomes and can challenge them; AI systems must function in a robust, secure and safe way; and organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning.

In our recent recommendations to the Government on online targeting, the CDEI used the OECD principles as a lens to identify the nature and scale of the ethical problems with how AI is used to shape people’s online experiences. The same principles will flow through our second major report on bias in algorithmic decision-making, as the noble Baroness, Lady Rock, described.

Different parts of the public sector have codes of ethics distinctive to them. Developing patterns of regulation for different sectors will demand the integration of these five central principles with existing ethical codes and statements in, for example, policing, social work or recruitment.

The application of algorithms in the public sector is too wide a set of issues for a single regulator or to be left unregulated. We need core values to be translated into effective regulation, standards and codes of practice. I join others in urging the Government to work with the CDEI and others to clarify and deploy the crucial principles against which the public-centred use of AI is to be assessed, and to expand the efforts to hold public bodies and the Government themselves to account.

19:26
Lord Taylor of Warwick (Non-Afl)

My Lords, I also thank the noble Lord, Lord Clement-Jones, for securing this timely and important debate. It is over only the last 20 years that we have seen the meteoric growth of artificial intelligence. When I was discussing this with a friend of mine, his response was: “What, only 20 years? I’ve got socks older than that.” That is probably too much information—I accept that—but there is no doubt that the use of this kind of AI-driven data is still very new.

The use of such technologies was still the stuff of science fiction when I was first elected as a district councillor in the West Midlands. When I was chancellor of Bournemouth University, the impact of data analytics was very apparent to me. It was my privilege in 1996 to present the Bill that established the use of the UK’s first ever DNA database. As vice-president of the British film board for 10 years, I saw the way in which AI simply transformed what we all see on our computer and cinema screens.

I was recently honoured to chair the Westminster Media Forum conference looking at online data regulation. A major theme of the conference was the need to balance—it is a difficult balance—the opportunities provided by these new technologies and the risks of harming the very people this is supposed to help.

The next decade will be like a “Strictly Come Dancing” waltz between democracy and technocracy. There has to be a partnership between government leaders and the tech company executives, with ethics at the centre. As the noble Lord, Lord Clement-Jones, said, one in three councils uses this AI-driven data to make welfare decisions, and at least a quarter of police authorities now use it to make predictions and risk assessments.

There are examples of good practice. I was born and raised in a part of the world universally regarded as paradise. It is called Birmingham—just off the M6 motorway by the gasworks.

Noble Lords

Hear, hear!

Lord Taylor of Warwick

I see there is a consensus there, and I am grateful.

I am pleased that all seven local authorities in the West Midlands Combined Authority have appointed a digital champion and co-ordinator, but in other areas evidence is emerging that some of the systems used by councils are unreliable. This is very serious, because these procedures are used to decide benefit claims, prevent child abuse and even allocate school places.

Concerns have been raised by campaign groups such as Big Brother Watch about privacy and data security, but I am most worried about the Law Society’s concerns. It has highlighted the problems caused by biased data-based profiling of whole inner-city communities when trying to predict reoffending rates and anti-social behaviour. This can cause bias against black and ethnic minority communities. The potential for unconscious bias has to be taken very seriously.

As far as the National Health Service is concerned, accurate data analysis is clearly a valuable tool in serving the needs of patients, but according to a Health Foundation report of only last year, we are not investing in enough NHS data analysts. That surely is counterproductive.

I would like the Minister to answer some questions. Who exactly is responsible for making sure that standards are set and regulated for AI data use in local authorities and the public sector? Will it be Ofcom, as the new internet regulator, the Biometrics Commissioner or the Information Commissioner’s Office? Who will take responsibility? What protection is there in particular to safeguard the data of children and other groups, such as black and ethnic minorities? What are the Government planning to do about facial recognition systems, which are basically inaccurate? That is really quite frightening when you think about it.

AI and data technology are advancing so fast that the Government are essentially reactive, not proactive. Let us face it: Parliament still uses procedures set down in the 18th century. It took the Government three and a half years to pass the Brexit Bill, whereas it can take less than three and a half seconds for somebody to give consent, by the click of a mouse, to their personal data being stored and shared on the world wide web.

I do not think we should be in awe of AI, because ai is also the name of a small three-toed sloth that inhabits the forests of South America. The ai eats tree leaves and makes a high-pitched cry when disturbed.

Seriously, it is vital that there is co-ordination between national government, local authorities, academic research, industry and the media. At the heart of government data policy must be ethics. Regulation must not stifle innovation, but support it. We are at the start of an exciting new decade of 2020 vision, where democracy and technocracy must be in partnership. You cannot shake hands with a clenched fist.

19:31
Lord Browne of Ladyton (Lab)

My Lords, it is a pleasure to follow the noble Lord. At the heart of his speech he made a point that I violently agree with: the pace of science and technology is utterly outstripping the ability to develop public policy to engage with it. We are constantly catching up. This is not a specific point for this debate, but it is a general conclusion that I have come to. We need to reform the way in which we make public policy to allow the flexibility, within the boxes of what is permitted, for advances to be made, but to remain within a regulated framework. But perhaps that is a more general debate for another day.

I am not a graduate of the Artificial Intelligence Select Committee. I wish I had been a member of it. When its very significant report, widely recognised as excellent, was debated in your Lordships’ House, I put my name down to speak. I found myself in a very small minority of people who had not been members of the committee, but I did it out of interest rather than knowledge. It was an extraordinary experience. I learned an immense amount in a very short time in preparing a speech that I thought would keep my end up among all the people who had spent all this time involved in the subject. I did the same when I saw that the noble Lord, Lord Clement-Jones, had secured this debate, because I knew I was guaranteed to learn something. I did, and I thank him for his consistent tutoring of me through my following his contributions in your Lordships’ House. I am extremely grateful to him that he secured this debate, as the House should be.

I honestly was stunned to see the extensive use of artificial intelligence technology in the public services. There is no point in my trying to compete with the list of examples the noble Lord gave in opening the debate so well. It is being used to automate decision processes and to make recommendations and predictions in support of human decisions—or, more likely in many cases, human decisions are required in support of its decisions. A remarkable number of these decisions rely on potentially controversial data usage.

That leads me to my first question for the Minister. To what extent are the Government—who are responsible for all of this public service in accountability terms—aware of the extent to which potentially controversial value judgments are being made by machines? More importantly, to what degree are they certain that there is human oversight of these decisions? Another element of this is transparency, which I will return to in a moment, but in the actual decision-making process, we should not allow automated value judgments where there is no human oversight. We should insist that there is a minimum understanding on the part of the humans of what has prompted that value judgment from the data.

I constantly read examples of decisions being made by artificial intelligence machine learning where the professionals who are following them are unable to explain them to the people whose lives are being affected by them. When they are asked the second question, “Why?”, they are unable to give an explanation because the machine can see something in the data which they cannot, and they are at a loss to understand what it is. In a medical situation, there are lots of black holes in the decisions that are made, including in the use of drugs. Perhaps we should rely on the outcomes rather than always understanding. We probably would not give people drugs if we knew exactly how they all worked.

So I am not saying that all these decisions are bad, but there should be an overarching rule about these controversial issues. It is the Government’s duty at least to know how many of these decisions are being made. I want to hear an assurance that the Government are aware of where this is happening and are happy about the balanced judgments that are being made, because they will have to be made.

I push unashamedly for increased openness, transparency and accountability on algorithmic decision-making. That is the essence of the speech that opened this debate, and I agree 100% with all noble Lords who made speeches of that form. I draw on those speeches and ask the Government to ensure that where algorithms are used, this openness and transparency are required and not just permitted, because, unless it is required, people will not know why decisions about them have been made. Most of those people have no idea how to ask for the openness that they should expect.

19:38
Lord Stunell (LD)

My Lords, it is a pleasure to contribute to this debate. Unlike many noble Lords who have spoken, I am not a member of the Select Committee. However, I am a member of the Committee on Standards in Public Life. On Monday, it published its report, Artificial Intelligence and Public Standards. The committee is independent of government. I commend the report to the noble Lord, Lord Browne; he would find many of the questions he posed formulated in it, with recommendations on what should be done next.

The implications of algorithmic decision-making in the public sector for public standards, which is what the Committee has oversight of, are quite challenging. We found that there were clearly problems in the use of AI in delivering public services and in maintaining the Nolan principles of openness, accountability and objectivity. The committee, the Law Society and the Bureau of Investigative Journalism concluded that it is difficult to find out the extent of AI use in the public sector. There is a key role for the Government—I hope the Minister is picking this point up—to facilitate greater transparency in the use of algorithmic decision-making in the public sector.

The problem outlined by the noble Lord, Lord Browne, and others is what happens when the computer says no. There is a strong temptation for the person who is manipulating the computer to say, “The computer made me do it.” So, how do decision-making and accountability survive when artificial intelligence is delivering the outcome? The report of the Committee on Standards in Public Life makes it clear that public officials must retain responsibility for any final decisions and senior leadership must be prepared to be held accountable for algorithmic systems. It should never be acceptable to say, “The computer says no and that is it.” There must always be accountability and, if necessary, an appeals system.

In taking evidence, the committee also discovered that some commercially developed AI systems cannot give explanations for their decisions; they are black box systems. However, we also found that you can make significant progress in making things explainable through AI systems if the public sector which is purchasing those systems from private providers uses its market power to require that.

Several previous speakers have mentioned the problems of data bias, which is a serious concern. Certainly, our committee saw a number of worrying illustrations of that. It is worth understanding that artificial intelligence develops by looking at the data it is presented with. It learns to beat everyone in the world at Go by examining every game that has ever been played and working out what the winning combinations are.

The noble Lord, Lord Taylor, made an important point about facial recognition systems. They are very much better at recognising white faces correctly, rather than generic black faces—they all look the same to them—because the system is simply storing the information it has been given and using it to apply to the future. The example which came to the attention of the committee was job applications. If you give 100 job applications to an AI system and say, “Can you choose suitable ones for us to draw up an interview list?”, it will take account of who you previously appointed. It will work out that you normally appoint men and therefore the shortlist, or the long list, that the AI system delivers will mostly consist of men because it recognises that if it puts women forward, they are not likely to be successful. So, you have to have not only an absence of bias but a clear understanding of what your data will do to the system, and that means you have to have knowledge and accountability. That pertains to the point made by my noble friend Lord Addington about people with vulnerabilities—people who are, let us say, out of the normal but still highly employable, but do not happen to fit the match you have.
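
The noble Lord’s shortlisting example can be made concrete with a toy sketch in Python, using entirely synthetic data. The scoring rule is a deliberate simplification of “learning from who you previously appointed”, not any real recruitment system:

```python
# A toy sketch, on synthetic data, of the shortlisting mechanism described
# above: a model that scores applicants by how much past appointees resembled
# them reproduces the historical gender skew, even though no biased rule is
# ever written down anywhere.

# Hypothetical history: 90 of the last 100 appointees were men.
past_appointees = ["M"] * 90 + ["F"] * 10

def learned_score(gender):
    """Naive 'learning': an applicant's score is simply the proportion of
    past appointees who shared their gender."""
    return past_appointees.count(gender) / len(past_appointees)

# A perfectly balanced applicant pool...
applicants = [("M" if i % 2 == 0 else "F", f"applicant_{i}") for i in range(100)]

# ...ranked by the learned score: men dominate the shortlist.
shortlist = sorted(applicants, key=lambda a: learned_score(a[0]), reverse=True)[:20]
print(sum(1 for gender, _ in shortlist if gender == "M"), "of 20 shortlisted are men")
# -> 20 of 20 shortlisted are men
```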

So, one of our key recommendations is new guidance on how the Equality Act will apply for algorithmic systems. I am pleased to say that the Equality and Human Rights Commission has offered direct support for our committee’s recommendation. I hope to hear from the Minister that that guidance is in her in tray for completion.

The question was asked: how will anyone regulate this? Our committee’s solution to that problem is to impose that responsibility on all the current regulatory bodies. We did not think that it would be very functional to set up a separate, independent AI regulator which tried to overarch the other regulators. The key is in sensitising, informing and equipping the existing regulators in the sector to deliver. We say there is plenty of scope for some oversight of the whole process, and we very much support the view that the Centre for Data Ethics and Innovation should be that body. There is plenty of scope for more debate, but I hope the Minister will grab hold of the recommendations we have made and push forward with implementing them.

19:45
Lord St John of Bletso (CB)

My Lords, I too thank the noble Lord, Lord Clement-Jones, for introducing this topical and very important debate, and I am delighted that we have been given six minutes rather than the previously allotted three.

As the Science and Technology Committee in the other place reported in Algorithms in Decision-Making, algorithms have been used for many years to aid decision-making, but the recent huge growth of big data and machine learning has substantially increased the use of algorithmic decision-making in a number of sectors: not just the public sector but finance, the legal system, the criminal justice system, the education system and healthcare. I shall not give examples because of the lack of time.

As every speaker has mentioned, the use of these technologies has proven controversial on grounds of bias, largely because of the algorithm developers’ selection of datasets. The question and challenge is how to recognise bias and neutralise it. In deciding upon the relevance of algorithmic output to a decision by a public sector body, the decision-maker should have the discretion to assess unthought-of relevant factors and whether the decision is one for which the algorithm was designed. Clearly there is a need for a defined code of standards for public sector algorithmic decision-making. In this regard, I refer to the recommendations of NESTA, which was mentioned by the noble Lord, Lord Clement-Jones. It recommended that every algorithm used by a public sector organisation should be accompanied by a description of its function, objectives and intended impact. If we are to ask public sector staff to use algorithms responsibly to complement or replace some aspects of their decision-making, it is vital that they have a clear understanding of what they are intended to do and in what context they might be applied.
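
A minimal sketch of what such an accompanying description might look like as a structured record; the field names and the worked entry are hypothetical assumptions for illustration, not NESTA’s specification or any council’s real register:

```python
# A hypothetical sketch of the per-algorithm description NESTA recommends.
# Field names and the example entry are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AlgorithmDescription:
    name: str
    operating_body: str
    function: str          # what the algorithm does
    objectives: str        # why it is being used
    intended_impact: str   # the effect it is expected to have on citizens
    training_data: str     # provenance of the data used to build it
    human_oversight: str   # who reviews or can override its outputs
    appeal_route: str      # how an affected citizen can challenge a decision

entry = AlgorithmDescription(
    name="Housing needs triage (hypothetical)",
    operating_body="Example Borough Council",
    function="Ranks housing applications by assessed need",
    objectives="Prioritise limited housing stock consistently",
    intended_impact="Faster, more consistent allocation decisions",
    training_data="Five years of past allocation records",
    human_oversight="A housing officer confirms every ranking",
    appeal_route="Written review by a senior officer within 28 days",
)
print(entry.function)
```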

Given the rising use of algorithms by the public sector, only a small number can reasonably be audited. In this regard, there is a recommendation that every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions. As almost all noble Lords have mentioned, there is a need for more transparency about what data was used to train an algorithm, identifying whether there is discrimination on the grounds of a person’s ethnicity, religion or other factors, a point most poignantly made by the noble Lord, Lord Taylor. By way of example, if someone is denied council housing or a prisoner is denied probation, they need to know whether an algorithm was involved in that decision. If it is proven that an individual was negatively impacted by a mistaken decision made by an algorithm, NESTA has recommended that an insurance scheme should be established by public sector bodies to ensure that citizens can receive appropriate compensation.
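
The sandbox idea lends itself to a simple illustration: a harness that re-runs the same decision function on inputs that differ only in one protected attribute and flags any divergence. Everything below, including the stand-in decision rule and its postcode proxy, is a hypothetical sketch rather than any deployed system:

```python
# A minimal sketch of the "sandbox copy for auditors" idea: run the decision
# function over variants of each record that differ only in one attribute and
# report any divergence. The decision rule is a hypothetical stand-in in which
# postcode acts as a proxy, exactly the indirect bias an auditor wants to surface.

def decide(record):
    return "approve" if record["postcode"] not in {"X1", "X2"} else "refer"

def audit(decision_fn, records, attribute, values):
    """Flag records whose outcome changes when only `attribute` is varied."""
    flagged = []
    for record in records:
        outcomes = {v: decision_fn({**record, attribute: v}) for v in values}
        if len(set(outcomes.values())) > 1:
            flagged.append((record, outcomes))
    return flagged

cases = [{"id": 1, "postcode": "X1", "income": 18000},
         {"id": 2, "postcode": "Y9", "income": 18000}]
for record, outcomes in audit(decide, cases, "postcode", ["X1", "Y9"]):
    print(f"case {record['id']}: outcome depends on postcode -> {outcomes}")
```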

I shall keep it brief. In conclusion, I do not want to give the impression that I am opposed to the use of algorithms in the decision-making processes of the public sector. The report on AI by our Select Committee, which was so ably chaired by the noble Lord, Lord Clement-Jones—I was lucky enough to be a member—highlighted the huge benefits that artificial intelligence can provide to the public and private sectors. Can the Minister elaborate on the Government’s adoption strategy? With the vast majority of investments in AI coming from the United States as well as from Japan, I believe the UK should focus its efforts to lead the way in developing ethical and responsible AI.

19:50
Lord Holmes of Richmond (Con)

My Lords, I am glad of the opportunity to take part in this debate. I declare my interests as set out in the register and congratulate my friend, the noble Lord, Lord Clement-Jones, on securing the debate. The only difficulty in speaking at this stage is that we are rightly and rapidly running out of superlatives for him. I shall merely describe him as the lugubrious, fully committed, credible and convivial noble Lord, Lord Clement-Jones.

AI has such potential and it is absolutely right that it is held to a higher standard. In this country—somewhat oddly, I believe—we currently allow thousands of human driver-related deaths on our roads. It is right that any autonomous vehicle is held to a kill rate of zero. But what does this mean in the public sector, in areas such as health, welfare and defence? As the noble Lord, Lord Clement-Jones, set out, over a third of our local authorities are already deploying AI. This is not something for the future. It is absolutely for the now. None of us can afford to be bystanders, no matter how innocent. Everybody has a stake, and everybody needs to have a say.

I believe the technology has such potential for the good, not least for the public good—but it is a potential, not an inevitability. This is why I was delighted to see the report by the Committee on Standards in Public Life published only two days ago, to which the noble Lord, Lord Stunell, referred. I support everything set out in that report, not least its reference to the three critical Nolan principles. I restrict my comments to what the report said about bias and discrimination. Echoing the words of the noble Lord, Lord Stunell, I agree that there is an important role for the Equality and Human Rights Commission, alongside the Alan Turing Institute and the CDEI, in getting to grips with how public bodies need to approach algorithmic intelligence.

When it comes to fairness, what do we mean—republican, democratic, libertarian or otherwise, equality of opportunity, equality of outcomes? On the technical conception of fairness there are at least 21 different definitions which computer scientists have come up with, as well as mathematical concepts within this world. What about individual, group or utility fairness and their trade-offs? If we end up with a merely utilitarian conclusion, that will be so desperately disappointing and so dangerous. I wish I could channel my inner noble Baroness, Lady O’Neill of Bengarve, who speaks far more eloquently on this than me.
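
The trade-offs the noble Lord mentions can be made concrete with a small worked example in Python, using synthetic numbers: the same set of decisions satisfies one common fairness definition while failing another, which is why the choice between definitions is a policy judgment rather than a purely technical one:

```python
# A small worked example (synthetic numbers) of why fairness definitions
# conflict: identical selection rates can coexist with unequal treatment of
# the genuinely qualified.

# Per person: (favourable decision?, truly qualified?) for two groups.
group_a = [(1, 1), (1, 1), (1, 0), (0, 0)]   # 3 of 4 get a favourable decision
group_b = [(1, 1), (0, 1), (1, 0), (1, 0)]   # 3 of 4 get a favourable decision

def selection_rate(group):
    return sum(d for d, _ in group) / len(group)

def true_positive_rate(group):
    qualified = [(d, q) for d, q in group if q == 1]
    return sum(d for d, _ in qualified) / len(qualified)

# Demographic parity holds: both groups are selected at the same rate...
print(selection_rate(group_a), selection_rate(group_b))          # 0.75 0.75
# ...but equal opportunity fails: qualified members of group B fare worse.
print(true_positive_rate(group_a), true_positive_rate(group_b))  # 1.0 0.5
```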

The concepts and definitions are slippery but the consequences, as we have heard, are absolutely critical—in health, in education, in recruitment, in criminal profiling. We know how to make a success of this. It will come down to the recommendations of the committee’s report. It will come down to the recommendations—and not least the five principles—set out by the Artificial Intelligence Select Committee. Yes, mea culpa, I was a member of that committee, so excellently chaired, I say again, by the noble Lord, Lord Clement-Jones.

We need to consider the approach taken by the EHRC to reasonable adjustments for public bodies and the public sector equality duty; this is really about “CAGE”—"clear, applicable guidance: essential”. The prize is extraordinary. I shall give your Lordships just one example: in health, not even diagnostics but DNA is currently costing the NHS £1 billion. A simple algorithmic solution would mean £1 billion saved and therefore £1 billion that could go into care.

I am neither a bishop nor a boffin but I believe this: if we can harness all the positivity and all the potential of algorithms, of all the elements of the fourth industrial revolution, not only will we be able to make an incredible impact on the public good but I truly believe that we will be able to unite sceptics and evangelists behind ethical AI.

19:55
Lord Wallace of Saltaire (LD)

My Lords, I first came into this whole area when I was a Lords Minister in the Cabinet Office seven years ago, when we were struggling with the beginnings of Whitehall going through the digital transformation. I am struck by just how much things have moved since then, mostly in a highly desirable direction, but we are all concerned that we continue to move with the right safeguards and regulations.

I am not an expert, but I have learned a lot from my son-in-law, who is a financial quantitative analyst looking for when patterns do not hold as well as when they do, and from my other son-in-law, who is a systems biologist working on mutations in RNA and DNA, not that far away from the current Chinese virus. So I follow what the experts do without being an expert myself.

I am also struck by how very little the public are aware, and how little Parliament has been involved so far. The noble and learned Lord, Lord Keen, referred the other week to us returning to the “normal relationship” between Parliament and government, by which I think he meant Parliament shutting up more and allowing the Government to get on with things. I hope that is not what will happen in this area, because it is vital that the Government carry Parliament and then the public with them as they go forward.

A study for the Centre for Data Ethics and Innovation by MORI showed very little public awareness of what is going on in the sector. As the public learned, so they got more sceptical; I think the word used was “shocked”. We know that there are major benefits in the public sector from the greater use of artificial intelligence, if introduced with appropriate safeguards and regulations. This is evidence-based policy-making, which is what we are all interested in, so we need to make sure that we get it right and carry the public with the expansion of artificial intelligence.

There is a real danger of provoking a tabloid press campaign against the expansion of AI. We have seen what happened with the campaign against the MMR vaccine and how much credibility that got among the popular media, so transparency, regulation, education and explanation are important.

We need a clear legal framework. In 2012, one of the problems was that different Whitehall departments had different legal frameworks for how they used their data and how far they could share it with other departments. We need a flexible legal framework because, as we manage to do more things with mass data and mass data sharing, we shall need to adapt the framework—another reason why Parliament needs to be actively engaged.

We need ethics training for those in the public sector—and in the private sector interacting with the public sector—using artificial intelligence, so that they are aware of the limitations and potential biases and aware also that human interaction with the data and the algorithms is essential. One of the things that worries me at present, as an avid reader of Dominic Cummings’ blog, is the extent to which he believes that scientists and mathematicians should be allowed to get on with things without anthropologists, sociologists and others saying, “Hang on a minute. It’s not always as simple as you think. Humans often react in illogical ways, and that has to be built into your system.”

My noble friend Lord Stunell talked about public/private interaction. I think we understand that, while we are concentrating here on the proper public sector, one cannot disentangle private contractors and data managers from what goes on in the public sector, so we also need to extend regulation and education to the many bright private suppliers. I had a young man come to see me this afternoon who works for one of these small companies, and I was extremely impressed by how well he understood the issues.

We also need to engage civil society. Having spent a few weeks talking to university research centres, I am very impressed by how on top of this they are. There are some very impressive centres, which we also need to encourage. The richness of the developing expertise within the UK is something which the Government certainly need to encourage and lead.

My noble friend Lord Addington suggested that we may need to put the brakes on. We have to recognise that the pace of change is not going to slow, so we have to adapt and make sure that our regulatory framework adapts. I was pleased to listen to a talk by the director of the Centre for Data Ethics and Innovation hosted by the All-Party Parliamentary Group on Data Analytics last week. It is a very good innovation, but it needs to expand and to have a statutory framework. Is the Minister able to tell us what progress is being made in providing the CDEI with a statutory framework?

There are alternative approaches for the Government to take. One, the Dominic Cummings approach, would be to use speed and impatience in pushing innovation through and dismissing criticism. The second would be to go at all deliberate speed, with careful explanation, clear rules and maximum transparency, carrying Parliament and the public with it. The young man who came to see me this afternoon talked about having digital liberalism or digital authoritarianism—that is the choice.

20:01
Lord Griffiths of Burry Port (Lab)

My Lords, I am only too glad to add my word of thanks to the humble, ordinary, flesh-and-blood noble Lord, Lord Clement-Jones, for our debate this evening. So many points have been raised, many of them the object of concern of more than one contributor to the debate. I am reminded a little of what happened when we had the big bang in the 1980s: finance went global and clever young people knew how to construct products within the financial sector that their superiors and elders had no clue about. Something like that is happening now, which makes it even more important for us to be aware and ready to deal with it.

I take up the point raised by the noble Lord, Lord Browne of Ladyton, about legislation. He said that it had to be flexible; I would add “nimble”. We must have the general principles of what we want to do to regulate this area available to us, but be ready to act immediately—as and when circumstances require it—instead of taking cumbersome pieces of legislation through all stages in both Houses. The movement is much faster than that in the real world. I recognise what has been said about the exponential scale in the advance of all these methodologies and approaches. We heard ample mention of the Nolan principles; I am glad about that.

On the right of explanation, I picked up an example that is worth reminding ourselves of when we ask what it means to have an explanation of what is happening. It comes from Italy; perhaps other Members will be aware of it too. An algorithm was used to decide into which schools to send state schoolteachers. After some dubious decision-making by the algorithm, teachers had to fight through the courts to get some sort of transparency regarding the instructions that the algorithm had originally been given. Although the teachers wanted access to the source code of the algorithm—the building blocks, with all the instructions—the Italian Supreme Court ruled that appropriate transparency constituted only an explanation of its function and the underlying legal rules. In other words, it did not give the way in which the method was evolved or the algorithm formed; it was just descriptive rather than analytical. I believe that, if we want transparency, we have to make available the kind of nuts-and-bolts detail that lies behind the algorithms that are then the object of our concern.

On accountability, who calls the shots? The noble Baroness, Lady Rock, was one of those who mentioned that. I have been reading, because it is coming up, the Government’s online harms response and the report of the House of Commons Science and Technology Committee. I am really in double-Dutch land as I look at how the two interleave with each other, each saying things separately and yet together. The report that I think we will be looking at tomorrow recommends that we should continue to use the good offices of the ICO to cover the way in which the online harms process is taken forward. We have also heard that that may be the appropriate body to oversee all the things that we have been discussing. While the Information Commissioner’s Office is undoubtedly brilliant and experienced, is it really the only regulator that can handle this multiplicity of tasks? Is there a need now to look at adding something to recognise the speed at which these things are developing—to say nothing of appointing, as the report suggests, a Minister with responsibility for this area?

I am so glad to see the noble Lord, Lord Ashton, arrive in his new guise as Chief Whip, because, in a previous incarnation, we were eyeball to eyeball like this. He reminds me of course that it was on the Data Protection Bill, as it then was—an enormous, composite thing—that I cut my teeth, swimming against the tide and wondering whether I would drown. It was said then that the Centre for Data Ethics and Innovation should be put on a statutory footing. That needs to happen. Here we are, two years later, and it still has not happened; it is still an aspiration. We must move forward to a competent body that can look at the ethical dimensions of these developments. It must have teeth, it must make recommendations, and it must do so speedily. On that, I am simply repeating what others have said.

Let me finish with one word—it will go into Hansard; it will go against my reputation and I will be a finished man after saying it. When I turn my computer on to certain of the things that I do—for example, the Guardian quick crossword, which is part of my morning devotions—the advertising that comes up has presumably been put there by an algorithm. But it suggests that I want to buy women’s underwear. I promise noble Lords that I have no experience in that area at all, and I want to know, as a matter of transparency, what building blocks have gone into the algorithm that has told my computer to interest me in these rather recondite aspects of womenswear.

20:08
Baroness Barran Portrait The Parliamentary Under-Secretary of State, Department for Digital, Culture, Media and Sport (Baroness Barran) (Con)
- Hansard - - - Excerpts

My Lords, I am lost for words. I am really not sure how one follows that disclosure.

I echo other noble Lords in thanking the noble Lord, Lord Clement-Jones, for securing this important and interesting debate. I think the noble Lord, Lord Browne of Ladyton, and I are the only outcasts who were not on any of the committees—the noble Lord, Lord Griffiths, indicates that he was not either—so we are an elite club.

The noble Lord, Lord Clement-Jones, rightly highlighted the widespread and rapidly growing use of algorithms, which underlines the importance of this debate. As noble Lords are aware, the UK is a world leader in artificial intelligence: in attracting investment, in attracting talent and, crucially, in thinking through the practical and ethical challenges that the technology presents.

While driving forward innovation, we need to ensure that we maintain the public’s trust in how decisions are made about them and how their data is used, thus ensuring fairness, avoiding bias and offering transparency and accountability—which are all aspirations that noble Lords have expressed.

We want to maximise the potential that artificial intelligence offers, while ensuring that any negative implications of its use are mitigated. The Government have introduced a number of measures and interventions to ensure that we maintain public trust in the use of these technologies in the public sector, something underlined by my noble friend Lady Rock. These include setting up the Centre for Data Ethics and Innovation; developing a data ethics framework and a guide to using artificial intelligence; and creating a draft set of guidelines for AI procurement. To be successful, we need practice to become standardised, consistent and accountable. If we succeed, public services have the potential, as my noble friend Lord Holmes pointed out, to become much fairer than they have been historically. I think it was the noble Lord, Lord Wallace, who said—forgive me if I have got this wrong—that we have to realise that potential.

Several noble Lords talked about the report from the Committee on Standards in Public Life, Artificial Intelligence and Public Standards. The Government have noted the recommendations on greater transparency by public bodies in the use of algorithms; new guidance to ensure that algorithmic decision-making abides by equalities law, which obviously applies in just the same way as in any other context; the creation of a single, coherent regulatory framework to govern this area; the formation of a statutory body to advise existing regulators on relevant issues; and proper routes of redress for citizens who feel that decisions are unfair. The Government will respond to these recommendations in due course, and that may offer another opportunity to reflect on these issues.

We also welcome the committee’s recommendation relating to the Centre for Data Ethics and Innovation. We were very pleased to see the committee’s endorsement of the centre’s important role in identifying gaps in the regulatory landscape. We are discussing with the centre the statutory powers it thinks it will need—a point made by the noble Lord, Lord Giddens—to deliver against those terms of reference. The right reverend Prelate the Bishop of Oxford expressed the need for a set of principles and an ethical basis for all our work. Noble Lords will be aware of the development of the data ethics framework, which includes a number of those principles. We are currently working on refreshing that framework to make it as up to date as possible for public servants who work with data.

The Committee on Standards in Public Life report, and others, have raised the issue of multiple frameworks. The Government are currently looking into developing a landing page on GOV.UK to enable users to assess the different frameworks and direct them to the one that is most appropriate and relevant to their needs. A number of noble Lords raised the importance of any framework staying agile and nimble. That is absolutely right. There is a lot more work to do on this, including looking at defining high-stakes algorithms and thinking through the mechanisms to ensure that decisions are made in an objective way. In that agility, I think all noble Lords would agree that we want to stay anchored to those key ethical principles, including, of course, the Nolan principles.

One of the foundations of our approach is the work being done on having a clear ethical framework, but we also need sound ways of implementing in practice the principles expressed in the framework. Part of our work in trying to increase transparency and accountability in the use of algorithms and AI has been the collaboration between the Office for Artificial Intelligence and the World Economic Forum’s Centre for the Fourth Industrial Revolution to co-design guidelines for AI procurement to unlock AI adoption in the public sector.

We published the draft guidelines for consultation in September 2019. The Office for Artificial Intelligence is now collaborating with other government departments to test those findings and proposals and has launched a series of pilots of the guidelines, including with four or five major government departments. Following the pilot and consultation phase, we will update the guidelines and work to design what only government could call an “AI procurement in a box” toolkit, to provide other Governments and our public sector agencies with the tools they need to uphold the highest standards of procurement.

In an effort to bring coherence across central government departments, my honourable friend the Minister for Digital and Broadband and my right honourable friend the Minister for Universities, Science, Research and Innovation wrote a letter earlier this week to all Secretaries of State reminding them of and highlighting the work of the AI Council and the support it can give government departments.

The noble Lord, Lord Giddens, asked about the Alan Turing Institute. The Government value its work greatly, particularly some of the work being done around skills development, which is so critical in this field.

I think every noble Lord spoke about algorithmic bias. My noble friend Lord Taylor spoke about facial recognition and issues particularly among police forces. Other noble Lords referred to the work of DWP and child protection agencies. It is important that our work in trying to avoid bias—I think all noble Lords recognise that bias exists potentially within algorithms but also in more traditional decision-making—is guided by independent and expert advice. Again, we are grateful to the Centre for Data Ethics and Innovation, which as part of its current work programme is conducting a review into the potential for bias, looking particularly at policing, financial services, recruitment—this was referred to by the noble Lord, Lord Addington; I note how lucky it is that my noble friends Lady Rock and Lady Chisholm and I managed to beat the recruitment algorithm to get here—and local government. These sectors were all selected because they involve significant decisions being made about individuals. The report will be published in March and we very much look forward to its recommendations, which will inform our work in future.

I fear that I will have to write on some of the points raised, but I will do my best to cover as many as I can in the remaining time. The noble Lord, Lord St John, asked about placing a duty on public bodies to declare where they are using algorithms. We hope that the centre will look at all these things in the transparency aspect of its work. We are also currently reviewing the future work plan with the centre, and a number of the issues around accountability will obviously be discussed as part of that.

In closing, I will go back to two points. The first is the potential of artificial intelligence, which PricewaterhouseCoopers has estimated could contribute almost $16 trillion to the global economy; the UK is one of the top three countries in this field, so that would be a huge boost to our economy. However, I also go back to what the right reverend Prelate the Bishop of Oxford said about what it means to be human. We can harness that potential in a way that enhances, rather than erodes, our humanity.

House adjourned at 8.20 pm.