Wednesday 12th February 2020


Lords Chamber
Baroness Rock (Con)

My Lords, I congratulate the noble Lord, Lord Clement-Jones, on securing this important debate. It is a topic that I know is close to his heart. I had the privilege of serving on the Select Committee on Artificial Intelligence which he so elegantly and eloquently chaired.

Algorithmic decision-making has enormous potential benefits in the public sector and it is therefore good that we are seeing growing efforts to make use of this technology. Indeed, only last month, research was published showing how AI may be useful in making screening for breast cancer more efficient. The health sector has many such examples, but algorithmic decision-making is showing potential in other sectors too.

However, the growing use of public sector algorithmic decision-making also brings challenges. When an algorithm is being used to support a decision, it can be unclear who is accountable for the outcome. Is it the front-line decision-maker? Is it the administrator in charge of the introduction of the AI tool, or perhaps the private sector developer? We must make sure that the lines of accountability are always clear. With more complex algorithmic decision-making, it can be unclear why a decision has been made. Indeed, even the public body making the decision may be unable to interrogate the algorithm being used to support it. This threatens to undermine good administration, procedural justice and the right of individuals to redress and challenge. Finally, using past data to drive recommendations and decisions can lead to the replication, entrenchment and even the exacerbation of unfair bias in decision-making against particular groups.

What is at stake? Algorithmic decision-making is a general-purpose technology which can be used in almost every sector. The challenges it brings are diverse and the stakes involved can be very high indeed. At an individual level, algorithms may be used to make decisions about medical diagnosis and treatment, criminal justice, benefits entitlement or immigration. No less important, algorithmic decision-making in the public sector can make a difference to resource allocation and policy decisions, with widespread impacts across society.

I declare an interest as a board member of the Centre for Data Ethics and Innovation. We have spent the last year conducting an in-depth review into the specific issue of bias in algorithmic decision-making. We have looked at this issue in policing and in local government, working with civil society, central government, local authorities and police forces in England and Wales. We found that there is indeed the potential for bias to creep in where algorithmic decision-making is introduced, but we also found a great deal of willingness to identify and address these issues.

The assessment of consequences starts with the public bodies using algorithmic decision-making. They want to use new technology responsibly, but they need the tools and frameworks to do so. The centre developed specific guidance for police forces to help them trial data analytics in a way that considers the potential for bias—as well as other risks—from the outset. The centre is now working with individual forces and the Home Office to refine and trial this guidance, and will be making broader recommendations to the Government at the end of March.

However, self-assessment tools and a focus on algorithmic bias are only part of the answer. There is currently insufficient transparency and centralised knowledge about where high-stakes algorithmic decision-making is taking place across the public sector. This fuels misconceptions, undermines public trust and creates difficulties for central government in setting and implementing standards for the use of data-driven technology, making it more likely that the technology may be used in unethical ways.

The CDEI was pleased to contribute to the recently published report from the Committee on Standards in Public Life’s AI review, which calls for greater openness in the use of algorithmic decision-making in the public sector. The report is also right to call for a consistent approach to formal assessment of the consequences of introducing algorithmic decision-making, and for independent mechanisms of accountability. Developments elsewhere, such as work being done in Canada, show how this may be done.

The CDEI’s new work programme commences on 1 April. It will be proposing a programme of work exploring transparency standards and impact assessment approaches for public sector algorithmic decision-making. This is a complex area. The centre would not recommend new obligations for public bodies lightly. We will work with a range of public bodies to explore possible solutions that will allow us to know where important decisions are being algorithmically supported in the public sector, and consistently and clearly assess the impact of those algorithms.

There is a lot of good work on these issues going on across government. It is important that we all work together to ensure that these efforts deliver the right solutions.