Standards in Public Life Debate

Department: Cabinet Office


Lord Clement-Jones Excerpts
Thursday 9th September 2021


Lords Chamber
Lord Clement-Jones (LD)

My Lords, it is a huge pleasure to follow the noble Lord, Lord Puttnam. I commend his Digital Technology and the Resurrection of Trust report to all noble Lords who have not had the opportunity to read it. I thank the noble Lord, Lord Blunkett, for initiating this debate.

Like the noble Lord, Lord Puttnam, I will refer to a Select Committee report, going slightly off track in terms of today’s debate: the February 2020 Artificial Intelligence and Public Standards report by the Committee on Standards in Public Life, under the chairmanship of the noble Lord, Lord Evans of Weardale. It made a number of recommendations to strengthen the UK’s “ethical framework” around the deployment of AI in the public sector. Its clear message to the Government was that

“the UK’s regulatory and governance framework for AI in the public sector remains a work in progress and deficiencies are notable … on the issues of transparency and data bias in particular, there is an urgent need for … guidance and … regulation … Upholding public standards will also require action from public bodies using AI to deliver frontline services.”

It said that such action was needed to

“implement clear, risk-based governance for their use of AI.”

It recommended that a mandatory public AI “impact assessment” be established

“to evaluate the potential effects of AI on public standards”

right at the project-design stage.

The Government’s response, over a year later—in May this year—demonstrated some progress. They agreed that

“the number and variety of principles on AI may lead to confusion when AI solutions are implemented in the public sector”.

They said that they had published an “online resource”—the “data ethics and AI guidance landscape”—with a list of “data ethics-related resources” for use by public servants. They said that they had signed up to the OECD principles on AI and were committed to implementing these through their involvement as a

“founding member of the Global Partnership on AI”.

There is now an AI procurement guide for public bodies. The Government stated that

“the Equality and Human Rights Commission … will be developing guidance for public authorities, on how to ensure any artificial intelligence work complies with the public sector equality duty”.

In the wake of controversy over the use of algorithms in education, housing and immigration, we have now seen the publication of the Government’s new “Ethics, Transparency and Accountability Framework for Automated Decision-Making” for use in the public sector. In the meantime, Big Brother Watch’s Poverty Panopticon report has shown the widespread issues with algorithmic decision-making that are increasingly arising at local-government level. As decisions made by, or with the aid of, algorithms become increasingly prevalent in central and local government, the issues raised by the CSPL report and the Government’s response are rapidly becoming a mainstream aspect of adherence to the Nolan principles.

Recently, the Ada Lovelace Institute, the AI Now Institute and the Open Government Partnership have published their comprehensive report, Algorithmic Accountability for the Public Sector: Learning from the First Wave of Policy Implementation, which gives a yardstick by which to measure the Government’s progress. The position regarding the deployment of specific AI systems by government is still extremely unsatisfactory. The key areas where the Government are falling down are not the adoption and promulgation of principles and guidelines but the lack of risk-based impact assessment to ensure that appropriate safeguards and accountability mechanisms are designed, so that the need for prohibitions and moratoria on particular types of high-risk algorithmic system can be recognised and assessed before implementation. I note the lack of compliance mechanisms, such as regular technical and regulatory audit, regulatory inspection and independent oversight via the CDDO and/or the Cabinet Office, to ensure that the principles are adhered to. I also note the lack of transparency mechanisms, such as a public register of algorithms in operation, and the lack of systems for individual redress in the case of a biased or erroneous decision.

I recognise that the Government are on a journey here, but it is vital that the Nolan principles are upheld in the public sector’s use of AI and algorithms to make decisions. Where have the Government got to so far, and what is the current destination of their policy in this respect?