Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL] Debate

1st reading
Monday 9th September 2024


Lords Chamber
Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL] 2024-26
A Bill to regulate the use of automated and algorithmic tools in decision-making processes in the public sector, to require public authorities to complete an impact assessment of automated and algorithmic decision-making systems, to ensure the adoption of transparency standards for such systems, and for connected purposes.
Lord Clement-Jones (LD)

My Lords, I draw the attention of the House to my AI advisory interests on the register.

The Bill was introduced by Lord Clement-Jones, read a first time and ordered to be printed.

Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL]

2nd reading
Department: Department for Business and Trade

Moved by
Lord Clement-Jones

That the Bill be now read a second time.

Lord Clement-Jones (LD)

My Lords, I declare my AI interests as set out in the register. I thank Big Brother Watch, the Public Law Project and the Ada Lovelace Institute, each of which has, in its own way, provided the evidence that underpins my resolve to ensure that we regulate the adoption of algorithmic and AI tools in the public sector. These tools are increasingly being used across the sector to make and support many of the highest-impact decisions affecting individuals, families and communities, across healthcare, welfare, education, policing, immigration and many other sensitive areas of an individual’s life. I also thank the Public Bill Office, the Library and other members of staff for all their assistance in bringing this Bill forward and communicating its intent and contents, and I thank all noble Lords who have taken the trouble to come to take part in this debate this afternoon.

The speed and volume of decision-making that new technologies will deliver are unprecedented. They have the potential to offer significant benefits, including improved efficiency and cost-effectiveness in government operations, enhanced service delivery and resource allocation, better prediction and support for vulnerable people, and increased transparency in public engagement. However, the rapid adoption of AI in the public sector also presents significant risks and challenges: the potential for unfairness, discrimination and misuse through algorithmic bias; the need for human oversight; a lack of transparency and accountability in automated decision-making processes; and privacy and data protection concerns.

Incidents such as the 2020 A-level and GCSE grading fiasco have starkly illustrated the dangers of unchecked algorithmic systems in public administration. When an algorithm was used to estimate grades for exams cancelled because of Covid-19, students, particularly those from lower-income areas, unfairly missed out on university places. That led to widespread public outcry and a loss of trust in government use of technology.

Big Brother Watch’s investigations have revealed that councils across the UK are conducting mass profiling and citizen scoring of welfare and social care recipients. Its report, entitled Poverty Panopticon: The Hidden Algorithms Shaping Britain’s Welfare State, uncovered alarming statistics. Some 540,000 benefits applicants are secretly assigned fraud risk scores by councils’ algorithms before accessing housing benefit or council tax support. Personal data from 1.6 million people living in social housing is processed by commercial algorithms to predict rent non-payers. Over 250,000 people’s data is processed by secretive automated tools to predict the likelihood of abuse, homelessness or unemployment.

Big Brother Watch criticises the nature of these algorithms, stating that most are secretive, unevidenced, incredibly invasive and likely discriminatory. It argues that these tools are being used without residents’ knowledge, effectively creating tools of automated suspicion. The organisation rightly expressed deep concern that these risk-scoring algorithms could be disadvantaging and discriminating against Britain’s poor. It warns of potential violations of privacy and equality rights, drawing parallels to controversial systems like the Metropolitan Police’s gangs matrix database, which was found to be operating unlawfully. From a series of freedom of information requests last June, Big Brother Watch found that a flawed DWP algorithm wrongly flagged 200,000 housing benefit claimants for possible fraud and error, which meant that thousands of UK households every month had their housing benefit claims unnecessarily investigated.

In August 2020, the Home Office agreed to stop using an algorithm to help sort visa applications after it was discovered that the algorithm contained entrenched racism and bias, and following a challenge from the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove. The algorithm essentially created a three-tier system for immigration, with a speedy boarding lane for white people from the countries most favoured by the system. Privacy International has raised concerns about the Home Office’s use of a current tool called Identify and Prioritise Immigration Cases—IPIC—which uses personal data, including biometric and criminal records, to prioritise deportation cases, arguing that it lacks transparency and may encourage officials to accept recommended decisions without proper scrutiny.

Automated decision-making has been proven to lead to harms in privacy and equality contexts, such as in the Harm Assessment Risk Tool, which was used by Durham Police until 2021, and which predicted reoffending risks partly based on an individual’s postcode in order to inform charging decisions. All these cases illustrate how ADM can perpetuate discrimination. The Horizon saga illustrates how difficult it is to secure proper redress once the computer says no.

There is no doubt that our new Government are enthusiastic about the adoption of AI in the public sector. Both the DSIT Secretary of State and Feryal Clark, the AI Minister, are on the record about the adoption of AI in public services. They have ambitious plans to use AI and other technologies to transform public service delivery. Peter Kyle has said:

“We’re putting AI at the heart of the government’s agenda to boost growth and improve our public services”,

and

“bringing together digital, data and technology experts from across Government under one roof, my Department will drive forward the transformation of the state”.—[Official Report, Commons, 2/9/24; col. 89.]

Feryal Clark has emphasised the Administration’s desire to “completely transform digital Government” with DSIT. As the Government continue to adopt AI technologies, it is crucial to balance the potential benefits with the need for responsible and ethical implementation to ensure fairness, transparency and public trust.

The Ada Lovelace Institute warns of the unintended consequences of AI in the public sector, including the risk of entrenching existing practices instead of fostering innovation and systemic solutions. As it says, the safeguards around automated decision-making, which exist only in data protection law, are therefore more critical than ever in ensuring that people understand when a significant decision about them is being automated and why that decision is made, and that they have routes to challenge it or to ask for it to be decided by a human.

Our citizens need greater, not less, protection, but rather than accepting the need for these safeguards, we see the Government following in the footsteps of their predecessor by watering down such rights as there are under GDPR Article 22 not to be subject to automated decision-making. We will, of course, be discussing these aspects of the Data (Use and Access) Bill in Committee next week.

ADM safeguards are critical to public trust in AI, but progress has been glacial. Take the Algorithmic Transparency Recording Standard, which was created in 2022 and is intended to offer a consistent framework for public bodies to publish details of the algorithms used in making these decisions. Six records were published at launch, and only three more seem to have been published since then. The previous Government announced earlier this year that the implementation of the Algorithmic Transparency Recording Standard will be mandatory for departments. Minister Clark in the new Government has said,

“multiple records are expected to be published soon”,

but when will this be consistent across government departments? What teeth do the Central Digital and Data Office and the Responsible Technology Adoption Unit, now both within DSIT, have to ensure the adoption of the standard, especially in view of the planned watering down of the Article 22 GDPR safeguards? Where is the promised repository for ATRS records? What about the other public services in local government too?

The Public Law Project, which maintains a register called Tracking Automated Government, believes that in October last year there were more than 55 examples of public-sector ADM systems in use. Where is the transparency on those? The fact is that the Government’s Algorithmic Transparency Recording Standard, while a step in the right direction, remains voluntary and lacks comprehensive adoption, or indeed a compliance mechanism or opportunity for redress. The current regulatory landscape is clearly inadequate to address these challenges. Despite the existing guidance and framework, there is no legally enforceable obligation on public authorities to be transparent about their use of ADM and algorithmic systems, or to rigorously assess their impact.

To address these challenges, several measures are needed. We need to see the creation of and adherence to ethical guidelines and accountability mechanisms for AI implementation; a clear regulatory framework and standards for use in the public sector; increased transparency and explainability of the adoption and use of AI systems; investment in AI education; and workforce development for public sector employees. We also need to see the right of redress, with a strengthened right for the individuals to challenge automated decisions.

My Bill aims to establish a clear mandatory framework for the responsible use of algorithmic and automated decision-making systems in the public sector. It will help to prevent the embedding of bias and discrimination in administrative decision-making, protect individual rights and foster public trust in government use of new technologies.

I will not adumbrate all the elements of the Bill. In an era when AI and algorithmic systems are becoming increasingly central to government ambitions for greater productivity and public service delivery, this Bill, I hope noble Lords agree, is crucial to ensuring that the benefits of these technologies are realised while safeguarding democratic values and individual rights. By ensuring that ADM systems are used responsibly and ethically, the Bill facilitates their role in improving public service delivery, making government operations more efficient and responsive.

The Bill is not merely a response to past failures but a proactive measure to guide the future use of technology within government and empower our citizens in the face of these powerful new technologies. I hope that the House and the Government will agree that this is the way forward. I beg to move.

--- Later in debate ---
Lord Clement-Jones (LD)

My Lords, I thank the Minister for her response and all noble Lords who have taken part in this debate, which I thought was perfectly formed and very expert. I was interested in the fact that the noble Baroness, Lady Lane-Fox, has a role in the digital centre for government and in what she had to say about what might be desirable going forward, particularly in the areas of skills and procurement. The noble Baroness, Lady Freeman, said much the same, which indicates something to me.

By the way, I think the Minister has given new meaning to the word “reservations”. That was the most tactful speech I have heard for a long time. It is a dangerous confidence if the Government really think that the ATRS, combined with the watered-down ADM provisions in the GDPR, will be enough. They will reap the whirlwind if they are not careful, with public trust being eroded. We have seen what happened in the NHS: unless you are absolutely on the case, you can see 3.3 million people opt out of sharing their data. This is something live; it erupts without due warning.

The examples I gave show a pretty dangerous use of ADM systems. Big Brother Watch has gone into some detail on the particular models that I illustrated. If the Government think that the ATRS is adequate, alongside their watered-down GDPR provisions, then, as I said, they are heading for considerable problems.

As the noble Lord, Lord Knight, can see, if the Government have reservations about my limited Bill, they will have even more reservations about anything broader.

I do not want to tread on the toes of the noble Lord, Lord Holmes, who I am sure will come back with another Bill at some stage, but I am very sympathetic to the need for algorithmic impact assessment, particularly in the workplace, as advocated by the Institute for the Future of Work. We may be inflicting more amendments on the Minister when the time comes in the ADM Bill.

This Bill is, as the noble Baroness, Lady Lane-Fox, mentioned, based on the Canadian experience. It is based on a Canadian directive that is now well under way and is perfectly practical.

The warning of the noble Lord, Lord Tarassenko, about the use of large language models, with their unpredictability and inability to reproduce the same result, was an object lesson in the need for proper understanding and training within the Civil Service in the future, and for the development of open-source LLMs on the back of the existing large language models out there, to make sure that they are properly trained and tested as a sovereign capability.

It is clear that I am not going to get a great deal further. I am worried that we are going to see a continuation, in the phrase used by my noble friend Lady Hamwee, of the culture of deference: the machine is going to continue saying no and our citizens will continue to be unable to challenge decisions in an effective way. That will lead to further trouble.

I thank the noble Viscount, Lord Camrose, for his in-principle support. If the Bill is to have a Committee stage, I look forward to debating some of the definitions. In the meantime, I commend the Bill to the House.

Bill read a second time and committed to a Committee of the Whole House.

Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL]
Order of Commitment discharged
Monday 20th January 2025


Lords Chamber
Moved by
Lord Clement-Jones

That the order of commitment be discharged.

Lord Clement-Jones (LD)

My Lords, I understand that no amendments have been set down to this Bill and that no noble Lord has indicated a wish to move a manuscript amendment or to speak in Committee. Unless, therefore, any noble Lord objects, I beg to move that the order of commitment be discharged.

Motion agreed.

Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL]
3rd reading
Friday 7th February 2025


Lords Chamber
Moved by
Lord Clement-Jones

That the Bill do now pass.

Lord Clement-Jones (LD)

My Lords, the Bill is part of a wider debate that we also had on the Data (Use and Access) Bill, as was the case with the previous Private Member’s Bill today from the noble Baroness, Lady Owen. The Government have now published their vision for digital public services and their State of Digital Government Review. For the Government Digital Service, we are now told, a new chapter begins. All this is, apparently, designed to improve productivity and services in the public sector. But how citizen-centred will the new technology be? How transparent and accountable will it be? To improve algorithmic and automated decision-making in the public sector, there needs to be an increase in transparency, fairness and accountability, and the implementation of robust safeguards and human oversight mechanisms.

We obviously welcome the promise of an ICO code of conduct for automated decisions in the public and private sectors, as well as, of course, the algorithmic transparency recording standard. But the standard in itself lacks a number of elements: a human oversight requirement; impact assessments; a transparency register; and the prohibition of non-scrutinisable systems. There are considerable gaps in it: it does not cover local authorities, police forces and other public services. I simply predict that this will become a bigger issue as government starts to implement its plans for the adoption of AI in the public sector. The Government will find themselves well behind the curve of public opinion on this.

The Earl of Effingham (Con)

My Lords, I thank all noble Lords for their contributions on the Bill, particularly the noble Lord, Lord Clement-Jones, who brought it forward. In an era increasingly shaped by the decisions of automated systems, it is the responsibility of all those using algorithmic and automated decision-making systems to safeguard individuals from the potential harm caused by them. We understand the goals of the Bill: namely, to ensure trustworthy artificial intelligence that garners public confidence, fosters innovation and contributes to economic growth. But His Majesty’s Official Opposition also see certain aspects of the Bill that we believe risk its effectiveness.

As the noble Viscount, Lord Camrose, pointed out at Second Reading, we suggest that the Bill may be overly prescriptive. The definition of “algorithmic systems” in Clause 2(1) is broad, encompassing any process, even processes unrelated to digital or computational systems. While the exemptions in Clause 2(2) and (4) are noted, we believe that adopting our White Paper definitions to focus on autonomous and adaptive systems would provide clarity and align the scope with the Bill’s purpose.

The Bill may also benefit from an alternative approach to addressing the blistering pace of artificial intelligence development. Requiring ongoing assessments for every update under Clause 3(3) could be challenging, given that systems often change daily. We may also find that unintended administrative burdens are created by the Bill. For example, Clause 2(1) requires a detailed assessment even before a system is purchased, which may be unworkable, particularly for pilot projects that may not yet operate in test environments, as described in Clause 2(2)(b). These requirements could risk dampening exploration and innovation within the public sector.

Finally, we might suggest that in order to avoid potentially large amounts of bureaucracy, a more effective approach would be to require public bodies to have due regard for the five principles of artificial intelligence as evidenced in our White Paper, those five principles being: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. His Majesty’s Official Opposition do of course value the importance of automated algorithmic tools in the public sector.

Lord in Waiting/Government Whip (Lord Leong) (Lab)

My Lords, I thank the noble Lord, Lord Clement-Jones, for bringing the important issue of public sector algorithmic transparency for debate, both today and through the Data (Use and Access) Bill, and I thank the noble Earl, Lord Effingham, for his contribution.

The algorithmic transparency recording standard, or ATRS, is now mandatory for government departments. It is focused, first, on the 16 largest departments, including HMRC; some 85 arm’s-length bodies; and local authorities. It has also now been endorsed by the Welsh Government. While visible progress on enforcing this mandate was slow for some time, new records are now being added to the online repository at pace. The first batch of 14 was added in December and a second batch of 10 was added just last week. I am assured that many more will follow shortly.

The blueprint for modern digital government, as mentioned by the noble Lord, Lord Clement-Jones, was published on 21 January, promising explicitly to commit to transparency and accountability by building on the ATRS. The blueprint also makes it clear that part of the new Government Digital Service role will be to offer specialist assurance support, including a service to rigorously test models and products before release.

The Government share the desire of the noble Lord, Lord Clement-Jones, to see algorithmic tools used in the public sector safely and transparently, and they are taking active steps to ensure that that happens. I hope that reassures the noble Lord, and I look forward to continuing to engage with him on this important issue.

Lord Clement-Jones (LD)

My Lords, I thank the noble Earl for taking the trouble to read my Bill quite carefully. I shall obviously dispute various aspects of it with him in due course; however, I welcome the fact that he has taken the trouble to look at its provisions. I thank the Minister for his careful reply. I do not think that the Government are going far enough, but time will tell.

Bill passed and sent to the Commons.