Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL] Debate
Lord Clement-Jones (Liberal Democrat, Life peer)
Lords Chamber

My Lords, I declare my AI interests as set out in the register. I thank Big Brother Watch, the Public Law Project and the Ada Lovelace Institute, which have, each in their own way, provided the evidence that underpins my resolve to ensure that we regulate the adoption of algorithmic and AI tools in the public sector. These tools are increasingly being used to make and support many of the highest-impact decisions affecting individuals, families and communities, across healthcare, welfare, education, policing, immigration and many other sensitive areas of an individual’s life. I also thank the Public Bill Office, the Library and other members of staff for all their assistance in bringing this Bill forward and communicating its intent and contents, and I thank all noble Lords who have taken the trouble to come to take part in this debate this afternoon.
The speed and volume of decision-making that new technologies will deliver are unprecedented. They have the potential to offer significant benefits, including improved efficiency and cost-effectiveness in government operations, enhanced service delivery and resource allocation, better prediction and support for vulnerable people, and increased transparency in public engagement. However, the rapid adoption of AI in the public sector also presents significant risks and challenges: the potential for unfairness, discrimination and misuse through algorithmic bias; the need for human oversight; a lack of transparency and accountability in automated decision-making processes; and privacy and data protection concerns.
Incidents such as the 2020 A-level and GCSE grading fiasco have starkly illustrated the dangers of unchecked algorithmic systems in public administration. When exams were cancelled because of Covid-19, an algorithm was used to estimate grades, and students, particularly those from lower-income areas, unfairly missed out on university places. That led to widespread public outcry and a loss of trust in government use of technology.
Big Brother Watch’s investigations have revealed that councils across the UK are conducting mass profiling and citizen scoring of welfare and social care recipients. Its report, Poverty Panopticon: The Hidden Algorithms Shaping Britain’s Welfare State, uncovered alarming statistics. Some 540,000 benefits applicants are secretly assigned fraud risk scores by councils’ algorithms before they can access housing benefit or council tax support. Personal data from 1.6 million people living in social housing is processed by commercial algorithms to predict rent non-payment. Over 250,000 people’s data is processed by secretive automated tools to predict the likelihood of abuse, homelessness or unemployment.
Big Brother Watch criticises the nature of these algorithms, stating that most are secretive, unevidenced, incredibly invasive and likely discriminatory. It argues that these tools are being used without residents’ knowledge, effectively creating tools of automated suspicion. The organisation rightly expressed deep concern that these risk-scoring algorithms could be disadvantaging and discriminating against Britain’s poor. It warns of potential violations of privacy and equality rights, drawing parallels to controversial systems like the Metropolitan Police’s gangs matrix database, which was found to be operating unlawfully. From a series of freedom of information requests last June, Big Brother Watch found that a flawed DWP algorithm wrongly flagged 200,000 housing benefit claimants for possible fraud and error, which meant that thousands of UK households every month had their housing benefit claims unnecessarily investigated.
In August 2020, the Home Office agreed to stop using an algorithm to help sort visa applications after it was discovered that the algorithm contained entrenched racism and bias, and following a challenge from the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove. The algorithm essentially created a three-tier system for immigration, with a speedy boarding lane for white people from the countries most favoured by the system. Privacy International has raised concerns about the Home Office’s use of a current tool called Identify and Prioritise Immigration Cases—IPIC—which uses personal data, including biometric and criminal records, to prioritise deportation cases, arguing that it lacks transparency and may encourage officials to accept recommended decisions without proper scrutiny.
Automated decision-making has been proven to lead to harms in privacy and equality contexts, such as in the Harm Assessment Risk Tool, which was used by Durham Police until 2021, and which predicted reoffending risks partly based on an individual’s postcode in order to inform charging decisions. All these cases illustrate how ADM can perpetuate discrimination. The Horizon saga illustrates how difficult it is to secure proper redress once the computer says no.
There is no doubt that our new Government are enthusiastic about the adoption of AI in the public sector. Both the DSIT Secretary of State and Feryal Clark, the AI Minister, are on the record in support of the adoption of AI in public services. They have ambitious plans to use AI and other technologies to transform public service delivery. Peter Kyle has said:
“We’re putting AI at the heart of the government’s agenda to boost growth and improve our public services”,
and
“bringing together digital, data and technology experts from across Government under one roof, my Department will drive forward the transformation of the state”.—[Official Report, Commons, 2/9/24; col. 89.]
Feryal Clark has emphasised the Administration’s desire to “completely transform digital Government” through DSIT. As the Government continue to adopt AI technologies, it is crucial to balance the potential benefits with the need for responsible and ethical implementation to ensure fairness, transparency and public trust.
The Ada Lovelace Institute warns of the unintended consequences of AI in the public sector, including the risk of entrenching existing practices, instead of fostering innovation and systemic solutions. As it says, the safeguards around automated decision-making, which exist only in data protection law, are therefore more critical than ever in ensuring people understand when a significant decision about them is being automated, why that decision is made, and have routes to challenge it, or ask for it to be decided by a human.
Our citizens need greater, not less, protection, but rather than accepting the need for these, we see the Government following in the footsteps of their predecessor by watering down such rights as there are under GDPR Article 22 not to be subject to automated decision-making. We will, of course, be discussing these aspects of the Data (Use and Access) Bill in Committee next week.
ADM safeguards are critical to public trust in AI, but progress has been glacial. Take the Algorithmic Transparency Recording Standard, created in 2022 and intended to offer a consistent framework for public bodies to publish details of the algorithms used in making these decisions. Six records were published at launch, and only three more seem to have been published since then. The previous Government announced earlier this year that implementation of the Algorithmic Transparency Recording Standard would be mandatory for departments. Minister Clark in the new Government has said,
“multiple records are expected to be published soon”,
but when will this be consistent across government departments? What teeth do the Central Digital and Data Office and the Responsible Technology Adoption Unit, now both within DSIT, have to ensure the adoption of the standard, especially in view of the planned watering down of the Article 22 GDPR safeguards? Where is the promised repository for ATRS records? What about the other public services in local government too?
The Public Law Project, which maintains a register called Tracking Automated Government, believes that in October last year there were more than 55 examples of ADM systems in use by public bodies. Where is the transparency on those? The fact is that the Government’s Algorithmic Transparency Recording Standard, while a step in the right direction, remains voluntary and lacks comprehensive adoption, a compliance mechanism or any opportunity for redress. The current regulatory landscape is clearly inadequate to address these challenges. Despite the existing guidance and frameworks, there is no legally enforceable obligation on public authorities to be transparent about their use of ADM and algorithmic systems, or to rigorously assess their impact.
To address these challenges, several measures are needed. We need to see the creation of and adherence to ethical guidelines and accountability mechanisms for AI implementation; a clear regulatory framework and standards for use in the public sector; increased transparency and explainability in the adoption and use of AI systems; investment in AI education; and workforce development for public sector employees. We also need to see the right of redress, with a strengthened right for individuals to challenge automated decisions.
My Bill aims to establish a clear mandatory framework for the responsible use of algorithmic and automated decision-making systems in the public sector. It will help to prevent the embedding of bias and discrimination in administrative decision-making, protect individual rights and foster public trust in government use of new technologies.
I will not adumbrate all the elements of the Bill. In an era when AI and algorithmic systems are becoming increasingly central to government ambitions for greater productivity and public service delivery, this Bill, I hope noble Lords agree, is crucial to ensuring that the benefits of these technologies are realised while safeguarding democratic values and individual rights. By ensuring that ADM systems are used responsibly and ethically, the Bill facilitates their role in improving public service delivery, making government operations more efficient and responsive.
The Bill is not merely a response to past failures but a proactive measure to guide the future use of technology within government and empower our citizens in the face of these powerful new technologies. I hope that the House and the Government will agree that this is the way forward. I beg to move.
My Lords, I thank the Minister for her response and all noble Lords who have taken part in this debate, which I thought was perfectly formed and very expert. I was interested in the fact that the noble Baroness, Lady Lane-Fox, has a role in the digital centre for government and in what she had to say about what might be desirable going forward, particularly in the areas of skills and procurement. The noble Baroness, Lady Freeman, said much the same, which indicates something to me.
By the way, I think the Minister has given new meaning to the word “reservations”. That was the most tactful speech I have heard for a long time. It is a dangerous confidence if the Government really think that the ATRS, combined with the watered-down ADM provisions in the GDPR, will be enough. They will reap the whirlwind if they are not careful, with public trust being eroded. We have seen what happened in the NHS: unless you are absolutely on the case on this, you will see 3.3 million people opt out of sharing their data. This is something live; it erupts without due warning.
The examples I gave show a pretty dangerous use of ADM systems. Big Brother Watch has gone into some detail on the particular models that I illustrated. If the Government think that the ATRS is adequate, alongside their watered-down GDPR provisions, then, as I said, they are heading for considerable problems.
As the noble Lord, Lord Knight, can see, if the Government have reservations about my limited Bill, they will have even more reservations about anything broader.
I do not want to tread on the toes of the noble Lord, Lord Holmes, who I am sure will come back with another Bill at some stage, but I am very sympathetic to the need for algorithmic impact assessment, particularly in the workplace, as advocated by the Institute for the Future of Work. We may be inflicting more amendments on the Minister when the time comes in the ADM Bill.
This Bill is, as the noble Baroness, Lady Lane-Fox, mentioned, based on the Canadian experience. It is based on a Canadian directive that is now well under way and is perfectly practical.
The warning of the noble Lord, Lord Tarassenko, about the use of large language models, with their unpredictability and inability to produce the same result twice, was an object lesson in the need for proper understanding and training within the Civil Service in the future. It also pointed to the need to develop open source-type LLMs on the back of the existing large language models that are out there, to make sure that they are properly trained and tested as a sovereign capability.
It is clear that I am not going to get a great deal further. I am worried that we are going to see a continuation, in the phrase used by my noble friend Lady Hamwee, of the culture of deference: the machine is going to continue saying no and our citizens will continue to be unable to challenge decisions in an effective way. That will lead to further trouble.
I thank the noble Viscount, Lord Camrose, for his in-principle support. If the Bill is to have a Committee stage, I look forward to debating some of the definitions. In the meantime, I commend the Bill to the House.