Employment Rights Bill Debate
Lord Holmes of Richmond (Conservative - Life peer)
Department for Business and Trade
Lords Chamber

My Lords, Amendment 111ZA seeks to introduce a requirement for workplace AI risk and impact assessments. This amendment is focused on addressing the profound and rapidly evolving impact of artificial intelligence systems on the modern workplace. The adoption of AI brings many opportunities, but also serious risks. There is potentially massive job displacement: AI could displace 1 million to 3 million UK jobs overall. There are workplace skills gaps; more than half the UK workforce lacks essential digital skills, and the majority of the public has no AI education or training.
AI recruitment algorithms have resulted in race and sex discrimination. There are legal vulnerabilities. Companies risk facing costly lawsuits and settlements when unsuccessful job applicants claim unlawful discrimination by AI hiring systems. Meanwhile, AI adoption accelerates rapidly, and the UK’s regulatory framework is lagging behind.
Organisations such as the Trades Union Congress and the Institute for the Future of Work have consistently highlighted the critical need for robust regulation in this area. The TUC, through its artificial intelligence regulation and employment rights Bill, drafted with a multi-stakeholder task force, explicitly proposes workforce AI risk assessments and emphasises the need for worker consultation before AI systems are implemented. It also advocates for fundamental rights, such as a right to a human review for high-risk decisions. IFOW similarly calls for an accountability for algorithms Act that would mandate pre-emptive algorithmic impact assessments to identify and mitigate risks, ensuring greater transparency and accountability in the use of AI at work. Both organisations stress that existing frameworks are insufficient to protect workers from the potential harms of AI.
When I spoke to a similar amendment—Amendment 149—in Committee, the Minister acknowledged this and said:
“The Government are committed to working with trade unions, employers, workers and experts to examine what AI and new technologies mean for work, jobs and skills. We will promote best practice in safeguarding against the invasion of privacy through surveillance technology, spyware and discriminatory algorithmic decision-making … However, I assure the noble Lord, Lord Clement-Jones, that the Institute for the Future of Work will be welcome to make an input into that piece of work and the consultation that is going forward. I reassure the noble Baroness, Lady Bennett, and all noble Lords that this is an area that the Government are actively looking into, and we will consult on proposals in the make work pay plan in due course”.—[Official Report, 5/6/25; col. 878.]
This was all very reassuring, perhaps, but I have retabled this amendment precisely because we need more concrete specifics regarding this promised consultation.
The TUC and the IFOW have been working on this for four years. Is it too much to ask the Government to take a clear position on what is proposed now? The Minister referred to the importance of proper consultation. This is a crucial area affecting the fundamental rights and well-being of workers right now, often without their knowledge, and AI systems are increasingly being introduced into the workplace. The Government therefore need to provide clarity on what kind of consultation is being undertaken, whom they will engage beyond the obvious stakeholders, and the precise timescale for this consultation and any subsequent legislative action, particularly given the rapid introduction of AI into workplaces.
We cannot afford a wait-and-see approach. If comprehensive AI regulation cannot be addressed within this Bill as regards the workplace, we need an immediate and clear commitment to provision within dedicated AI legislation, perhaps coming down the track, to ensure that AI in the workplace truly benefits everyone. I beg to move.
My Lords, it is always a pleasure to follow my friend, the noble Lord, Lord Clement-Jones, who, in his single Nelsonian amendment, has covered a lot of the material in my more spread-out set of amendments. I support his Amendment 111ZA and will speak to my Amendments 168 to 176. I declare my interests in the register, particularly my technology interests, not least as a member of the advisory board of Endava plc and as a member of the technology and science advisory committee of the Crown Estate.
I will take one brief step backwards. From the outset, we have heard that the Government do not want to undertake cross-sector AI legislation and regulation. Rather, they want to take a domain-specific approach. That is fine; it is clearly the stated position, although it would not be my choice. But it is simultaneously interesting to ask how, if that choice is adopted, consistency across our economy and society is ensured so that, wherever an individual citizen comes up against AI, they can be assured of a consistent approach to the treatment of the challenges and opportunities of that AI. Similarly, what happens where there is no competent regulator or authority in that domain?
At the moment, largely, neither approach seems to be in operation. Whenever I and colleagues have raised amendments on AI in what we might call domain-specific areas, such as the Product Regulation and Metrology Bill, the data Bill and now the Employment Rights Bill, we are told, “This is not the legislation for AI”. I ask the Minister for clarity: if a cross-sector approach to AI is not being taken, is a domain-specific approach being taken, given that opportunities are not being taken up when appropriate legislation comes before your Lordships’ House?
I turn to the amendments in my name. Amendment 168 goes to the very heart of the issue around employers’ use of AI. Very good, if not excellent, principles were set out in the then Government’s White Paper of 2023. I have transposed many of these into my Amendment 168. Would it not be beneficial to have these principles set in statute for the benefit of workers, in this instance, wherever they come across employers deploying AI in their workplace?
Amendment 169 lifts a clause largely from my Artificial Intelligence (Regulation) Private Member’s Bill and suggests that an AI responsible officer in all organisations that develop, deploy and use AI would be a positive thing for workers, employees and employers alike. This should be seen not as a burdensome compliance or audit function but as a positive, vibrant, dynamic role, so that the benefits of AI could be felt by workers right across their employment experience. It would be proportionate and right-touch, with reporting requirements easily recognised as mirroring similar obligations under the Companies Act. If we had AI responsible officers across our economy, in businesses and organisations deploying and using AI right now, that would benefit workers, employees, employers, our economy and wider society.
Amendment 170 goes to the issue of IP, copyright and labelling. It would put a responsibility on workers who are using AI to report to the relevant government department on the genesis of the IP and copyrighted material, and the data, used in that AI deployment. By this means there would be clarity not only on where that IP, copyright and data had emanated from but also that it had been obtained through informed consent and that all IP and copyright obligations had been respected and adhered to.
Amendments 171 and 172 similarly look at where workers’ data may be ingested right now through employers’ use of AI. Such data is a rich, useful and economically beneficial resource for employers and businesses. Amendment 171 simply suggests that there should be informed consent from those workers before any of their data can be used, ingested and deployed.
I would like to take a little time on Amendment 174, around the whole area of AI in recruitment and employment. This goes back to one of my points at the beginning of this speech: for recruitment, there currently exists no competent authority or regulator. If the Government continue with their domain-specific approach, recruitment remains a gap, because there is no domain-specific competent authority or regulator that could be held responsible for the deployment and development of AI in that sector. If, for example, somebody finds themselves not making a shortlist, they may not know that AI has been involved in making that decision. Even if they were aware, they would find themselves with no redress and no competent authority to take their claim to.