Artificial Intelligence (Regulation and Workers’ Rights)

Wednesday 17th May 2023


Commons Chamber
Motion for leave to bring in a Bill (Standing Order No. 23)
14:20
Mick Whitley (Birkenhead) (Lab)

I beg to move,

That leave be given to bring in a Bill to regulate the use of artificial intelligence technologies in the workplace; to make provision about workers’ and trade union rights in relation to the use of artificial intelligence technologies; and for connected purposes.

The pursuit of fairness, dignity and security in work is the founding purpose of the labour movement, from its earliest beginnings amid the turmoil of the first industrial revolution to the present day. As Sir Patrick Vallance warned when he appeared before the Science, Innovation and Technology Committee on 3 May, we are now living through an AI revolution that will be every bit as far-reaching as the first industrial revolution. Alongside the climate crisis, it will create the most profound social changes of our lifetimes.

For too long, the rapid advances in artificial intelligence have gone unremarked upon by policymakers, but progress in this field is now gaining such momentum that it is impossible to ignore. The pace of change is exceeding even the expectations of AI’s most enthusiastic supporters. Technologies that experts only recently speculated were a decade away from fruition are now a reality. It is time that our laws caught up.

This technological revolution will impact every aspect of our society. The potential of AI to be used by malign actors to disseminate dangerous misinformation has serious implications for our national security. We also need to consider how to protect our constituents from fraud—the most commonly reported crime in the UK—when artificial intelligence can imitate banks and even loved ones with increasing sophistication.

The rise of AI will force us to rethink our long-held assumptions about the labour market. Research commissioned by the then Department for Business, Energy and Industrial Strategy suggests that 7% of all British jobs could be automated out of existence within just five years because of AI, rising to 30% within the next 20 years.

To ready ourselves for a world where machines can increasingly do the jobs of humans at a fraction of the price, we need to be prepared to break with old orthodoxies. That must mean considering the role that universal basic income has to play in a labour market that will see jobs becoming scarcer, as well as the necessity of investing in lifelong education and training in a world where few people can count on having a job for life.

The Bill I am introducing to the House today does not attempt to address all the issues relating to the uses and misuses of AI. I do not believe that Parliament is capable of even beginning to do that yet. If we are going to make sure that AI works in all our interests, we need to see genuine collaboration between government and civil society, including the trade unions and the communities that we represent, and the fostering of an environment in which everyone’s voices and interests can be heard.

The central purpose of the Bill is simple: it seeks to protect the rights of those who are working alongside AI in their shops, offices, factories and services, and to preserve those rights for generations to come. Fundamentally, it is about recognising the importance of people in a world increasingly run by machines.

Artificial intelligence is already transforming many people’s experience at work. A growing number of employers are incorporating AI-powered technologies into their workplaces, often without their workers being consulted or even informed. According to Government statistics, 68% of large companies in the UK and 15% of all British businesses had adopted at least one form of AI by January 2022. With these technologies becoming more sophisticated and readily available, that number is set to soar.

The TUC’s AI working group has been at the forefront of exploring the implications of using AI in the workplace. Its report into the worker experience of AI is perhaps the most comprehensive and insightful study into the impact of artificial intelligence on workers ever conducted in this country. It provides a valuable insight into how AI-powered technologies are increasingly being used to monitor, evaluate and manage workers.

The report highlights how the responsibilities of human managers are increasingly being taken over by AI. Such technologies may be technically capable of performing management functions, but they lack a human’s capacity for empathy, understanding that every person is different, and ability to contextualise behaviour. The TUC report shows that all of this is too often happening without any meaningful consultation with, or input from, the workers themselves. As a result, workers are increasingly forced to navigate workplaces in which their autonomy and privacy are eroded, the distinction between home and work life is blurred, and substantial quantities of their data are stored and used with little or no transparency.

This technological revolution also poses a profound challenge to the hard-fought-for right to equality at work. Employers are relying ever more frequently on AI to make decisions about hiring and employee performance. The problem is that these technologies can all too often perpetuate very human prejudices. In one of the most high-profile cases, Amazon was forced to scrap an AI tool used to sift through job applicants’ CVs after the tool learned that the majority of previous hires in that disproportionately male industry were men and taught itself to downgrade applications from women. In another case highlighted by the TUC, disabled job applicants felt that they had been unfairly discriminated against by an AI on the basis of their voice and facial expressions.

I want to make it clear that I am not opposed to artificial intelligence. I recognise that AI has awesome potential to improve our lives, from creating better health outcomes to driving economic growth, and I also believe that, when applied correctly, it can make our working lives easier and more fulfilling. If I am a Luddite, it is only in the truest meaning of that often-misunderstood word: I believe that we need to engage critically with, rather than blindly accept, the technology that surrounds us.

I also believe that in the workplace, as in wider society, we must guarantee that artificial intelligence works in the interests of the many, not the few. That is why, for the first time, the Bill would give workers specific protections to guard against the harmful application of AI.

I regret that I have been unable to print the Bill in time for the debate, but it will shortly be available for all Members to consider. I want to be clear: I do not intend this Bill to gather dust in the House of Commons Library. While I am realistic about its chances of becoming law, I hope that it can at least begin a much-needed conversation in this place about the steps that we need to take to better protect workers.

In the meantime, I will highlight some of its key provisions. My Bill is rooted in three key principles: first, that everyone should be free from discrimination in the workplace; secondly, that workers have the right to have a say in the big decisions that affect them; and finally, that we all have a right to understand how our data is being used at work.

Drawing on the recommendations of the TUC manifesto, “Dignity at Work and the AI Revolution”, the Bill establishes that “high-risk” uses of AI should be targeted for further regulation and requires the Secretary of State to produce sector-specific guidance on the meaning of high-risk AI, with full input from trade unions and civil society. The Bill would ensure that workers themselves can shape a world that is ever more frequently run by machines, by introducing a statutory duty for employers to consult meaningfully with employees and their trade unions before introducing AI into the workplace.

The Bill would strengthen existing equalities law to prevent discrimination by algorithm. This includes amending the Data Protection Act 2018 to explicitly state that discriminatory data processing is always unlawful; amending the Employment Rights Act 1996 to create a statutory right, enforceable in employment tribunals, that workers should not be subject to detrimental treatment as a result of the processing of inaccurate data; reversing the burden of proof in discrimination claims that challenge decisions made by AI; and making equality impact audits a mandatory part of the data protection impact assessment, which employers would also be obliged to publish.

The Bill would establish a universal and comprehensive right to human review of high-risk decisions made by AI, as well as a right to human contact when high-risk decisions are being made. Finally, it would protect workers from intrusion into their private lives by establishing a right for them to disconnect, a cause that I know is also being championed by my friends on the Front Bench. It would also require the Government to publish statutory guidance for employers on how both article 8 of the European convention on human rights and data protection law should be applied in the workplace, so that employers have enough clarity about the steps they need to take to protect the privacy and work-life balance of their employees.

In short, the Bill seeks to forge a people-focused and rights-based approach which will guarantee that workers are protected in all decisions made by employers and the Government.

Question put and agreed to.

Ordered,

That Mick Whitley, Kim Johnson, Jon Trickett, Kate Hollern, Ian Byrne, Mike Amesbury, John McDonnell, Ian Mearns, Richard Burgon, Zarah Sultana, Rebecca Long Bailey and Andy McDonald present the Bill.

Mick Whitley accordingly presented the Bill.

Bill read the First time; to be read a Second time on Friday 24 November, and to be printed (Bill 309).