Employment: Artificial Intelligence

(asked on 24th October 2023)

Question to the Department for Science, Innovation & Technology:

To ask His Majesty's Government whether the AI Safety Summit at Bletchley Park will address risks to workers posed by the spread of AI in the workplace, including regarding decisions on hire and fire, intrusive surveillance and potential increased discrimination; what plans they have to consider regulatory protections in the UK similar to the EU Artificial Intelligence Act; how they plan to ensure that workers' voice, concerns and ideas for solutions will be heard at the summit; and what plans they have to invite a representative from the Trade Union Congress to the summit.


Answered by
Viscount Camrose
Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)
This question was answered on 26th October 2023

The AI Safety Summit will focus on frontier risks posed by the most advanced AI systems. It will bring together key countries, as well as leading technology companies, academia and civil society, to drive targeted, rapid international action on safety at the frontier of this technology. The Summit will seek to advance international collaboration to understand, identify and mitigate frontier AI risks.

While the Summit is not focused on AI's impact on the labour market and workers' rights, the wider societal risks that AI, both frontier and non-frontier, can pose are issues the UK government takes extremely seriously at the highest levels. We are grateful for the engagement we have had to date from trade union representatives and the analysis they have shared, and we look forward to continuing that engagement after the Summit.

On regulatory protections, the AI regulation white paper was published in March 2023. It set out five high-level principles that regulators should consider when thinking about AI: Safety and Security; Appropriate Transparency; Fairness; Accountability and Governance; and Contestability and Redress. On fairness, the white paper set out that AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals, or create unfair market outcomes. The white paper proposed that regulators will need to ensure that AI systems in their domain are designed, deployed and used in line with descriptions of fairness that regulators have developed for their remits. We expect that the implementation of fairness by existing regulators will be underpinned by existing law that protects against discrimination, such as the Equality Act 2010 and the Human Rights Act 1998, as well as data protection, consumer and competition law. Where AI might challenge someone's human rights in the workplace, the UK has a strong system of legislation and enforcement of these protections, using both state and individual enforcement through specialist labour tribunals.

The UK notes the EU Artificial Intelligence Act with interest and highlights the importance of international cooperation and interoperability across AI governance approaches to ensure a global approach to responsible AI. While the EU is taking a statutory approach to AI regulation, the UK will closely monitor the impact of existing regulation on the wider ecosystem and consider whether further interventions are needed. We believe this approach strikes the right balance between responding to risks and maximising the opportunities afforded by AI.
