Artificial Intelligence and the Labour Market Debate

Department: Department for Business and Trade

Damian Collins Excerpts

Wednesday 26th April 2023

Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.

Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.

This information is provided by Parallel Parliament and does not form part of the official record

Damian Collins (Folkestone and Hythe) (Con)

It is a pleasure to serve under your chairship this afternoon, Dame Maria, and I congratulate the hon. Member for Birkenhead (Mick Whitley), both on securing this very important debate and on his excellent speech.

Artificial intelligence is an enabling technology. It is driving the digital age, but it is based on data points that are gathered by computer systems and processed in order to make decisions. It still requires a huge amount of human intervention in determining what data will be drawn on and therefore what decisions should be made. Consequently, there has to be a level of human responsibility as well.

We can see already from the development of AI that it is not just a question of computer systems learning from existing patterns of behaviour; they are also effectively thinking for themselves. The development of AI in chess is a good example. AI systems are not only learning to make the moves that a human would make, always selecting the perfect combination and therefore being much more successful; when given the command to win the game, they have also developed ways of playing that are unique, that the human mind has not thought of or popularised, and that are yet more efficient at winning. That is very interesting for those interested in chess. Perhaps not everyone is interested in chess, but that shows the power of AI to make autonomous decisions, based on the data and information it is given. Humans invented the game of chess, but AI can learn to play it in ways not thought of by humans.

The application of AI in the defence space is even more scary, as touched on by the hon. Member for Birkenhead. AI-enabled weapons systems can be aggressive, make decisions quickly and behave in unpredictable ways. The human strategist is not able to keep pace with them and we would require AI-driven defence systems to protect ourselves from them. It would be alarming to live in a world where aggressive technology driven by AI can be combatted only by AI, with no human intervention in the process. It is scary to think of a security situation, like the Cuban missile crisis in the 1960s, where the strategies are pursued solely by AI. Therefore, we will have to think as we do in other areas of warfare, where we have bans on certain types of chemical weapons. There are certain systems that are considered so potentially devastating that they will not be used—there are moratoriums on their use and deployment. When thinking about AI in the defence space, we may well have to consider what security to build into it as well. We also need to think about the responsibility of companies that develop AI systems just for their commercial interests. What responsibility lies on them for the systems that they have created?

The hon. Gentleman was right to say that this is like an industrial revolution. With industrial revolutions comes great change. People’s ways of living and working can be disrupted, and they are replaced by something new. We cannot yet say with certainty what that something new could be. There are concerns, which I will come to in a moment, about the regulation of AI. There could be amazing opportunities, too. One can imagine working or classroom environments where children could visit historical events. I asked someone who works in education development how long it could take before children studying the second world war could put on a headset, sit in a virtual House of Commons and watch Winston Churchill deliver one of his famous speeches, as if they were actually sitting there. We are talking about that sort of technology being possible within the next decade.

The applications for learning are immense. Astronauts who practise going to the International Space Station do so in metaverse-style, AI-driven virtual spaces, where they can train. At the same time as we think about the good things that it can do, we should also consider the fact that very bad spaces could be created. In our debates on the Online Safety Bill, we have been concerned about abusive online behaviour. What if such abusive behaviour took place in a video chatroom, a virtual space, that looks just as real as this room? Who would be responsible for that?

It is incumbent on the companies that develop these new technologies and systems to take responsibility for the output of those systems. The onus should be on the companies to demonstrate that what they are developing is safe. That is why my right hon. Friend the Chancellor of the Exchequer was right to set out in the Budget statement last year that the Government would fund a new AI sandbox. We have seen AI sandboxes developed in the EU. In Washington state in the United States, AI sandboxes are used to research new facial recognition technologies, a particularly sensitive area. The onus should be on the developer. The role of the regulator should be to say, “There are certain guidelines you work within, and certain things we might consider unsafe or unethical. You develop your technologies and new systems and put them through a sandbox trial. You make it easy for the regulator to ask about the data you are drawing from, the decisions the system you have put in place is making, the outcomes it is creating and whether they are safe.”

We have already seen that behaviour learned from data can create unfair biases in systems. There was a case where Amazon used AI to sift through CVs for recruitment. The AI learned that it was largely men who had been hired for the roles, and therefore discarded the CVs of women applying for the position, because it assumed they would not be qualified. We should be concerned about biases built into data systems being exacerbated by AI.

Some people talk about AI as if it is a future technology—something coming—but it exists today. Every one of us experiences or interacts with AI in some way. The most obvious way for a lot of people is through the use of apps. The business model of social media apps is driven by recommendation, which is an AI-driven system. The system—Facebook, TikTok, Instagram or whatever it is—is data profiling the user and recommending content to keep them engaged, based on data, and it is AI driving those recommendation tools.

We have to be concerned about whether those systems create unfair practices and behaviours in the workplace. That is why the hon. Member for Birkenhead is right to raise this issue. If a gig economy worker—a taxi driver or a delivery courier—is paid only when they are in receipt of jobs on the app, does the app create a false incentive for them to be available for work all the time? Do they have to commit to being available to the app for most of the day, because if they do not it drives the work to people who have high recommendation scores because they are always available? Do people who cannot make themselves available all the time find that the amount they can earn is much less, if they do not get paid for waiting time when they use such apps? If that becomes the principal way in which a lot of tasks are allocated, AI systems, which are built to be efficient and make it easy for people to access the labour market, could create biases that favour some workers over others. People with other jobs or family commitments, in particular, might not be able to make themselves available.

We should consider not just the way the technology works but the rights that citizens and workers have if their job is based on using those apps. The employer—the app developer—should treat the people who work for them as employees, rather than as just freelance agency workers who happen to be available at any particular time of the day. They have some sort of working relationship that should be honoured and respected.

The basic principle that we should apply when we think about the future of AI and its enormous potential to create growth and new jobs, and build fantastic new businesses, is that the rights that people enjoy today—their rights as citizens and employees—should be translated into the future world of technology. A worker should not lose their working rights simply because their relationship with their employer or their customer is through an app, and because that experience is shaped by the collection and processing of data. Ultimately, someone is doing that processing, and someone has created that system in order to make money from it. The people doing that need to be responsible for the technology they have created.

--- Later in debate ---
Richard Thomson (Gordon) (SNP)

It is a pleasure to serve under your chairship this afternoon, Dame Maria, and to take part in this particularly timely debate. I congratulate the hon. Member for Birkenhead (Mick Whitley) on securing it.

I begin by declaring a rather tenuous interest—a constituency interest of sorts—regarding the computing pioneer Alan Turing. The Turing family held the baronetcy of Foveran, which is a parish in my constituency between the north of Aberdeen and Ellon. Although there is no evidence that Alan Turing ever actually visited, it is a connection that the area clings to as tightly as it can.

Alan Turing, of course, developed what we now know as the Turing test—a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. One of the developments to come closest to that in recent times is, of course, ChatGPT, which several speakers have mentioned already. It is a natural-language processing tool driven by AI technology, which has the ability to generate text and interact with humans.

The hon. Member for Birkenhead was a bit braver than I was; I only toyed with the idea of using ChatGPT to produce some of my speech today. However, I was put off somewhat by a very good friend of mine, with an IT background, using the ChatGPT interface to produce a biography of me. He then shared it with his friendship group on Facebook.

I think it is fair to say that it clearly showed that if ChatGPT does not know the answer to something, it will fill the gap by making up something that it thinks will sound plausible. In that sense, it is maybe no different from your average Cabinet Minister. However, it does mean that, in subject areas where the data on which it is drawing is rather scant, things can get quite interesting and inventive.

Damian Collins

The hon. Gentleman makes an incredibly important point. When AI systems such as that are asked questions that they do not know the answer to, rather than responding, “I don’t know,” they just make something up. A human is therefore required to understand whether what they are being shown is correct. The hon. Gentleman knows his own biography better than ChatGPT does, but someone else may not.

Richard Thomson

I thank the hon. Member for that intervention. He has perhaps read ahead towards the conclusion of my speech, but it is an interesting dichotomy. Obviously, I know my biography best, but there are people out there, not in the AI world—Wikipedia editors, for example—who think that they know my biography better than I do in some respects.

However, to give an example, the biography generated by AI said that I had been a director at the Scottish Environment Protection Agency and, prior to that, a senior manager at the National Trust for Scotland. I had also apparently served in the Royal Air Force. None of that is true, but, on one level, it does make me want to meet this other Richard Thomson who exists out there. He has clearly had a far more interesting life than I have had to date.

Although that level of misinformation is relatively benign, it does show the dangers that can be presented by the manipulation of the information space, and I think that the increasing use and application of AI raises some significant and challenging ethical questions.

Any computing system is based on the premise of input, process and output. Therefore, great confidence is needed when it comes to the quality of information that goes in—on which the outputs are based—as well as the algorithms used to extrapolate from that information to create the output, the purpose for which the output is then used, the impact it goes on to have, and, indeed, the level of human oversight at the end.

In March, Goldman Sachs published a report indicating that AI could replace up to 300 million full-time equivalent jobs and a quarter of all work tasks in the US and Europe. It found that some 46% of administrative tasks, and even 44% of tasks in the legal profession, could be automated. GPT-4 recently managed to pass the US Bar exam, which is perhaps less a sign of machine intelligence than of the fact that the US Bar exam is not a fantastic test of AI capabilities—although I am sure it is a fantastic test of lawyers in the States.

Our fear of disruptive technologies is age-old. Although such disruption has generally created new jobs and allowed new technologies to take on the more laborious and repetitive tasks, it is still extremely disruptive. Some 60% of workers are currently in occupations that did not exist in 1940, but there is a real danger, as there has been with other technologies, that AI depresses wages and displaces people faster than new jobs can be created. That ought to be of real concern to us.

In terms of ethical considerations, there are large questions to be asked about the provenance of datasets and the output to which they can lead. As The Guardian reported recently:

“The…datasets used to train the latest generation of these AI systems, like those behind ChatGPT and Stable Diffusion, are likely to contain billions of images scraped from the internet, millions of pirated ebooks”

as well as all sorts of content created by others who receive no reward for its use; the entire proceedings of 16 years of the European Parliament; or even the entirety of the proceedings that have ever taken place, and been recorded and digitised, in this place. The datasets can be drawn from a range of sources, and they do not necessarily lead to balanced outputs.

ChatGPT has been banned from operating in Italy after the data protection regulator there expressed concerns that there was no legal basis to justify the collection and mass storage of the personal data needed to train GPT AI. Earlier this month, the Canadian privacy commissioner followed, with an investigation into OpenAI in response to a complaint that alleged that the collection, use and disclosure of personal information was happening without consent.

This technology raises huge ethical issues not just in the workplace but right across society, and questions particularly need to be asked when it comes to the workplace. For example, does it entrench existing inequalities? Does it create new inequalities? Does it treat people fairly? Does it respect the individual and their privacy? Is it used in a way that makes people more productive by helping them to be better at their jobs and work smarter, rather than simply forcing them—notionally, at least—to work harder? How can we be assured that, at the end of it, a sentient, qualified, empowered person has proper oversight of the use to which the AI processes are being put? Finally, how can it be regulated as it needs to be—beneficially, in the interests of all?

The hon. Member for Birkenhead spoke about and distributed the TUC document “Dignity at work and the AI revolution”, which, from the short amount of time I have had to scrutinise it, looks like an excellent publication. There is certainly nothing in its recommendations that anyone should not be able to endorse when the time comes.

I conclude on a general point: as processes get smarter, we collectively need to make sure that, as a species, we do not consequently get dumber. Advances in artificial intelligence and information processing do not take away the need for people to be able to process, understand, analyse and critically evaluate information for themselves.

--- Later in debate ---
Kevin Hollinrake

I have not actually posed that question, but perhaps I could later.

This is an important debate, and it is important that we look at the issue strategically. The Government and the Labour party probably have different approaches: the Labour party’s natural position on this kind of stuff is to regulate everything as much as possible, whereas we believe that free markets have had a tremendous effect on people’s lives right across the planet. Whether we look at education, tackling poverty or child mortality, many of the benefits in our society over the last 100 years have been delivered through the free market.

Our natural inclination is to support innovation but to be careful about its introduction and to look to mitigate any of its damaging effects, and that is what is set out in the national AI strategy. As we have seen, AI has the potential to become one of the most significant innovations in history—a technology like the steam engine, electricity or the internet. Indeed, my hon. Friend the Member for Folkestone and Hythe (Damian Collins) said exactly that: this is like a new industrial revolution, and I think it is a very exciting opportunity for the future. However, we also have key concerns, which have been highlighted by hon. Members today. Although the Government believe in the growth potential of these technologies, we also want to be clear that growth cannot come at the expense of the rights and protections of working people.

Only now, as the technology rapidly improves, are most of us beginning to understand the transformative potential of AI. However, the technology is already delivering fantastic social and economic benefits for real people. The UK’s tech sector is home to a third of Europe’s AI companies, and the UK AI sector is worth more than £15.6 billion. The UK is third in the world for AI investment, behind the US and China, and attracts twice as much venture capital investment as France and Germany combined. As impressive as they are, those statistics should be put into the context of the sector’s growth potential. Recent research predicts that the use of AI by UK businesses will more than double in the next 20 years, with more than 1.3 million UK businesses using AI by 2040.

The Government have been supporting the ethical adoption of AI technologies, with more than £2.5 billion of investment since 2015. We recently announced £100 million for the Foundation Models Taskforce to help build and adopt the next generation of safe AI, £110 million for our AI tech missions fund and £900 million to establish new supercomputer capabilities. These exascale computers were mentioned in the Budget by my right hon. Friend the Chancellor. These developments have incredible potential to bring forward new forms of clean energy, and indeed new materials that can deliver that clean energy, and to accelerate things such as medical treatment. There are exciting opportunities ahead.

If we want to become an AI superpower, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments. Our approach, laid out in the AI White Paper, is designed to be flexible. We are ensuring that we have a proportionate, pro-innovation regulatory regime for AI in the UK, which will build on the existing expertise of our world-leading sectoral regulators.

Our regulatory regime will function by articulating five key principles, which are absolutely key to this debate and tackle many of the points that have been made by hon. Members across the Chamber. Regulators should follow these five principles when regulating AI in their sectors: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. That feeds into the important points made by my hon. Friend the Member for Watford (Dean Russell), who held this ministerial position immediately before me, about deception, scams and fraud. We can all see the potential for that, of course.

Clearly, right across the piece, we have regulators with responsibility in those five areas. Those regulators are there to regulate bona fide companies, which should do the right thing, although we have to make sure that they do. For instance, if somebody held a database with inappropriate data on it, the Information Commissioner’s Office could easily look at that, and it has significant financial penalties at its disposal, such as 4% of global turnover or a £17 million fine. My hon. Friend the Member for Watford made a plea for a Turing clause, which I am, of course, very happy to look at. I think he was referring to organisations that might not be bona fide, and might actually be looking to undertake nefarious activities in this area. I do not think we can regulate those people very effectively, because they are not going to comply with anybody’s regulations. The only way to deal with those people is to find them, catch them, prosecute them and lock them up.

Damian Collins

The Minister talks about safety, but does he agree that that has to be safety by design, and not just having response mechanisms built into the system so that a victim can appeal? I know he has looked at fraud a lot in the past, and there is a presumption that all will be done to combat fraud at its known source, rather than just providing redress to victims.

Kevin Hollinrake

That is absolutely right. We will not deal with everything in the world of AI in this respect, but there needs to be overarching responsibility for preventing fraud. That is something we have committed to bringing forward in another legislative vehicle—the Economic Crime and Corporate Transparency Bill, which is passing through Parliament now—but I agree with my hon. Friend that there should be a responsibility on organisations to prevent fraud and not simply deal with the after-effects.

Our proposed framework is aligned with and supplemented by a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. The Centre for Data Ethics and Innovation published its AI assurance road map in December 2021, and the AI Standards Hub—a world-leading collaboration led by the Alan Turing Institute with the National Physical Laboratory and the British Standards Institution—launched last October. The hub is intended to provide a co-ordinated contribution to standards development on issues such as transparency, security and uncertainty, with a view to helping organisations to demonstrate that AI is used safely and responsibly.

We are taking action to ensure that households, public services and businesses can trust this technology. Unless we build public trust, we will miss out on many of the benefits on offer. The reality is that AI, as with other general-purpose technologies, has the potential to be a net creator of jobs. I fully understand the points raised by the hon. Member for Birkenhead—of course, we do not want to see swathes of people put out of work because of this technology. I hasten to add that that has never been the case with other technologies. There have been many concerns over the ages about how new technologies will affect jobs, but they tend to create other jobs in different sectors. The World Economic Forum estimates that robotics, automation and artificial intelligence will displace 85 million jobs globally by 2025, but create 97 million new jobs in different sectors, which I will discuss in a second. I think the hon. Member for Birkenhead asked in his speech whether I would be willing to meet him to discuss these points; I am always very happy to do that, if we can convene at another time.

The hon. Member also raised the point about how AI in the workplace has the potential to liberate the workforce from monotonous tasks such as inputting data or scanning through documents for a single piece of information. I will address the bigger concerns he has around that, but in the public sector it would leave teachers with more time to teach, clinicians with more time to spend with patients and police officers with more time on the beat, rather than being behind a desk.

As my hon. Friend the Member for Folkestone and Hythe raised in a salient point, AI also has tremendous potential in defence and national security. That is absolutely critical. It was interesting that prominent figures in the world of technology, led by Elon Musk, recently wrote a letter asking for a six-month pause while we look at how we can properly moderate the impacts of AI. I am not sure that that is a good idea, because I am not sure China and Russia would play that game. It is important that we stay ahead of the curve, for exactly the reasons pointed out by my hon. Friend.

Damian Collins

The Minister is exactly right. That initiative also suggests that AI is not yet here but, actually, the issues we have discussed today exist already. We can look at them already; we do not need a six-month pause to do that.

Kevin Hollinrake

That is absolutely right. There is an opportunity but also a potential threat. It is important that we continue to invest, and it is great that the UK is ahead of the game in its investment, behind only the US and China, which are obviously much bigger economies.

The key thing is that we take action on skills, skilling up our workforce in the UK to take advantage of the potential of AI. Clearly, a good computing education is at the heart of that. We have overhauled the outdated information and communications technology curriculum and replaced it with computing, and invested £84 million in the National Centre for Computing Education to inspire the next generation of computer scientists. Our national skills fund helps to do just that, with free level 3 qualifications for adults and skills bootcamps in digital courses, including coding, AI and cyber-security, available across England.

On that point, as well as the opportunities in AI, we need to look at the new opportunities in the new economy. Some jobs will be displaced, so we need to ensure that we are skilling up our workforce for other opportunities in our new economy, be it data science or green jobs with the green jobs taskforce. Recently, in Hull, 3,000 new jobs were created in the wind turbine sector with a starting salary of £32,000, which illustrates the potential for green jobs in our economy. So although some jobs might be displaced, other, hopefully better-paid, jobs will replace them. We want a higher-wage, higher-skilled economy.

The Government are also supporting 16 centres for doctoral training, backed by an initial £100 million, delivering 1,000 PhDs. We expanded that programme with a further £117 million at the recent launch of the Government’s science and technology framework. Last year, we invested an additional £17 million in AI and data science postgraduate conversion courses and scholarships to increase the diversity of the tech workforce, on top of the £13 million that has been invested in the programme since 2019-20. We also invested £46 million to support the Turing AI fellowships to attract the best and brightest AI talent to work in the UK.

The point about protections for workers’ rights was raised by many Members in the debate, not least the hon. Members for Gordon (Richard Thomson) and for Birkenhead; the shadow Minister, the hon. Member for Ellesmere Port and Neston (Justin Madders); and my hon. Friends the Members for Folkestone and Hythe and for Watford. It is important to set out the Government’s position on workers’ rights here. We are bolstering workers’ rights: we are raising the national living wage, with the highest increase on record—a near 10% increase—and supporting six private Members’ Bills that increase workers’ rights, including on flexible working and other issues. There is also the Employment (Allocation of Tips) Bill, which is the favourite Bill of my hon. Friend the Member for Watford, who was its sponsor prior to becoming the Minister.

On the concerns many raised about workplace monitoring, we are committed to protecting workers. A number of laws are already in place that apply to the use of AI and data-driven technology in the workplace, including in decision making, which was raised by the hon. Member for Ellesmere Port and Neston. The Equality Act 2010 already requires employers and service providers not to discriminate against employees, job applicants and customers. That includes discrimination through actions taken as a result of an algorithm or a similar artificial intelligence mechanism. Tackling discrimination in AI is a major strand of the Equality and Human Rights Commission’s three-year strategy. Existing data protection legislation protects workers where personal data is involved, and that is one aspect of existing regulation on the development of AI systems and other technologies.

Reforms as part of the Data Protection and Digital Information Bill will cast article 22 of the UK GDPR as a right to specific safeguards, rather than as a general prohibition on solely automated decision making. These rights ensure that data subjects are informed about, and can seek human review of, significant decisions that are taken about them solely through automated means, which was a point raised by the shadow Minister. Employment law also offers protections. The Employment Rights Act 1996 provides that employees with two years of continuous service are protected from unfair dismissal, which would encompass circumstances where employees’ article 8 and UK GDPR rights have been breached in the algorithm decision-making process that led to the dismissal.

Of course, all good employers—by their very nature—should use human judgment. The best way we can help employers in any workplace is to have a strong jobs market where employers have to compete for employees. That is the kind of market we have delivered in this economy, despite some of the difficulties that surround it.

I once again thank the hon. Member for Birkenhead for tabling this timely and important debate. To be clear again, we have a strong ambition for the UK to become a science and technology superpower, and AI is a key part of that. However, the Government recognise the concerns around these technologies and appreciate that, as with all new technologies, trust has to be built. We will continue to build our understanding of how the employment rights framework operates in an era of increasing AI use. AI has the potential to make an incredibly positive contribution to creating a high-wage, high-skill and high-productivity economy. I very much look forward to seeing the further benefits as matters progress.