Artificial Intelligence and the Labour Market Debate

Department: Department for Business and Trade

Wednesday 26th April 2023

Westminster Hall

Dean Russell (Watford) (Con)

It is a privilege to speak in this debate, and I thank the hon. Member for Birkenhead (Mick Whitley) for securing it. I wanted to apply for it myself—he beat me to the chase, which is a wonderful thing.

Before I became an MP, one of my final clients was in the AI space. It dealt with artificial intelligence and psychology—I believe that my first entry in the Register of Members’ Financial Interests was my final bit of work for it—so I have seen this technology evolve over many years. We often talk about technology revolutions, but this has been an incredibly fast evolution.

We are seeing Moore’s law—the observation that computing power roughly doubles every couple of years—play out across society. The scale of what is happening right now is both awe-inspiring and terrifying at the same time. It will absolutely shape the job market and the type of jobs that come through over the next few years. It will shape how people interface with their co-workers, with technology, with society and with politicians. It will affect every aspect of our lives.

I am particularly concerned about the use of artificial intelligence for deception. I have long said—not necessarily in the Chamber, so I put it on the record now—that there should be in law something that I would call the Turing clause. It would mean that when technology is used to deceive somebody into believing that they are talking to a real person or engaging with a real business—whether watching a deepfake or anything else, for entertainment or any other purpose—it must be crystal clear to them that they are being deceived.

I will give some examples. I was recently speaking to somebody who works in the entertainment industry, running studios where they record sound, voiceovers and music. They said—I should declare that I do not know the scale of this issue and have not looked into the numbers—that a lot of the studios are often being used to record voiceovers for AI companies, so that the AI can learn how to speak like a real person. We all know about fraud and scams in which somebody gets phoned up by a call centre and told, “Your insurance is up,” or by someone pretending to be from the Government. We saw, awfully, during the covid crisis how those horrible people would try to scam people. Doing that requires a number of people in a space.

Now imagine that AI can pretend to be somebody we know—a family member, for instance—and imitate their voice. It could call up and say, “I need some money now, because I am in trouble,” or, “I need some support.” Or it could say, “This is somebody from the Government; your tax affairs are an issue—send your details now.” There are a whole load of things going on in society that we will not know about until it is too late. That is why a Turing clause is absolutely essential, so that we are ahead of the curve on deception, deepfakes and areas where technology will be used to fool us.

One incredibly important area in relation to the labour market that is not often talked about is the role of AI in creativity. DALL-E 2 is one such tool, and there are many others popping up now. They can create artwork and videos almost at the speed of thought—typing in a particular phrase will create amazingly beautiful pictures—but they are pooling that material from work that real artists and real musicians, with particular styles, have contributed. That is then presented as AI creativity. That could kill the graphic design industry. It could prevent people who are in the early stages of life as an artist, in both the visual and music worlds, from ever having an opportunity to be successful.

Just recently, Drake and the Weeknd—if I have those artists correct—had a song that was put online. I think that it even went on Spotify, but it was definitely on some streaming services. Everybody thought, “Gosh, this is a fantastic new collaboration.” It was not. It was AI pretending to be both of those artists with a brand new song. Artificial intelligence had created it. It was not until after the fact, and after the song had been streamed hundreds of thousands of times, that the big music companies said, “Hang on—that isn’t real. We need to stop this.” Then it was stopped.

In the case of social media, it took us many years to get to the fantastic Online Safety Bill. I was very fortunate to be on the Draft Online Safety Bill Joint Committee. Its Chair, my hon. Friend the Member for Folkestone and Hythe (Damian Collins), is in the room today, and he did a fabulous job. Getting to that point took 10 or 15 years. We do not have 10 or 15 months to legislate on AI. We probably do not have 10 or 15 weeks, given where we will be in a matter of days, with the new announcements and tools that are coming out.

Dr Lisa Cameron

I thank the hon. Gentleman for making those extremely important points. Just last week, we had the Children’s Parliament at the all-party parliamentary group on the metaverse and web 3.0. The children were excited about the opportunities of AI and the metaverse, and we were told on the day that the World Economic Forum predicts that technology will create 97 million new jobs by 2025 alone. But, like the hon. Gentleman, they were also very concerned about what is real and what is not, and about the mental health impact of spending much of the day in an altered reality setting. Does the hon. Gentleman agree that we need much more research into the mental health impact on staff and young people who are engaging with AI?

Dean Russell

I thank the hon. Member for her comments. Mental health is a passion of mine—I had a ten-minute rule Bill about ensuring that mental health first aiders are in the workplace—and I agree wholeheartedly. We saw that in evidence given to the Draft Online Safety Bill Joint Committee; Rio Ferdinand talked, including in his documentary, about the fact that what is said online can affect a person’s real life. The challenge with artificial intelligence is that it will not just be able to say those things; it will probably know precisely how to do the most harm, how to hit the right triggers to make people buy things and how to fool and deceive people to ensure they hand over money or their rights.

I will move on because I am conscious of time. I know we have quite a long time for this debate, but I do not intend to use it all, I promise. I think that the creativity part is absolutely essential. A few weeks ago, I predicted in Parliament that, in the next year or so, a No. 1 song will be created by artificial intelligence for the first time. I have no doubt that a No. 1 bestselling book will be written by artificial intelligence. I have no doubt that new songs in the voices of artists who are no longer around, such as Elvis Presley, will be released, and that actors who are sadly no longer alive will play starring roles in new films. We are seeing this already on a small scale, but it is going to become more and more pervasive.

It is not all negative; I do not want to be a doomsayer. There are great opportunities: Britain—this wonderful country—could be the home of identifying and delivering transparency within those industries. We could be the country that creates the technology and the platforms to identify where artificial intelligence is being used; they could flag up when things are not real. They could, for example, force organisations to say who they are, what they are doing and whether they have used artificial intelligence. I think that will create a whole new world of labour markets and industries that will stem from this country and create all the jobs that we talked about earlier.

I am also concerned that we do not often talk in the same breath about artificial intelligence and robotics. In the industrial world, such as in warehouses and so on, there has been a rise in the use of robotics to replace real people. Office jobs are changing due to artificial intelligence. The role of accountants, of back-office staff and of both blue-collar and white-collar workers will change.

As was stated earlier, the challenge with robotics arises in areas such as defence, where artificial intelligence is being used to advance far beyond the scale of what we have now. We really need to take that seriously. ChatGPT was probed: people tried to catch it out on different aspects of its responses. When asked how it would steal the nuclear codes, it outlined how it would do it. I am not trying to give any bad actors out there any ideas, but it explained how it would use AI to control drones, and how they would be able to go in and do certain things. Hopefully, it got it all wrong. However, if AI is in not just our computers and mobile phones, but in drones and new robots that are incredibly sophisticated, incredibly small and not always identifiable, we need to be really wary.

There are many positives, such as detection in the health sector and identifying things such as breast cancer. Recently, I have seen lots of work about how artificial intelligence could be layered on top of the human aspect and insight that was mentioned earlier, enabling the identification of things that we would not normally be able to see.

There is huge positive scope for using data. I have said previously that, if we were to donate our health data to live clinical trials in a way that was legitimate and pseudonymised, artificial intelligence could be used to identify a cure for cancer and for diseases that have affected our society for many centuries. In the same way that it has found new ways of playing chess, it might find new ways of changing and saving lives. There is great opportunity there.

Many years ago, I wrote an article called “Me, Myself and AI”. In it, I commented on areas where AI is dangerous, but I also mentioned opportunities for positives. I would like to make one final point on this: we must also make sure that the data that goes into the AI is tracked, not only for things such as royalties in creative industries but for bias. I wrote an article on that a while ago. If we take a sample, say within a health context, drawn from only one ethnicity or demographic, the AI will develop options and solutions for that group alone. If we do not have the right diversity in the data going into the analysis, we risk not being able to identify future issues. For example, sickle cell disease might get missed because the data that the AI is using is based only on clinical trials with white people.

There is a wide-ranging issue about what is being fed into AI systems and how we ensure that we identify where AI is being used, including in Government—hence my point about a Turing clause when it comes to deception. We need to look at the opportunities, too: whole new industries around how we monitor AI, apply it and use the science of it.

The letters “AI” are already there in the spelling of “Great Britain”. We have a great opportunity to be ahead of the curve, and we need to be, because the curve will be moving beyond us within a matter of weeks or months—and definitely within years.

--- Later in debate ---
Richard Thomson

I thank the hon. Member for that intervention. He has perhaps read ahead towards the conclusion of my speech, but it is an interesting dichotomy. Obviously, I know my biography best, but there are people out there, not in the AI world—Wikipedia editors, for example—who think that they know my biography better than I do in some respects.

However, to give the example, the biography generated by AI said that I had been a director at the Scottish Environmental Protection Agency, and, prior to that, I had been a senior manager at the National Trust for Scotland. I had also apparently served in the Royal Air Force. None of that is true, but, on one level, it does make me want to meet this other Richard Thomson who exists out there. He has clearly had a far more interesting life than I have had to date.

Although that level of misinformation is relatively benign, it does show the dangers that can be presented by the manipulation of the information space, and I think that the increasing use and application of AI raises some significant and challenging ethical questions.

Any computing system is based on the premise of input, process and output. Therefore, great confidence is needed when it comes to the quality of information that goes in—on which the outputs are based—as well as the algorithms used to extrapolate from that information to create the output, the purpose for which the output is then used, the impact it goes on to have, and, indeed, the level of human oversight at the end.

In March, Goldman Sachs published a report indicating that AI could replace up to 300 million full-time equivalent jobs and a quarter of all work tasks in the US and Europe. It found that some 46% of tasks in administrative roles and even 44% in the legal profession could be automated. GPT-4 recently managed to pass the US Bar exam, which is perhaps less a sign of machine intelligence than of the fact that the US Bar exam is not a fantastic test of it—although I am sure it is a fantastic test of lawyers in the States.

Our fear of disruptive technologies is age-old. Although such disruption has generally created new jobs and allowed new technologies to take on the more laborious and repetitive tasks, it is still extremely disruptive. Some 60% of workers are currently in occupations that did not exist in 1940, but there is still a real danger, as there has been with other technologies, that AI depresses wages and displaces people faster than new jobs can be created. That ought to be of real concern to us.

In terms of ethical considerations, there are large questions to be asked about the provenance of datasets and the output to which they can lead. As The Guardian reported recently:

“The…datasets used to train the latest generation of these AI systems, like those behind ChatGPT and Stable Diffusion, are likely to contain billions of images scraped from the internet, millions of pirated ebooks”

as well as all sorts of content created by others who receive no reward for its use; the entire proceedings of 16 years of the European Parliament; or even the entirety of the proceedings that have ever taken place, and been recorded and digitised, in this place. The datasets can be drawn from a range of sources, and they do not necessarily lead to balanced outputs.

ChatGPT has been banned from operating in Italy after the data protection regulator there expressed concerns that there was no legal basis to justify the collection and mass storage of the personal data needed to train its GPT models. Earlier this month, the Canadian privacy commissioner followed suit, opening an investigation into OpenAI in response to a complaint alleging that the collection, use and disclosure of personal information was happening without consent.

This technology brings huge ethical issues not just in the workplace but right across society; it is in the workplace, though, that particular questions need to be asked. For example, does it entrench existing inequalities? Does it create new ones? Does it treat people fairly? Does it respect the individual and their privacy? Is it used in a way that makes people more productive by helping them to be better at their jobs and work smarter, rather than simply forcing them—notionally, at least—to work harder? How can we be assured that, at the end of it, a sentient, qualified, empowered person has proper oversight of the use to which the AI processes are being put? Finally, how can it be regulated as it needs to be—beneficially, in the interests of all?

The hon. Member for Birkenhead spoke about and distributed the TUC document “Dignity at work and the AI revolution”, which, from the short amount of time I have had to scrutinise it, looks like an excellent publication. There is certainly nothing in its recommendations that anyone should not be able to endorse when the time comes.

I conclude on a general point: as processes get smarter, we need to make sure that, as a species, we do not get dumber as a consequence. Advances in artificial intelligence and information processing do not take away the need for people to be able to process, understand, analyse and critically evaluate information for themselves.

Dean Russell

This is one point—and a concern of mine—that I did not explore in my speech because I was conscious of its length. As has been pointed out, a speech has been given previously that was written by artificial intelligence, as has a question in Parliament. We politicians rely on academic research and on the Library. We also google and meet people to inform our discussions and debates. I will keep going on about my Turing clause—which connects to the hon. Gentleman’s point—because I am concerned that if we do not have something like that to highlight a deception, there is a risk that politicians will go into debates or votes that affect the government of this country having been deceived—potentially on purpose, by bad actors. That is a real risk, which is why there needs to be transparency. We need something crystal clear that says, “This is deceptive content” or “This has been produced or informed by AI”, to ensure the right and true decisions are being made based on actual fact. That would cover all the issues that have been raised today. Does the hon. Member share that view?

Richard Thomson

Yes, I agree that there is a very real danger of this technology being used for the purposes of misinformation and disinformation. Our democracy is already exceptionally vulnerable to that. Just as the hon. Member highlights the danger of individual legislators being targeted and manipulated—they need to have their guard up firmly against that—there is also the danger of people trying to manipulate behaviour by manipulating wider political discourse with information that is untrue or misleading. We need to do a much better job of ensuring we are equipping everybody in society with critical thinking skills and the ability to analyse information objectively and rationally.

Ultimately, whatever benefits AI can bring, it is our quality of life and the quality of our collective human capital that counts. AI can only and should only ever be a tool and a servant to that end.