Thursday 29th June 2023

Commons Chamber
Matt Warman

The hon. Member is absolutely right that, when done well, AI allows us to identify discrimination and seek to eliminate it, but when done badly, it cements it into the system in the worst possible way. That is partly why I say that transparency about the use of AI will be absolutely essential, even if we largely do not need new legislation. We need principles. When done right, in time this technology could end up costing us less money and delivering greater rewards, be that in the fields of discrimination or public services and everywhere in between.

There is a second-order point, which is that we need to understand where the loopholes that the technology creates are not covered by existing legislation. If we think back to the time we spent in this House debating upskirting, we did not do that because voyeurism was somehow legal; we did it because a loophole had been created by a new technology and a new set of circumstances, and it was right that we sought to close it. We urgently need to understand where those loopholes are now, thanks to artificial intelligence, and we need to understand more about where they will have the greatest effects.

In a similar vein, we need to understand, as I raised at Prime Minister’s questions a few weeks ago, which parts of the economy and regions of the country will be most affected, so that we can focus the immense Government skills programmes on those areas. This is not a predictable decline like the one we saw at the end of the coalmining industry, where we could draw obvious lines on obvious maps. We need to understand the economy and how this impacts on local areas. To take just one example, we know that call centres—those things that keep us waiting for hours on hold—are going to get a lot better thanks to artificial intelligence, but there are parts of the country that have seen a particular increase in local call centre employment. That will be a boon for the many people who phone them, but it is also a hump that those working in them will need to get over, and we need to focus skills investment in certain areas and certain communities.

I do believe that, long term, we should be profoundly optimistic that artificial intelligence will create more jobs than it destroys, just as in every previous industrial revolution, but there will be a hump, and the Government need to help as much as they can in working with businesses to provide such opportunities. We should be optimistic that the agency that allows people to be happier in their work—personal agency—will be enhanced by the use of artificial intelligence, because it will take away some of the less exciting aspects of many jobs, particularly at the lower-paid end of the economy, but not by any means solely. There is no shame in eliminating dull parts of jobs from the economy, and there is no nobility in protecting people from inevitable technological change. History tells us that if we do seek to protect people from that technological change, we will impoverish them in the process.

I want to point to the areas where the Government surely must understand that potentially new offences may need to be created, beyond the tactical risks I have described. We know that it is already illegal to hack the NHS, for instance. That is a tactical problem, even if it might look somewhat different now, so I want to take a novel example. We know that it is illegal to discriminate on the grounds of whether someone is pregnant or likely to become pregnant. Warehouses, many of them run by large businesses, gather a huge amount of data about their employees: temperature data, movement data and a great deal more, going far beyond anything we saw just a few years ago. From that data, companies can infer a huge amount, and they might easily infer whether someone is pregnant.

Given that such data is already being collected, should we now say that collecting it should be illegal because it opens up a potential risk? I do not think we should, and I do not think anyone would seriously say we should, but it does open the door to a level of discrimination. Should we keep the current position—that such discrimination is illegal, so companies can gather data but it is what they do with it that matters—or should we say that gathering it at all exposes people to risk and companies to legal risk, which may take us backwards rather than forwards? Unsurprisingly, I think there is a middle ground that is the right option.

Suddenly, however, a question as mundane as collecting data about temperature and movement, ostensibly for employee welfare and to meet existing commitments, turns into a political decision: what information is too much, and what analysis is too much? It brings us as politicians to questions that quickly revert to ethics. There is a risk of huge and potentially dangerous information asymmetry. Some people say that there should be a right to a human review and a right to know what cannot be done. All of these are ethical issues that have come about because of the advent of artificial intelligence in a way that they had not previously. I commend to all Members the brilliant paper by Oxford University’s Professor Adams-Prassl on a blueprint for regulating algorithmic management, and I commend it to the Government as well.

AI raises ethical considerations that we have to address in this place in order to come up with the principles-based regulation that we need, rather than playing an endless game of whack-a-mole with a system that will move far faster than the minds of legislators around the world. We cannot regulate every instance; we have to regulate horizontally. As I say, the key theme surely must be transparency. A number of Members of Parliament have confessed—if that is the right word—to using AI to write their speeches, and I hope that no more have done so than have already confessed. Transparency has been key in this place, and it should be key in financial services and everywhere else. For instance, AI-generated videos could already be required to use watermarking technology that would make it obvious that they are not the real deal. As we come up to an election, such use of existing technology will be important. We need to identify the gaps—the lacunae—both in legislation and in practice.

Artificial intelligence is here with us today and it will be here for a very long time, at the very least augmenting human intelligence. Our endless creativity is what makes us human, and what makes us to some extent immune from being displaced by technology, but we also need to bear in mind that, ultimately, it is by us that decisions will be made about how far AI can be used and what AI cannot be used for. People see a threat when they read some of the most hyperbolic headlines, but these are primarily not about new crimes; they are about using AI for old crimes, but doing them a heck of a lot better.

I end by saying that the real risk here is not the risk of things being done to us by people using AI. The real risk is that we do not seize every possible opportunity, because seizing every possible opportunity will allow us to fend off the worst of AI and to make the greatest progress. Every lawyer and every teacher should be encouraged to use this technology to the maximum safe extent, not to hope that it simply goes away. If every student knows that their teachers are not using it, far more fake essays will be submitted via ChatGPT. We know that judges have already seen lawyers constructing cases using AI in which many of the references were simply fictional, and the same is true of school essays.

The greatest risk to progress in our public services comes from not using AI: it comes not from malevolent people, but from our thinking that we should not embrace this technology. We should ask not what AI can do to us; we should ask what we can do with AI, and how Government and business can get the skills they need to do that best. There is a risk that we continue to lock in the 95% of AI compute that sits with just seven companies, or that we promote monopolies or the discrimination that the hon. Member for Brent Central (Dawn Butler) mentioned. This is an opportunity to avert that, not reinforce it, and to cement not prejudice but diversity. It means that we have an opportunity to use game-changing technology for the maximum benefit of society, and the maximum number of people in that society. We need to enrich the dialogue between Government, the private sector and the third sector, to get the most out of that.

This is a matter for regulation, and for global regulation, as is so much of the modern regulatory landscape. There will be regional variations, but there should also be global norms and principles. Outside the European Union and United States, Britain has that unique position I described, and the Prime Minister’s summit this autumn will be a key opportunity—I hope all our invites are in the post, or at least in an email. I hope that it will be an opportunity not just for the Prime Minister to show genuine global leadership, but to involve academia, parliamentarians and broader society in the conversation, and to allow the Government to seize the opportunity and regain some trust on this technology.

I urge the Minister to crack on, seize the day, and take the view that artificial intelligence will be with us for as long as we are around. It will make a huge difference to our world. Done right, it will make everything better; done badly, we will be far poorer for it.

Mr Deputy Speaker (Sir Roger Gale)

I call the Chair of the AI Committee, Darren Jones.

--- Later in debate ---
Jo Gideon

The hon. Lady makes a good point. Clearly, that is the big part of this debate: transparency is essential. The Government’s current plans, set out in the AI White Paper, do not place any new obligations on public bodies to be transparent about their use of AI; to make sure that their AI tools meet accuracy and non-discrimination standards, as she rightly said; or to ensure that proper mechanisms are in place for challenge or redress when AI decisions go wrong. The White Paper proposes a “test and learn” approach to regulation, but we must also be proactive. Technology is changing rapidly, while policy lags behind, and once AI is beyond our control, implementing safeguards becomes implausible. We should acknowledge that we cannot afford to wait to see how its use might cause harm and undermine trust in our institutions.

While still encouraging sensible innovation, we should also learn from international experience. We must encourage transparency and put in place the proper protections to avoid damage. Let us consider the financial sector, where banks traditionally analyse credit ratings and histories when deciding who to lend money to. I have recently been working with groups such as Burnley Savings and Loans, which manually underwrites all loans, assessing the risk of each by studying the business models and repayment plans of its customers. Would it be right to use AI to make such decisions? If we enter a world where there is no scope for gut feeling, human empathy and intuition, do we risk impoverishing our society? We need to be careful and consider how we want to use AI, being ethical and thoughtful, and remaining in control, rather than rolling it out wherever possible. We must strike the right balance.

Research indicates that AI and automation are most useful when complemented by human roles. The media can be negative about AI’s impact, leading to a general fear that people will lose their jobs as a result of its growth. However, historically, new technology has also led to new careers that were not initially apparent. It has been suggested that the impact of AI on the workplace could rival that of the industrial revolution. So the Government must equip the workforce of the future through skills forecasting and promoting education in STEM—science, technology, engineering and maths.

Furthermore, we must remain competitive in AI on the global stage, ensuring agility and adaptability, in order to give future generations the best chances. In conjunction with the all-party group on youth affairs, the YMCA has conducted polling on how young people feel about the future and the potential impact of AI on their careers; the results will be announced next month. The polling found that AI could not only lead to a large amount of job displacement, but provide opportunities for those from non-traditional backgrounds. More information on skills and demand will help young people to identify their career choices and support industries and businesses in preparing for the impact of AI.

I am pleased that the Department for Education has already launched a consultation on AI in education, which is open until the end of August. Following that, we should work hard to ensure that schools and universities can quickly adapt to AI’s challenges. Cross-departmental discussion, bringing together AI experts and educators, is important, to ensure that the UK is at the cutting edge of developments in AI and that younger generations get the advice they need to adapt.

AI is hugely powerful and possesses immense potential. ChatGPT has recently caught everybody’s attention, and it can create good stories and news articles, like the one I shared. But that technology has been used for years and, right now, we are not keeping up. We need to be quicker at adapting to change, monitoring closely and being alert to potential dangers, and stepping in when and where necessary, to ensure the safe and ethical development of AI for the future of our society and the welfare of future generations.

Mr Deputy Speaker (Sir Roger Gale)

Recalling a conversation that we had earlier in the day, I am tempted to call Robin Millar in the style of Winston Churchill.

--- Later in debate ---
John Nicolson (Ochil and South Perthshire) (SNP)

I will keep my speech short and snappy, and not repeat anything that any other Member has said—I know that is unfashionable in this place. I begin by congratulating the hon. Member for Boston and Skegness (Matt Warman) on introducing the debate. He was one of the very best Ministers I have ever come across in my role on the Front Bench, and I am sorry to see him on the Back Benches; he is well due promotion, I would say. I am sure that has just damned his prospects for all eternity.

As my party’s culture spokesperson, I am very keenly aware of the arts community’s concerns about AI and its risks to the arts. I have now been twice—like you, Mr Deputy Speaker, I am sure—to “ABBA Voyage”, once in my role on the Culture, Media and Sport Committee and once as a guest of the wonderful Svana, its producer. As I am sure you know, Mr Deputy Speaker, the show uses AI and motion capture technology combined with a set of massive, ultra-high-quality screens to create an utterly magnificent gig. It felt like the entire audience was getting to see ABBA in their prime; indeed, it was perhaps even better than it would have been originally, because we now have ultra-modern sound quality, dazzling light shows and a vast arena in which to enjoy the show. It was history, airbrushed to perfection and made contemporary. It seems to be a success, having sold over 1 million tickets so far and with talk of its touring the world. In fact, it was so good that towards the end, some of the audience started waving at Agnetha and Björn. They had become completely convinced that they were not in fact AI, but real people. There were tears as people looked at Agnetha, which says something about the power of technology to persuade us, does it not?

Soon, I will be going to see Nile Rodgers—that really is a very good gig, as I do not need to tell the other Front Benchers present. Again, I am going to be his guest. He is a legendary guitarist, songwriter and singer; he gave evidence to our Select Committee; and he has sold 500 million albums worldwide. Nile will be incredible—he always is—but he will also be 70 years of age. It will not be a 1970s early funk gig. The audience will include the mature, people in the prime of middle youth such as myself, and also the Glastonbury generation. It is easy to envisage an AI Nile Rodgers, produced by a record company and perhaps touring in competition with the very real Nile Rodgers, competing for ticket sales with the great man himself. Indeed, it is easy to envisage the young recording artists of today signing away their rights to their likenesses and vocals in perpetuity, with long-term consequences.

Many in the arts sphere feel safe from AI, as they suspect that human creativity at the artistic level cannot be replicated. I very much hope that they are right, but once that human creativity has been captured, it can be reproduced eternally, perhaps with higher production values. It is not, I feel, the sole responsibility of artists, musicians and playwrights to concern themselves with radical developments in AI. They have work to do as it is, and surely the job of protecting them is ours. We need to get on top of the copyright issues, and we need to protect future performers from having their rights sold away along with their very first contracts. We as parliamentarians must think deeply, listen and research widely. I have heard some heartening—sometimes lengthy—speeches that show there is, cross party, an awareness and a willingness to grasp this, and that is deeply encouraging.

However, the UK Government have much to work on in their White Paper; when they look at it and listen to the submissions, they must make improvements. As it stands, it allows public institutions and private companies to use new, experimental AI on us and then try to correct the flaws afterwards. It uses us, our communities and our industries as guinea pigs to try out untested code to see whether it makes matters better or worse. The risks are many for the arts community, which is deeply concerned about fakery, and there is an argument that the AI White Paper empowers such digital fakery.

In closing, it is absolutely key that we listen to experts in this field, as we should always do to inform our decision making, but in particular to those in the arts and music industry because they will be so deeply affected.

Mr Deputy Speaker (Sir Roger Gale)

I call the shadow Minister.