Thursday 29th June 2023

Commons Chamber

Mr Deputy Speaker (Mr Nigel Evans)

I remind everybody that following the end of the debate that is about to begin, we will have a statement on the migration and economic development partnership. Anybody wishing to ask a question on that statement should start to make their way to the Chamber as soon as the wind-ups in the artificial intelligence debate begin.

13:54
Matt Warman (Boston and Skegness) (Con)

I beg to move,

That this House has considered artificial intelligence.

Is it not extraordinary that we have not previously had a general debate on what is the issue of our age? Artificial intelligence is already with us today, but its future impact has yet to truly be felt, or indeed understood.

My aim in requesting this debate—I am very grateful to the Backbench Business Committee for awarding it—is twofold. First, it is to allow Members to express some views on an issue that has moved a long way since I was, in part, the Minister for it, and even since the Government’s White Paper, which came out only very recently. Secondly, it is to provide people with an opportunity to express their views on a technology that has to be regulated in the public interest, but that also has to be seized by Government to deliver the huge improvements in public services that we all know it is capable of. I hope that the industry will hear the views of parliamentarians, and—dare I say it?—perhaps better understand where the gaps in parliamentarians’ knowledge might be, although of course those gaps will be microscopic.

I will begin with a brief, avowedly superficial summary of where artificial intelligence stands. At its best, AI is already allowing the NHS to analyse images better than ever before, augmenting the expertise of our brilliant and expanding workforce with technology that is analogous to adaptive cruise control—it helps; it does not replace. It is not a technology to be scared of, and patients will welcome that tool being put at the disposal of staff.

We are already seeing AI being used to inform HR decisions such as hiring and firing—an area that is much more complex and much more in need of some kind of regulation. We see pupils using it to research—and sometimes write—their essays, and we sometimes see schools using AI to detect plagiarism. Every time I drive up to my constituency of Boston and Skegness, I listen to Politico’s “Playbook”, voiced by Amazon’s Polly AI system. It is everywhere; it is in the car too, helping me to drive it. AI is creating jobs in prompt engineering that did not exist just a few years ago, and while it is used to generate horrific child sexual abuse images, it is also used to detect them.

I want to take one example of AI going rogue that a senior American colonel talked about. It was claimed that a drone was awarded points for destroying a certain set of targets. It consulted its human controller on whether it should take a certain course of action, and was told that it should not. Because it got points for those targets, it decided that the logical thing to do was to kill its human controller, and when it was told that it should not do so, it tried to target the control tower that was communicating with its controller. That is the stuff of nightmares, except for the fact that that colonel was later declared to have misspoken. No such experiment ever took place, but just seconds ago, some people in this House might have believed that it did. AI is already damaging public trust in technology. It is damaging public trust in leadership and in democracy; that has already happened, and we must guard against it happening further. Both here and in America, elections are coming up soon.

Even in the most human sector, the creative industries, one radio presenter was recently reported to have uploaded her previous shows so that the artificial intelligence version of her could cover for her during the holidays. How are new staff to get their first break, if not on holiday cover? Millions of jobs in every sector are at stake. We also hear of analysts uploading the war games of Vladimir Putin to predict, with remarkable accuracy, how he will fight in Ukraine. We hear of AI being used by those interested in antibiotics and by those interested in bioweapons. There are long-term challenges here, but there are very short-term ones too.

The Government’s White Paper promotes both innovation and regulation. It does so in the context of Britain being the most advanced nation outside America and China for AI research, development and, potentially, regulation. We can and should cement that success; we are helped by DeepMind, and by OpenAI’s decision only yesterday to open its first office outside the US in London. The Prime Minister’s proposed autumn summit should allow us to build a silicon bridge to the most important technology of this century, and I welcome it hugely.

I want to lay out some things that I hope could be considered at the summit and with this technology. First, the Government clearly need to understand where AI will augment existing possibilities and challenges, and most of those challenges will already be covered by legislation. Employment, for instance, is already regulated, and whether or not companies use AI to augment their HR systems, it is already illegal to discriminate. We need to make sure that those existing laws continue to be enforced, and that we do not waste time reinventing the wheel. We do not have that time, because the technology is already with us. Transparency will be key.

Dawn Butler (Brent Central) (Lab)

The hon. Member is making an important speech. Is he aware of the AI system that, in identifying potential company chief executive officers, would identify only male CEOs because of the data that had been input? Even though there is existing legislation, we have to be mindful of the data that is going into new technology and AI systems.

Matt Warman

The hon. Member is absolutely right that, when done well, AI allows us to identify discrimination and seek to eliminate it, but when done badly, it cements it into the system in the worst possible way. That is partly why I say that transparency about the use of AI will be absolutely essential, even if we largely do not need new legislation. We need principles. When done right, in time this technology could end up costing us less money and delivering greater rewards, be that in the field of discrimination, in public services or anywhere in between.

There is a second-order point, which is that we need to understand where loopholes that the technology creates are not covered by existing legislation. If we think back to the time we spent in this House debating upskirting, we did not do that because voyeurism was somehow legal; we did it because a loophole had been created by a new technology and a new set of circumstances, and it was right that we sought to close it. We urgently need to understand where those loopholes are now, thanks to artificial intelligence, and we need to understand more about where they will have the greatest effects.

In a similar vein, we need to understand, as I raised at Prime Minister’s questions a few weeks ago, which parts of the economy and regions of the country will be most affected, so that we can focus the immense Government skills programmes on those areas. This is not a predictable change, like the end of the coalmining industry; we are not able to draw obvious lines on obvious maps. We need to understand the economy and how this impacts on local areas. To take just one example, we know that call centres—those things that keep us waiting for hours on hold—are going to get a lot better thanks to artificial intelligence, but there are parts of the country with a particularly high concentration of call centre employment. This will be a boon for the many people working in them, but it is also a hump that we need to get over, and we need to focus skills investment in certain areas and certain communities.

I do believe that, long term, we should be profoundly optimistic that artificial intelligence will create more jobs than it destroys, just as in every previous industrial revolution, but there will be a hump, and the Government need to help as much as they can in working with businesses to provide such opportunities. We should be optimistic that the agency that allows people to be happier in their work—personal agency—will be enhanced by the use of artificial intelligence, because it will take away some of the less exciting aspects of many jobs, particularly at the lower-paid end of the economy, but not by any means solely. There is no shame in eliminating dull parts of jobs from the economy, and there is no nobility in protecting people from inevitable technological change. History tells us that if we do seek to protect people from that technological change, we will impoverish them in the process.

I want to point to the areas where the Government surely must understand that potentially new offences are to be created, beyond the tactical risks I have described. We know that it is already illegal to hack the NHS, for instance. That is a tactical problem, even if it might be somewhat different, so I want to take a novel example. We know that it is illegal to discriminate on the grounds of whether someone is pregnant or likely to become pregnant. Warehouses, many of them run by large businesses, gather a huge amount of data about their employees. They gather temperature data and movement data, and they monitor a huge amount—data that goes far beyond anything we saw just a few years ago. From that data, companies can infer a huge amount, and they might easily infer whether someone is pregnant.

Given that we already collect such data, should we now say that it is illegal to collect it because it opens up a potential risk? I do not think we should, and I do not think anyone would seriously say we should, but it is open to a level of discrimination. Should we say only that such discrimination is illegal, which is the situation now—companies can gather data, but it is what they do with it that matters—or should we say that gathering it at all exposes people to risk and companies to legal risk, which may take us backwards rather than forwards? Unsurprisingly, I think there is a middle ground that is the right option.

Suddenly, however, a question as mundane as collecting data about temperature and movements, ostensibly for employee welfare and to meet existing commitments, turns into a political decision: what information is too much, and what analysis is too much? It brings us as politicians to questions that suddenly and much more quickly revert to ethics. There is a risk of huge and potentially dangerous information asymmetry. Some people say that there should be a right to a human review, and a right to know what cannot be done. All these are ethical issues that come about because of the advent of artificial intelligence in a way that they have not previously. I commend to all Members the brilliant paper by Oxford University’s Professor Adams-Prassl on a blueprint for regulating algorithmic management, and I commend it to the Government as well.

AI raises ethical considerations that we have to address in this place in order to come up with the principles-based regulation that we need, rather than trying to play an endless game of whack-a-mole with a system that is going to go far faster than the minds of legislators around the world. We cannot regulate in every instance; we have to regulate horizontally. As I say, the key theme surely must be transparency. A number of Members of Parliament have confessed—if that is the right word—to using AI to write their speeches, but I hope that no more have done so than have already confessed. Transparency has been key in this place, and it should be key in financial services and everywhere else. For instance, AI-generated videos could already be forced to use watermarking technology that would make it obvious that they are not the real deal. As we come up to an election, I think that such use of existing technology will be important. We need to identify the gaps—the lacunae—both in legislation and in practice.

Artificial intelligence is here with us today and it will be here for a very long time, at the very least augmenting human intelligence. Our endless creativity is what makes us human, and what makes us to some extent immune from being displaced by technology, but we also need to bear in mind that, ultimately, it is by us that decisions will be made about how far AI can be used and what AI cannot be used for. People see a threat when they read some of the most hyperbolic headlines, but these are primarily not about new crimes; they are about using AI for old crimes, but doing them a heck of a lot better.

I end by saying that the real risk here is not the risk of things being done to us by people using AI. The real risk is if we do not seize every possible opportunity, because seizing every possible opportunity will allow us to fend off the worst of AI and to make the greatest progress. If every student knows that teachers are not using it, far more fake essays will be submitted via ChatGPT. Every lawyer and every teacher should be encouraged to use this technology to the maximum safe extent, not to hope that it simply goes away. We know that judges have already seen lawyers constructing cases using AI and that many of the references in those cases were simply fictional, and the same is true of school essays.

The greatest risk to progress in our public services comes from not using AI: it comes not from malevolent people, but from our thinking that we should not embrace this technology. We should ask not what AI can do to us; we should ask what we can do with AI, and how Government and business can get the skills they need to do that best. There is a risk that we continue to lock in the 95% of AI compute that sits with just seven companies, or that we promote monopolies or the discrimination that the hon. Member for Brent Central (Dawn Butler) mentioned. This is an opportunity to avert that, not reinforce it, and to cement not prejudice but diversity. It means that we have an opportunity to use game-changing technology for the maximum benefit of society, and the maximum number of people in that society. We need to enrich the dialogue between Government, the private sector and the third sector, to get the most out of that.

This is a matter for regulation, and for global regulation, as is so much of the modern regulatory landscape. There will be regional variations, but there should also be global norms and principles. Outside the European Union and United States, Britain has that unique position I described, and the Prime Minister’s summit this autumn will be a key opportunity—I hope all our invites are in the post, or at least in an email. I hope it will be an opportunity not just for the Prime Minister to show genuine global leadership, but to involve academia, parliamentarians and broader society in that conversation, allowing the Government to seize the opportunity and regain some trust in this technology.

I urge the Minister to crack on, seize the day, and take the view that artificial intelligence will be with us for as long as we are around. It will make a huge difference to our world. Done right, it will make everything better; done badly, we will be far poorer for it.

Mr Deputy Speaker (Sir Roger Gale)

I call the Chair of the AI Committee, Darren Jones.

14:11
Darren Jones (Bristol North West) (Lab)

Thank you, Mr Deputy Speaker. I am Chair of the Business and Trade Committee, but if there is an AI Committee I am certainly interested in serving on it. I declare my interest, as set out in the Register of Members’ Financial Interests, and I thank the hon. Member for Boston and Skegness (Matt Warman) and the Backbench Business Committee for organising and agreeing to this important debate.

I will make the case for the Government to be more involved in the technology revolution, and explain what will happen if we leave it purely to the market. It is a case for a technology revolution that works in the interests of the British people, not against them. In my debate on artificial intelligence a few weeks ago, I painted a picture of the type of country Britain can become if we shape the technology revolution in our interests. It is a country where workers are better paid, have better work and more time off. It is a country where public servants have more time to serve the public, with better access to, and outcomes from, our public services, at reduced cost to the taxpayer. It is a country where the technological revolution is seen as an exciting opportunity for workers and businesses alike—an opportunity to learn new things, improve the quality of our work, and create an economy that is successful, sustainable and strong.

I also warned the House about the risks of the technology revolution if we merely allow ourselves to be shaped by it. That is a country where technology is put upon people, instead of being developed with them, and where productivity gains result in economic growth and higher profits but leave workers behind with reduced hours or no job at all. It is a country where our public services remain in the analogue age and continue to fail, with increased provision from the private sector only for those who can afford it. It is a world in which the pace of innovation races ahead of society, creatively destroying the livelihoods of many millions of people, and in which other countries leap ahead of our own as we struggle to seize the economic opportunities of the technology revolution for our own economy and, through exports, to support others.

The good news is that we are only really at the start of that journey, and we can shape the technology revolution in our interests if we choose to do so. But that means acting now. It means remembering, for all our discussions about artificial intelligence and computers, that we serve the people. It means being honest about the big questions that we do not yet have answers to. It is on some of those big questions that I will focus my remarks. That is not because I have fully formed answers to all of them at this stage, but because I think it important to put those big questions on the public record in this Parliament.

The big questions that I wish to address are these: how do we maintain a thriving, innovative economy for the technology sector; how can we avoid the risk of a new age of inequality; how can we guarantee the availability of work for people across the country; and how can we balance the power that workers have, and their access to training and skills? Fundamental to all those issues is the role and capacity of the state to support people in the transition.

We will all agree that creating a thriving, innovative economy is a good idea, and we all want Britain to be the go-to destination for investment, research and innovation. We all want the British people, wherever they are from and from whatever background, to know that if they have an idea, they can turn it into a successful business and benefit from it. As the hon. Member for Boston and Skegness alluded to, that means getting the balance right between regulation and economic opportunity, and creating the services that will support people in that journey. Ultimately, it means protecting the United Kingdom’s status as a great place to invest, start, and scale up technology businesses.

Although we are in a relatively strong position today, we risk falling behind quickly if we do not pay attention. In that context, the risk of a new age of inequality is perhaps obvious. If the technology revolution is an extractive process, in which big tech takes over the work currently done by humans and restricts the access to markets that new companies need, power and wealth will be taken from workers and concentrated in the hands of the already powerful, wealthy and largely American big-tech companies. I say that not because I am anti-American or indeed anti-big tech, but because it is our job to have Britain’s interests at the front of our minds.

Will big tech pick up the tab for universal credit payments to workers who have been made redundant? Will it pay for our public services in a situation where fewer people are in work paying less tax? Of course not. So we must shape this process in the interests of the British people. That means creating inclusive economic opportunities so that everybody can benefit. For example, where technology improves productivity and profits, workers should benefit from that with better pay and fewer days at work. Where workers come up with innovative ideas on how to use artificial intelligence in their workplace, they should be supported to protect their intellectual property and start their own business.

The availability of work is a more difficult question, and it underpins the risk of a new age of inequality. For many workers, artificial intelligence will replace the mundane and the routine. It can result in human workers being left with more interesting and meaningful work to do themselves. But if the productivity gains are so significant, there is conceivably a world in which we need fewer human workers than we have today. That could result in a four-day week, or even fewer days than that, with work still being available for the majority of people. The technology revolution will clearly create new jobs—a comfort provided to us by the history of previous industrial revolutions. However, that raises two questions, which relate to my next point about the power of workers and their access to training and skills.

There are too many examples today of technology being put upon workers, not developed with them. That creates a workplace culture that is worried about surveillance and oppression, and about the risk of being performance-managed or even fired by an algorithm. That must change, not just because it is the right thing to do but because, I believe, it is in the interests of business managers and owners for workers to want to use these new technologies, as opposed to feeling oppressed by them. On training, if someone who is a worker today wants to get ahead of this revolution, where do they turn? Unless they work in a particularly good business, the likelihood is that they have no idea where to go for such training or skills support. Most people cannot just give up their job or go part time to complete a higher education course, so how do we provide access to free, relevant training that workers are entitled to take part in at work? How does the state partner with business to co-create and deliver that in the interests of our country and the economy? The role of the Government in this debate is not about legislation and regulation; it is about the services we provide, the welfare state and the social contract.

That takes me to my next point: the role and capacity of the Government to help people with the technology transition. Do we really think that our public services today are geared towards helping people benefit from what will take place? Do we really believe our welfare system is fit for purpose in helping people who find themselves out of work? Artificial intelligence will not just change the work of low-paid workers, who might just be able to get by on universal credit; it will also affect workers on middle and even higher incomes, including journalists, lawyers, creative sector workers, retail staff, public sector managers and many more. Those workers will have mortgages or rents to pay, and universal credit payments will go nowhere near covering their bills. If a significant number of people in our country find themselves out of work, what will they do? How will the Government respond? The system as it is designed today is not fit for that future.

I raise those questions not because I have easy answers to them, but because those outcomes are likely. The severity of the problem will be dictated by what action we take now to mitigate those risks. In my view, the state and the Government must be prepared, and must get themselves into a position to help people with the technology transition. There seems now to be political consensus about the opportunities of the technology revolution, and I welcome that, but the important unanswered question is: how? We cannot stop this technology revolution from happening. As I have said, we either shape it in our interests or face being shaped by it. We can sit by and watch the market develop, adapt and innovate, taking power and wealth away from workers and creating many of the problems I have set out today, leaving the Government and our public services to pick up the pieces, probably without sufficient resources to do so. Alternatively, we can decide today how this technology revolution will roll out across our country.

I was asked the other day whether I was worried that this technology-enabled future would create a world of despair for my children. My answer was that I am actually more worried about the effects of climate change. I say that because we knew about the causes and consequences of climate change in the 1970s, but we did nothing about it. We allowed companies to extract wealth and power and leave behind the damage for the public to pick up. We are now way behind where we need to be, and we are actively failing to turn it around, but with this technology revolution, we have an opportunity in front of us to show the public that a different, more hopeful future is possible for our country—a country filled with opportunity for better work, better pay and better public services. Let us not make the same mistakes as our predecessors in the 1970s, and let us not be trapped in the current debate of doom and despair for our country, even though there are many reasons to feel like that.

Let us seize this opportunity for modernisation and reform, remembering that it is about people and our country. We can put the technology revolution at the heart of our political agenda and our vision for a modern Britain with a strong, successful and sustainable economy. We can have a technology revolution that works in the interests of the British people and a Britain that is upgraded so that it works once again. However, to shape the technology revolution in our interests, that work must start now.

14:23
Greg Clark (Tunbridge Wells) (Con)

It is a pleasure to speak in this debate, and I congratulate my hon. Friend the Member for Boston and Skegness (Matt Warman) on securing it and on his excellent speech and introduction. It is a pleasure to follow my fellow Committee Chair, the hon. Member for Bristol North West (Darren Jones). Between the Business and Trade Committee and the Science, Innovation and Technology Committee, we have a strong mutual interest in this debate, and I know all of our members take our responsibilities seriously.

This is one of the most extraordinary times for innovation and technology that this House has ever witnessed. If we had not been talking so much about Brexit, then covid and, more recently, Russia and Ukraine, our national conversation—this goes to the point made by my hon. Friend the Member for Boston and Skegness—and our debates in this Chamber would have been far more about the technological revolution that is affecting all parts of the world and our national life.

It is true to say that, perhaps alongside the prominence engendered by the discovery of vaccines against covid, AI has broken through into public consciousness as a change in the development of technology. It has got people talking about it, and not before time. I say that because, as both Members who have spoken have said, it is not a new technology, in so far as it is a technology at all. In fact, in a laconic question to one of the witnesses in front of our Committee, one member asked, “Was artificial intelligence not just maths and computers?” One of the witnesses said that in his view it was applied statistics. This has been going on for some time.

My Committee, the Science, Innovation and Technology Committee—I am delighted to see my colleague the hon. Member for Brent Central (Dawn Butler) here—is undertaking a fascinating and, we hope, impactful inquiry into the future governance of AI. We are taking the time to understand the full range of issues, which do not have easy or glib answers—where such answers are offered, they are best avoided—and we want to help inform this House and the Government as to the best resolutions to some of the questions in front of us. We intend to publish a report in the autumn, but given the pace of debate on these issues and, as I am sure the hon. Lady will agree, the depth of the evidence we have heard so far, we hope to publish an interim report sooner than that. It would be wrong for me as Chair of the Committee to pre-empt the conclusions of our work, but we have taken a substantial amount of evidence in public, both oral and written, so I will draw on what we have found so far.

Having said that AI is not new—it draws on long-standing research and practice—it is nevertheless true to say that we are encountering an acceleration in its application and depth of progress. To some extent, the degree of public interest in it, without resolution to some of the policy questions that the hon. Member for Bristol North West alluded to, carries some risks. In fact, the nomenclature “artificial intelligence” is in some ways unhelpful. The word “artificial” is usually used in a pejorative, even disdainful way. When combined with the word “intelligence”, which is one of the most prized human attributes, the “artificial” rather negates the positivity of the “intelligence”, leading to thoughts of dystopia, rather than the more optimistic side of the argument to which my hon. Friend the Member for Boston and Skegness referred. Nevertheless, it is a subject matter with which we need to grapple.

In terms of the pervasiveness of AI, much of it is already familiar to us, whether it is navigation by sat-nav or suggestions of what we might buy from Amazon or Tesco. The analysis of data on our behaviour and the world is embedded, but it must be said that the launch of ChatGPT to the public just before Christmas has catapulted to mass attention the power already available in large language models. That is a breakthrough moment for millions of people around the world.

As my hon. Friend said, much of the current experience of AI is not only benign, but positively beneficial. The evidence that our Committee has taken has looked at particular applications and sectors. If we look at healthcare, for example, we took evidence from a medical company that has developed a means of recognising potential prostate cancer from MRI scans long before any symptoms present themselves, and with more accuracy than previous procedures. We heard from the chief executive of a company that is using AI to accelerate drug discovery. It is designing drugs from data, and selecting the patients who stand to benefit from them. That means that uses could be found, among more accurately specified patient groups, for drugs that have failed clinical trials on the grounds not of safety but of efficacy. That offers an early prospect of better health outcomes.

We heard evidence that the positive effects of AI on education are significant. Every pupil is different; we know that. Every good teacher tailors their teaching to the responses and aptitudes of each student, but that can be done so much better if the tailoring is augmented through the use of technology. As Professor Rose Luckin of University College London told us,

“students who might have been falling through the net can be helped to be brought back into the pack”

with the help of personalised AI. In the field of security, if intelligence assessments of a known attacker are paired with AI-rich facial recognition technology, suspects may be pinpointed and apprehended before they have the chance to execute a deadly attack.

There are many more advantages of AI, but we must not only observe but act on the risks that arise from the deployment of AI. Some have talked about the catastrophic potential of AI. Much of what is suggested, as in the case of the example given by my hon. Friend the Member for Boston and Skegness, is speculative, the work of fiction, and certainly in advance of any known pathway. It is important to keep a cool head on these matters. There has been talk in recent weeks of the possibility of AI killing many humans in the next couple of years. We should judge our words carefully. There are important threats, but portents of disaster must be met with thinking from cool, analytical heads, and concrete proposals for steps to take.

I very much applaud the seriousness with which the Government are approaching the subject of the governance of AI. For example, a very sensible starting point is making use of the deep knowledge of applications among our sector regulators, many of which enjoy great respect. I have mentioned medicine; take the Medicines and Healthcare products Regulatory Agency. If AI is to be used in drug discovery or diagnostics, it makes sense to draw on the MHRA’s years of deep experience in supervising clinical trials and the drug discovery process, for which it is renowned worldwide.

It is also right to require regulators to come together to develop a joint understanding of the issues, and to ask them to work collectively on regulatory approaches, so that we avoid inconsistency and inadvertently applying different doctrines in different sectors. It is right that regulators should talk to each other, and that there should be coherence. Given the commonalities, there should be a substantial, well-funded, central capacity to develop regulatory competence across AI, as the Government White Paper proposed.

I welcome the Prime Minister’s initiative, which the hon. Member for Bristol North West mentioned. In Washington, the Prime Minister agreed to convene a global summit on AI safety in the UK in the autumn. Like other technologies, AI certainly does not respect national boundaries. Our country has an outstanding reputation on AI, the research and development around it, and—at our best—regulatory policy and regulation, so it is absolutely right that we should lead the summit. I commend the Prime Minister for his initiative in securing that very important summit.

The security dimension will be of particular importance. Like-minded countries, including the US and Japan, have a strong interest in developing standards together. That reflects the fact that we see the world through similar eyes, and that the security of one of us is of prime importance to the others. The hon. Member for Bristol North West, in his debate a few weeks ago, made a strong point about international collaboration.

One reason why a cool-headed approach needs to be taken is that the subject is susceptible to the involvement of hot heads. We must recognise that heading off the risks is not straightforward; it requires deep reflection and consideration. Knee-jerk regulatory responses may prove unworkable, will not be widely taken up by other countries, and may therefore be injurious to the protections that policy innovation aims to deliver. I completely agree with the hon. Gentleman that there is time for regulation, but not much time. We cannot hang around, but we need to take the appropriate time to get this right. My Committee will do what it can to assist on that.

If the Government reflect on these matters over the summer, their response should address a number of challenges that have arisen in this debate, and from the evidence that my Committee took. Solutions must draw on expertise from different sectors and professions, and indeed from people with expertise in the House, such as those contributing to this debate. Let me suggest briefly a number of challenges that a response on AI governance should address. One that has emerged is a challenge on bias and discrimination. The hon. Member for Brent Central has been clear and persistent in asking questions to ensure that the datasets on which algorithms are trained do not embed a degree of bias, leading to results that we would not otherwise tolerate. I dare say she will refer to those issues in her speech. For example, as has been mentioned, in certain recruitment settings, if data reflects the gender or ethnic background of previous staff, the profile of an “ideal” candidate may owe a great deal to past biases. That needs to be addressed in the governance regime.

There is a second and related point on the black box challenge. One feature of artificial intelligence is that the computer system learns from itself. The human operator or commissioner of the software may not know why the algorithm or AI software has made a recommendation or proposed a course of action. That is a big challenge for those of us who take an interest in science policy. The scientific method is all about transparency; it is about putting forward a hypothesis, testing it against the data, and either confirming or rejecting the hypothesis. That is all done publicly; publication is at the heart of the scientific method. If important conclusions are reached—and they may be accurate conclusions, with great predictive power—but we do not know how, because that is deep within the networks of the AI, that is a profound challenge to the scientific method and its applications.

Facial recognition software is a good example. The Metropolitan police is using facial recognition software combined with AI. It commissioned a study—a very rigorous study—from the National Physical Laboratory, which looks at whether any racial bias can be determined from the subjects detected through the AI algorithms. The study finds no evidence of that, but that is on the basis of a comparison of outputs against other settings; it is not based on knowledge of the algorithms, which in this case are proprietary. It may or may not be possible to look into the black box, but that is one question that I think Governments and regulators will need to address.

Dawn Butler

In evidence to the Committee—of which I am a member—the Met said that there was no bias in its facial recognition system, whereas its own report states that there is bias in the system, particularly with regard to identifying black and Asian women. In fact, the results are 86% incorrect. There are lots of ways of selling the benefits of facial recognition. Other countries across Europe have banned certain uses of facial recognition, while the UK has not. Does the right hon. Gentleman think that we need to look a lot more deeply into current applications of facial recognition?

Greg Clark

The hon. Lady makes an excellent point. These challenges, as I put them, do not often have easy resolutions. The question of detecting bias is a very important one. Both of us have taken evidence in the Committee, and in due course we will need to consider our views on it, but she is right to highlight it as a challenge that needs to be addressed if public confidence and justice are to be served. It cannot be taken lightly or as read; we need to look at it very closely.

There is a challenge on securing privacy. My hon. Friend the Member for Boston and Skegness made a very good point about an employer taking people’s temperatures, which could be an indication of pregnancy, and the risk that that data may be used in an illegal way. That is one example. I heard another about the predictive power of financial information. A transaction that pays money to a solicitors’ firm known to have a reputation for advising on divorce can be a very powerful indicator of a deterioration in the financial circumstances of a customer in about six months’ time. Whether the bank can use that information, detecting a payment to a firm of divorce solicitors, to downgrade a credit rating in anticipation is a matter that I think at the very least should give rise to debate in this House. It shows that there are questions of privacy: the use of data gathered for one purpose for another.

Since we are talking about data, there is also a challenge around access to data, and there is something of a paradox about it. The Committee has taken evidence from many software developers, which quite often are small businesses founded by a brilliant and capable individual. However, to train AI software, they need data. The bigger the dataset, the more effective the training, so there are real economies of scale when it comes to data. There is a prospective contrast between very small software developers, who cannot do anything without data, and the very large companies in whose hands that data may sit. Those of us who use Google know that it has a lot of information on us. I mentioned banks; they have a lot of information on us, too. That is not readily accessible to small start-ups, so access to data is something we will need to address.

Another challenge we need to address is access to compute, which is to say, the power to analyse data. Again, the bigger the computer, the bigger the compute power and the more effective and successful algorithms will be, but that can be a barrier to entry for smaller firms. If compute is reserved to giants, that has profound consequences for the development of the industry. It is one of the reasons why I think the Government are right to consider plans for a dedicated compute resource in this country.

Those issues combine to make for what we might call an anti-trust challenge, to which the hon. Member for Bristol North West referred. There is a great danger that we may already be concentrating market power in the hands of a very small number of companies, from which it is very difficult thereafter to diversify and achieve the degree of contestability and competition on which the full benefits of AI depend. Our regulators, in particular our competition regulators, will need to pay close attention to that.

Related to that is the law and regulation around intellectual property and copyright. In the creative industries, copyright gives strong protection to people who create their own original work. How much modification, or use without payment and licensing, can be tolerated without damaging the returns to, and the vibrancy of, our crucial creative sector is a very important question.

Another challenge is on liability, which mirrors some of the debates taking place about our large social media platforms. If we develop a piece of AI in an application that is used for illegal purposes, should we, as the developer or the person who licenses it, be responsible for its use by an end user, or should that be a matter for them? In financial services, we have over time imposed strong requirements on providers of financial services, such as banks, to, in the jargon, know your customer—KYC. It is not sufficient just to say, “I had no reason to suppose that my facilities were going to be used for money laundering or drug trafficking.” There is a responsibility to find out what the intended use is. Those questions need to be addressed here. The hon. Member for Bristol North West also raised questions about employment and the transition to a new model of employment, a transition that has some upsides.

One of the classic definitions of a sentient computer is that it passes the Turing test: if there were a screen between a person and the computer they were interacting with, would they know that it was a computer, or would they think it was a human being? The experience of a lot of my constituents when dealing with some large bureaucracies is that even if there is a human on the end of the telephone, they might as well be a computer, because they are driven by the script and the software. In fact, one might say that they fail the Turing test. The greater personalisation of AI may overcome what can be a pretty dispiriting experience for employees who have to park their humanity and read out a script to a consumer. There are big challenges but also opportunities there.

A couple of other things have been referred to, such as the challenge of international co-ordination. We have the agency to set our own rules, but there is no point in doing so without taking the opportunity to influence the world. We will be stronger if we have—at least among like-minded countries, and preferably beyond—a strong consensus about how we should proceed.

Mr David Davis (Haltemprice and Howden) (Con)

My right hon. Friend’s words, “at least among like-minded countries”, triggered a thought. If we do not include China—in lots of other areas we exclude it for moral and ethical reasons—it will be a futile exercise. As far as I can tell, China wants to be involved. What is his view on involving countries such as China?

Greg Clark

My view is that it should be a global initiative. At the very least, strong shared security interests will bring like-minded nations together. We should advance that, and we may put protections in place with other, linked nations. I completely agree with my right hon. Friend that we should look to establish a global consensus. There is sometimes pessimism about whether it is possible to regulate genies that have come out of the bottle, but if we think of technologies such as human cloning, there is not a country in the world—as far as I am aware—that has not recognised it as ethically wrong and acted against it. In fact, I think there is a person in China in jail at the moment for having attempted it.

I will draw my remarks to a close, having set out the rich range of challenges that stand before Governments around the world and our regulators. They are not easy things to get right, but it is of profound importance that we think carefully and put in place the best possible governance system to maximise the benefits and see off the harms that may result. For the Minister and his colleagues across Whitehall, that means a busy summer preparing for the summit, but I wish them well, and I look forward to the House taking a great interest and participating in and around the planned summit.

14:51
Dawn Butler (Brent Central) (Lab)

It is a pleasure to follow the Chair of the Science, Innovation and Technology Committee, of which I am a member. I congratulate the hon. Member for Boston and Skegness (Matt Warman) on securing the debate and the Backbench Business Committee on allowing it.

I used to be a computer programmer, so I like tech—I need to put that out there. This debate on AI features three types of people. I compare those three types of people to some friends that I used to go out partying with. One would get completely wasted; one would just sit in the corner and not want to get involved; and I would be the driver, so I would stay sober but enjoy the party. In this debate I like to think that I am staying sober, enjoying the party and making sure that everyone, myself included, gets home safely.

Darren Jones

Could I invite my hon. Friend to inform the House which of us in this debate is the one who got wasted?

Dawn Butler

I do not think my friend would look kindly on me if I said their name. I always made sure they got home safely—that is the main thing.

My hon. Friend the Member for Streatham (Bell Ribeiro-Addy) told me the other day that she asked ChatGPT to write a letter from her to constituents on the dangers of AI. This is what ChatGPT churned out:

“Dear [Constituent],

Thank you for reaching out to me with your concerns about large language model AI. I share your worries about the potential risks that this technology poses and I appreciate the opportunity to address them.

As you may be aware, large language model AI is an advanced form of artificial intelligence that can generate human-like text by training on vast amounts of data. While this technology has shown tremendous potential in various fields, including language translation, chatbots, and content creation, it also presents significant ethical, social, and legal challenges.

One of the main concerns about large language model AI is its potential to perpetuate biases and discrimination. If the data used to train these models are biased, the AI will learn and replicate those biases in its output, leading to harmful outcomes. Moreover, these models have the ability to generate highly realistic but entirely fake text, which could be used to spread disinformation, manipulate public opinion, and even harm individuals.

Therefore, I believe that the regulation and governance of large language model AI are essential to ensure that it is used ethically and responsibly. The UK government has been actively engaging with the tech industry, academia, and civil society to understand the implications of AI and develop a regulatory framework that promotes innovation while safeguarding human rights and values.”

That, to me, is an illustration of the fact that even AI itself recognises that there are some issues with AI. It shows that we need clear regulation, which we do not quite have at the moment. There is still time for the Government’s White Paper to change that, and I hope that debates of this kind will enable change to happen.

Many Members have referred to the use of AI for medical advances, and quantum computers will certainly enable medicines and medical solutions to be found much more quickly. However, as I said when evidence was being given to the Science, Innovation and Technology Committee, even something as simple as body mass index, which is used in the medical world, is a flawed measurement. The use of BMI in the building of AI will integrate that bias into anything that the AI produces. Members may not be aware that the BMI scale was created not by a doctor but by an astronomer and mathematician in the 1800s. What he was trying to do was identify l’homme moyen—the average man—in statistical terms. The scale was never meant to be used in the medical world in the way that it is. People can be prevented from having certain medical procedures if their BMI is too high. The Committee was given no evidence that a flawed measurement such as BMI would be ruled out of, or mitigated in, the systems being built for the medical world. We should be worried about this, because in 10 or 20 years’ time it will be too late to explain that BMI was always discriminatory against women, Asian men and black people. It is important for us to get this right now.

I recognise the huge benefits that AI can have, but I want to stress the need to stay sober and recognise the huge risks as well. When we ask certain organisations where they get their data from, the response is very opaque: they do not tell us. I understand that some of them get their data by mass scraping of sites such as Reddit, which is not really where people would go to become informed on many things.

If we do not take this seriously, we will be automating discrimination. It will become so easy just to accept what the system is telling us, and people who are already marginalised will become further marginalised. Many, if not most, AI-powered systems have been shown to contain bias, whether against people of colour, women, people with disabilities or those with other protected characteristics. For instance, in the case of passport applications, the system keeps on saying that a person’s eyes are closed when in fact they have a disability. We must ensure that we measure the impact on the public’s rights and freedoms alongside the advances in AI. We cannot become too carried away—or drunk—with all the benefits, without thinking about everything else.

At the beginning, I thought it reasonable for the Government to say, “We will just expand legislation that we already have,” but when the Committee was taking evidence, I realised that we need to go a great deal further—that we need something like a digital Bill of Rights so that people understand and know their rights, and so that those rights are protected. At the moment, that is not the case.

There was a really stark example when we heard evidence about musicians, music and our voices. Our voices are currently not protected, so with the advancement of deepfakes, anybody in this House could have their voice attached to something and would have no legal recourse. I believe that we need a digital Bill of Rights that would outlaw the most dangerous uses of AI, which should have no place in a real democracy.

The Government should commit to strengthening the rights of the public so that they know what is AI-generated or whether facial recognition—the digital imprint of their face—is being used in any way. We know, for instance, that the Met police have on file millions of people’s images—innocent people—that should not be there. Those images should be taken off the police database. If an innocent person’s face is on the database and, at some point, that is put on a watch list, the domino effect means that they could be accused of doing something they have not done.

The UK’s approach to AI currently diverges from that of our closest trading partners, and I find that quite strange. It is not a good thing, and it sets up an apparent trade-off between progress and safety. I think we should always err on the side of safety and ethics. Progress will always happen; we cannot stop it. Companies will always invest in AI. It is the future, so we do not have to worry about that—people will run with it. What we have to do is ensure that we protect people’s safety, because otherwise, instead of being industry leaders, we in the UK will be known as the country that has shoddy or poor practices. Nobody really wants that.

Some countries are outlawing certain uses of facial recognition, but we are not doing that in the UK, so we increasingly look like the outlier in this discussion of, and protection around, AI. A Government’s first job is to protect their citizens, so we should protect citizens now from the dangers of AI.

Harms are already arising from AI. The Government’s recently published White Paper takes the view that strong, clear protections are simply not needed. I think the Government are wrong on that. Strong, clear protections are most definitely needed—and needed now. Even if the Government just catch up with what is happening in Europe and the US, that would be more than we are doing at the moment. We need new, legally binding regulations.

The White Paper currently has plans to water down data rights and data protection, and the Data Protection and Digital Information (No. 2) Bill paints an alarming picture. It will redefine what counts as personal data. All the existing protections were put in place piecemeal to ensure that personal data is protected. If we lower the protection in the definition of what counts as personal data, any company will be able to use our personal data for anything it wants, and we will have very limited recourse to stop it. At the end of the day, our personal data is ultimately what powers many AI systems, and it will be left ripe for exploitation and abuse. The proposals are woefully inadequate.

The scale of the challenge is vast, but instead of reining in this technology, the Government’s approach is to let it off the leash, and that is problematic. When we received evidence from a representative of the Met police, she said that she has nothing to hide, so what is the problem in having the fingerprint, if you like, of her face everywhere that she goes? I am sure that we all have either curtains or blinds in our houses. If we are not doing anything illegal, why have curtains or blinds? Why not just let everyone look into our house? Most abuse happens in the home so, by the same argument, surely allowing everyone to look into each other’s houses would eliminate a lot of abuse.

In our country we have the right to privacy, and people should have that right. Our digital fingerprints should not be taken without our consent, as we have policing by consent. The Met’s use of live facial recognition and retrospective facial recognition is worrying. I had a meeting with Mark Rowley the other day and, to be honest, he did not really understand the implications, which is a worry.

Like many people, I could easily get carried away and get drunk with this AI debate, but I am the driver. I need to stay sober to make sure everyone gets home safely.

15:05
Jo Gideon Portrait Jo Gideon (Stoke-on-Trent Central) (Con)
- View Speech - Hansard - - - Excerpts

It is a pleasure to follow the hon. Member for Brent Central (Dawn Butler). I join everyone in congratulating my hon. Friend the Member for Boston and Skegness (Matt Warman) on securing this important debate.

Everybody is talking about artificial intelligence, which is everywhere. An article in The Sentinel, Stoke’s local paper, recently caught my eye. Last week, the Home Secretary visited my constituency to open a Home Office facility in Hanley, a development providing more than 500 new jobs in Stoke-on-Trent. The article reflected on the visit and, amusingly, compared the Home Secretary’s responses to questions posed by the local media with the responses from an AI. Specifically, the Home Secretary was asked whether Stoke-on-Trent had taken more than its fair share of asylum seekers through the asylum dispersal scheme, and about the measures she is taking to ensure that asylum seekers are accommodated more evenly across the country. She replied:

“The new Home Office site is a vote of confidence in Stoke-on-Trent... They will be helping to bring down the asylum backlog and process applications more quickly.”

The same question was posed to ChatGPT, which was asked to respond as if it were the Home Secretary. The AI responded:

“I acknowledge the city has indeed taken on a significant number of asylum seekers. This kind of uneven distribution can place stress on local resources and create tension within communities. It is clear we need a more balanced approach that ensures all regions share responsibility and benefits associated with welcoming those in need.”

The AI also referred to reviewing the asylum dispersal scheme, strengthening collaboration with local authorities, infrastructure development and the importance of public awareness and engagement.

We all know what it is like to be on the receiving end of media questions, and a simple and straightforward answer is not always readily available. I suppose the AI’s response offers more detail but, unsurprisingly, it does not tell us anything new. It is, after all, limited to the information currently on the internet when formulating its answers. Thankfully, the AI did not take to making things up on this occasion—we must hope that it does not, but such fabrication is one of the big debates.

That raises the fundamental question on this topic: what is truth? We must develop a robust ethical framework for artificial intelligence. The UK should be commended for embracing an entrepreneurial and innovative approach to artificial intelligence, and we know that over-regulation stifles creativity and all the good things AI has to offer. However, AI has become consumer-focused and increasingly accessible to people without technical expertise, and our regulatory stance must reflect that shift. Although there should be a departure from national regulatory micromanagement, the Government have a role to play in protecting the public against potential online harms; it cannot be left to self-regulation by individual companies.

Let us also remember that artificial intelligence operates within a global space. We cannot regulate the companies that are developing this technology if they are based in another nation. This is a complicated space in which to navigate and create safeguards.

Balancing those concerns is increasingly complex and challenging, and conversations such as this must help us to recognise that regulation is not impossible and that it is incredibly important to get it right. For example, when the tax authorities in the Netherlands employed an AI tool to detect potential childcare benefit fraud, it made mistakes, resulting in innocent families facing financial ruin and thousands of children being placed in state custody on the basis of false accusations. When the victims tried to challenge the decisions, they were told that officials could not access the algorithmic inputs, so it was impossible to establish how the decisions had been made. That underlines the importance of checks and balances.

Dawn Butler Portrait Dawn Butler
- Hansard - - - Excerpts

The hon. Lady is absolutely right to raise those concerns, especially as regards the Home Office. Big Brother Watch’s “Biometric Britain” report set out how much money the Home Office is paying to companies whose identities we do not know. If we do not know who those companies are, we cannot know how they gather, develop and use their data. Does she agree that it is important that we know who is getting money for what?

Jo Gideon Portrait Jo Gideon
- Hansard - - - Excerpts

The hon. Lady makes a good point, and that is a big part of this debate: transparency is essential. The Government’s current plans, set out in the AI White Paper, do not place any new obligations on public bodies to be transparent about their use of AI; to make sure that their AI tools meet accuracy and non-discrimination standards, as she rightly said; or to ensure that there are proper mechanisms for challenge and redress when AI decisions go wrong. The White Paper proposes a “test and learn” approach to regulation, but we must also be proactive. Technology is changing rapidly while policy lags behind, and once AI is beyond our control, implementing safeguards becomes implausible. We should acknowledge that we cannot afford to wait and see how its use might cause harm and undermine trust in our institutions.

While still encouraging sensible innovation, we should also learn from international experiences. We must encourage transparency and put in place the proper protections to avoid damage. Let us consider the financial sector, where banks traditionally analyse credit ratings and histories when deciding who to lend money to. I have recently been working with groups such as Burnley Savings and Loans, which manually underwrites all loans and assesses the risk of each loan by studying the business models and repayment plans of its customers. Would it be right to use AI to make such decisions? If we enter a world where there is no scope for gut feeling, human empathy and intuition, do we risk impoverishing our society? We need to be careful and consider how we want to use AI, being ethical and thoughtful, and remaining in control, rather than rolling it out wherever possible. We must strike the right balance.

Research indicates that AI and automation are most useful when complemented by human roles. The media can be negative about AI’s impact, leading to a general fear that people will lose their jobs as a result of its growth. However, historically, new technology has also led to new careers that were not initially apparent. It has been suggested that the impact of AI on the workplace could rival that of the industrial revolution. So the Government must equip the workforce of the future through skills forecasting and promoting education in STEM—science, technology, engineering and maths.

Furthermore, we must remain competitive in AI on the global stage, ensuring agility and adaptability, in order to give future generations the best chances. In conjunction with the all-party parliamentary group on youth affairs, the YMCA has conducted polling on how young people feel about the future and the potential impact of AI on their careers. The full results will be announced next month, but the polling found that AI could not only lead to a large amount of job displacement, but provide opportunities for those from non-traditional backgrounds. More information on skills and demand will help young people to identify their career choices, and will support industries and businesses in preparing for the impact of AI.

I am pleased that the Department for Education has already launched a consultation on AI in education, which is open until the end of August. Following that, we should work hard to ensure that schools and universities can quickly adapt to AI’s challenges. Cross-departmental discussion that brings together AI experts and educators is important, so that the UK stays at the cutting edge of AI developments and younger generations get the advice they need to adapt.

AI is hugely powerful and possesses immense potential. ChatGPT has recently caught everybody’s attention, and it can create good stories and news articles, like the one I shared, but the underlying technology has been in use for years and, right now, we are not keeping up. We need to be quicker at adapting to change: monitoring closely, staying alert to potential dangers and stepping in when and where necessary, to ensure the safe and ethical development of AI for the future of our society and the welfare of future generations.

Roger Gale Portrait Mr Deputy Speaker (Sir Roger Gale)
- Hansard - - - Excerpts

Recalling a conversation that we had earlier in the day, I am tempted to call Robin Millar in the style of Winston Churchill.

15:15
Robin Millar Portrait Robin Millar (Aberconwy) (Con)
- View Speech - Hansard - - - Excerpts

For the benefit of Members present, Mr Deputy Speaker and I had the chance to discuss and look at the qualities of ChatGPT. Within a matter of seconds, ChatGPT produced a 200-word speech in the style of Winston Churchill on the subject of road pricing. It was a powerful demonstration of what we are discussing today.

I congratulate my hon. Friend the Member for Boston and Skegness (Matt Warman) on conceiving the debate and bringing it to the Floor of the House. I thank the Chair of the Business and Trade Committee, the hon. Member for Bristol North West (Darren Jones), and the Chair of the Science, Innovation and Technology Committee, my right hon. Friend the Member for Tunbridge Wells (Greg Clark), for their contributions. As a Back Bencher, I found it fascinating to hear about their roles as Chairs of those Committees and how they pursue lines of inquiry into a subject as important as this one.

I have been greatly encouraged by the careful and measured consideration that hon. Members from across the House have given the subject. I congratulate the hon. Member for Brent Central (Dawn Butler) on perhaps the most engaging introduction to a speech that I have heard in many a week. My own thoughts went to the other character at the party who thinks they are sober, but whom everyone else can see is not. I leave it to those listening to the debate to decide which of us fits which caricature.

I have come to realise that this House is at its best when we consider and discuss the challenges and opportunities facing our society, our lives and our ways of working, and this debate addresses both challenge and opportunity. First, I will look at what AI is, because without knowing that, we cannot build on the subject or have a meaningful discussion about what lies beyond. In considering the development of AI, I will look at how we in the UK have a unique advantage. I will also look at the inevitability of a degree of destruction, because some risk and challenge lies ahead. Finally, I hope to end on a more optimistic and positive note, with some questions about what the future holds.

Like many of us, I remember where I was when I saw Nelson Mandela make that walk to freedom. I remember where I was when I saw the images on television of the Berlin wall coming down. And I remember where I was, sitting in a classroom, when I saw the tragedy of the NASA shuttle falling from the sky after its launch. I also remember where I was, and the computer I was sitting at, when I first engaged with ELIZA. Those who are familiar with artificial intelligence will know that ELIZA was an early conversational program that played the role of a counsellor—someone with whom people could engage. My right hon. Friend the Member for Tunbridge Wells has already alluded to the Turing test, so I will not speak more of that, but that is where my fascination with and interest in this matter started.

To bring things right up to date, as mentioned by Mr Deputy Speaker, we now have ChatGPT and the power of what it can do. I am grateful to my hon. Friend the Member for Stoke-on-Trent Central (Jo Gideon) and to the hon. Member for Brent Central. I am richer not only for their contributions, but because I had a private bet with myself that at least two Members would use and quote from ChatGPT in the course of the debate, so I thank them both for the extra fiver in my jar.

In grounding our debate in an understanding of what AI is, I was glad that my hon. Friend the Member for Boston and Skegness mentioned the simulation of an unmanned aerial vehicle and how it took out its operator for being the weak link in delivering what it had been tasked with doing. That, of course, is not the point of the story, and he did well to go on to mention that the UAV had adapted—adapted to take that step. In the simulation, when the rule changed, it adapted again and said, “Now I will take out the means of communication by which the operator, whom I can no longer touch, controls me.”

The principle there is exactly as hon. Members have mentioned: the system can work only with the data that it is given and the rules with which it is set. That is the lesson of apocryphal stories such as that one. In that particular case, there is a very important principle—the idea of a “human in the loop”. Within the cycle of data, processing, decision making and action, there must remain a human hand guiding the system. The more critical the consequence and the more critical the action, the more important it is that the human is there.

If we think of the potential application of AI in defence, it would be very straightforward—complex, but straightforward—and certainly within the realms of what is possible for AI to be used to interpret real-time satellite imagery to detect troop movements and to respond, or to recommend a response, accordingly. That is where the human in the loop becomes critical. These things are all possible with the technology that we have.
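That gating of critical actions lends itself to a simple illustration in code. The sketch below is a minimal, hypothetical example—the threshold, names and actions are invented for the purpose and are not drawn from any real defence system. An automated recommendation is carried out directly only when its consequences are low-stakes; anything at or above a criticality threshold must wait for human sign-off.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    criticality: float  # 0.0 (trivial) to 1.0 (irreversible consequences)

# Invented threshold: actions at or above it require explicit human approval.
APPROVAL_THRESHOLD = 0.5

def human_approves(action: Action) -> bool:
    # Stand-in for a real review step, such as an operator console.
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def handle(action: Action) -> None:
    # Critical actions are blocked unless a human signs them off.
    if action.criticality >= APPROVAL_THRESHOLD and not human_approves(action):
        print(f"Blocked pending human review: {action.description}")
        return
    print(f"Executing: {action.description}")

handle(Action("re-route a supply convoy", criticality=0.2))   # runs unattended
handle(Action("engage a detected target", criticality=0.95))  # needs approval
```

The design point is that the gate sits inside the decision cycle itself, so the system cannot reach its most consequential actions without a human hand on the loop.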

What AI does well is find, learn and recognise patterns. In fact, we live our lives in patterns at both a small and a large scale. AI is incredibly good—we could even say superhuman—at seeing those patterns and predicting next steps. We have all experienced apps such as TikTok and Facebook on our phones. We find ourselves suddenly shaking our heads and thinking, “Gosh, I have just lost 15 minutes or longer, scrolling through.” That is because the algorithms in the software spot a pattern in what we like to see, how long we dwell on it and what we do with it, and then feed us another similar item to consume.
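That feed behaviour—spot what holds a user’s attention, then serve more of the same—can be caricatured in a few lines. This is a deliberately crude sketch of attention-weighted recommendation, not any platform’s actual algorithm; the categories and dwell times are invented.

```python
from collections import defaultdict
import random

# Invented viewing history: (category, seconds the user dwelt on the item).
history = [("cats", 40), ("politics", 3), ("cats", 55), ("football", 12)]

# Score each category by total dwell time: attention is the only signal here.
scores = defaultdict(float)
for category, dwell_seconds in history:
    scores[category] += dwell_seconds

def next_item() -> str:
    # Sample in proportion to past attention, so the feed drifts towards
    # whatever has already held the user longest.
    categories = list(scores)
    weights = [scores[c] for c in categories]
    return random.choices(categories, weights=weights, k=1)[0]

print([next_item() for _ in range(5)])  # mostly "cats"
```

Each item consumed then updates the scores, which is why the loop tightens the longer one scrolls.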

Perhaps more constructively, artificial intelligence is now used in agriculture. Tractors carry booms on their backs fitted with multiple robots. Each of those little robots uses an optical sensor to look at the individual plants it passes over and, in a split second, identifies whether each plant is a crop that is wanted or a weed that is not. More than that, it identifies whether the plant is healthy, infected with a parasite or a mould, or infested with insects. It then delivers a targeted squirt of whatever substance is needed—a nutrient, a weedkiller or a pesticide—to deal with that single plant. All this is done by a tractor moving across a field without a driver, guided by GPS and an autonomous system to maximise the efficiency of its coverage of the area. AI is used in all those things but, again, it comes down to recognising patterns. There are advantages in that: no more harmful blanket administration of pesticides or excessive use of chemicals, because they can now be very precisely targeted.
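Plant by plant, the spraying robots run a classify-then-act loop. Here is a hedged sketch of that pipeline; the lookup table and labels are invented, standing in for the trained vision model and the spray hardware.

```python
from enum import Enum, auto

class PlantState(Enum):
    HEALTHY_CROP = auto()
    WEED = auto()
    DISEASED_CROP = auto()
    INFESTED_CROP = auto()

# Map each diagnosis to a targeted treatment for that single plant,
# in place of a blanket application across the whole field.
TREATMENT = {
    PlantState.HEALTHY_CROP: None,
    PlantState.WEED: "weedkiller",
    PlantState.DISEASED_CROP: "fungicide",
    PlantState.INFESTED_CROP: "pesticide",
}

def classify(observation) -> PlantState:
    # Stand-in for the boom-mounted robot's optical sensor and vision model;
    # in this sketch the "observations" are already labels.
    return observation

def treat_row(observations) -> None:
    for observation in observations:
        state = classify(observation)
        substance = TREATMENT[state]
        if substance:
            print(f"{state.name}: targeted squirt of {substance}")
        else:
            print(f"{state.name}: no action needed")

treat_row([PlantState.HEALTHY_CROP, PlantState.WEED, PlantState.INFESTED_CROP])
```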

To any experts listening, let me say that I make no pretence of expertise; this is, in some ways, my own mimicry of things I have read and learned and am fascinated by. Experts will say that it is not patterns that AI is good at, but abstractions. That can be a strange concept; an abstraction is the model that we pull out of, and build from, what we are looking at. Without going into too much detail, there is something in what the hon. Member for Brent Central said about bias and prejudice within systems. I suggest that bias does not actually exist within the system unless it is intentionally programmed; it is a layer that we apply on top of what the system produces, and we give it that name. The computer has no understanding of bias or prejudice; it is just processing—that is all. We apply an interpretation on top, and that interpretation can indeed be harmful and dangerous. We just need to be careful about that distinction.

Dawn Butler Portrait Dawn Butler
- Hansard - - - Excerpts

The hon. Gentleman is absolutely right: AI does not create; it generates. It generates from the data that is input. The simplified version is “rubbish in, rubbish out”—it is more complex than that, but that is the simplest way of putting it. If we do not sort out the biases in the data before we put it in, the output will be biased.
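The “rubbish in, rubbish out” point needs no sophisticated model to demonstrate. In this hypothetical sketch, a naive predictor trained on skewed historical hiring decisions simply replays the skew: the bias enters with the data, and the system generates from it.

```python
from collections import Counter

# Invented historical decisions: group B candidates were rejected
# despite qualification scores equal to or better than group A's.
training_data = [
    ("A", 7, "hired"), ("A", 7, "hired"), ("A", 6, "hired"),
    ("B", 7, "rejected"), ("B", 7, "rejected"), ("B", 8, "rejected"),
]

# A naive "model": predict the majority outcome seen for each group.
outcomes_by_group = {}
for group, score, outcome in training_data:
    outcomes_by_group.setdefault(group, Counter())[outcome] += 1

def predict(group: str, score: int) -> str:
    # The score ends up ignored: group membership dominates, because
    # that is the only consistent pattern the data contains.
    return outcomes_by_group[group].most_common(1)[0][0]

print(predict("A", 7))  # "hired"
print(predict("B", 7))  # "rejected" -- the historical bias, replayed
```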

Robin Millar Portrait Robin Millar
- Hansard - - - Excerpts

The hon. Lady—my hon. Friend, if I may—is absolutely correct. It is important to understand that we are dealing with something that, as I will come onto in a moment, does not have a generalised intelligence, but is an artificial intelligence. That is why, if hon. Members will forgive me, I am perhaps labouring the point a little.

A good example is autonomous vehicles and the abstractions of events that the AI must create—a car being driven erratically, for example. While the autonomous vehicle drives along, its cameras constantly scan what is happening around it on the road. It needs to do that in order to recognise patterns against those abstractions and respond to them. Of course, once it has that learning, it can act very quickly: there are videos on the internet, taken from the dashcams of cars driven autonomously and without a driver, of the car slowing down, changing lane or moving to one side of the road because it has predicted, from the behaviour of the other cars on the road, that an accident is going to happen—and sure enough, seconds later, the accident occurs ahead, but the AI has successfully steered the vehicle to one side.

That is impressive, but the limitation is that if the AI learns only about wandering cars and not also about rocks rolling on to the road, a falling tree, a landslide, a plane crash, an animal running into the road, a wheelchair, a child’s stroller or an empty shopping cart, it will not know how to respond to those. They are sometimes called edge cases, because they are not the mainstream but happen at the edges. They are hugely important, and they all have to be accounted for. Even for a falling tree, the abstraction must allow for trees that are big or small, in leaf or bare, falling towards the car or across the road. We can see both the scale of the challenge facing AI and the accomplishment in how well it has done what it has done so far.
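In code, the edge-case problem comes down to what a system does when an observation matches none of its learned abstractions. A common engineering answer is a conservative fallback, sketched here with invented categories rather than a real driving stack.

```python
# Learned abstractions and the responses trained for each of them.
KNOWN_RESPONSES = {
    "erratic_vehicle": "change lane and slow down",
    "pedestrian_crossing": "brake to a stop",
    "debris_on_road": "steer around if clear, otherwise brake",
}

def respond(observation: str) -> str:
    # A known pattern: act on the trained abstraction.
    if observation in KNOWN_RESPONSES:
        return KNOWN_RESPONSES[observation]
    # An edge case the model never learned (a falling tree, a stray
    # shopping cart): fail safe rather than guess.
    return "reduce speed and hand control back to the human"

for event in ("erratic_vehicle", "falling_tree", "empty_shopping_cart"):
    print(f"{event}: {respond(event)}")
```

The fallback is deliberately dull: when the abstraction fails, the safest response is usually to slow down and defer, not to improvise.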

That highlights the Achilles heel of AI, because what I have been describing is what is called generalised intelligence. Generalised intelligence is something that we as humans turn out to be quite good at—or at least something that is hard for computers to replicate reliably. What a teenager can learn in a few hours—that is, driving a car—takes an AI billions of images, videos and scenarios to learn. A teenager in a car intuitively knows that a rock rolling down a hillside or a falling tree presents a real threat to the road and its users. The AI has to learn those things; it has to be told them. Crucially, however, once the AI knows those things, it can apply them and respond much more quickly and much more reliably.

The system does, though, have that ability to learn. To go back to the agricultural example, the years spent gathering images of healthy and unhealthy plants, creating libraries and then teaching the system can now be compressed because of that ability to learn. That is another factor in what lies ahead: we have to expect not just that change will come, but that the pace of change itself will quicken in the future. I hope it is clear, then, that AI is not a mind of its own. There is no ghost in the machine. It cannot have motivation of its own origin, nor can it operate beyond the parameters set by its programs or the physical constraints built into its hardware.

As an aside, I should make a comment about hardware, since my right hon. Friend the Member for Tunbridge Wells and others may comment on it. In terms of hardware constraints, the suggestion is that the probability of the sudden take-off of general artificial intelligence in the future is very small. AI derives its abilities to make rapid calculations from parallelisation, that is, simultaneously running multiple calculations across central processing units.

The optimisation of such programs appears to have hit rapidly diminishing returns in the mid to late 2010s, as processing speed is increasingly constrained by the number of CPUs available. An order-of-magnitude increase in throughput therefore requires a similar increase in available hardware—an exceedingly expensive endeavour. In other words, I would suggest that basic engineering parameters mean that we cannot suddenly be blindsided by the emergence of a malevolent global intelligence, as the movies would have us believe.
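The diminishing-returns argument is, in essence, Amdahl’s law: if any fraction of a workload cannot be parallelised, adding processors buys ever smaller speedups. A worked sketch follows; the 5% serial fraction is an arbitrary figure chosen purely for illustration.

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on speedup when only (1 - serial_fraction) of the work parallelises."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# If even 5% of the work must run serially, throughput is capped at
# 1 / 0.05 = 20x no matter how much hardware is added.
for n in (1, 10, 100, 1000):
    print(f"{n:>4} processors -> {amdahl_speedup(0.05, n):5.2f}x speedup")
# Output: 1 -> 1.00x, 10 -> 6.90x, 100 -> 16.81x, 1000 -> 19.63x
```

On that arithmetic, a tenfold jump from 100 to 1,000 processors buys less than a further 17% of speedup—the shape of the constraint described above.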

I am grateful for your indulgence, Mr Deputy Speaker, as I establish this baseline of what AI can and cannot do; it is important in order to then consider the question of development. The key point I highlight is the opportunity we have to create in the UK—specifically in the post-Brexit UK—an environment for the development of AI. If colleagues will indulge me—I mean not to make political points—I will make an observation on the contrast between the environment we have here and that in other parts of the world.

In any rapidly developing area of technology, it is important to differentiate between the unethical application of a technology and the technology itself. Unfortunately, the EU’s AI Act illustrates a failure to recognise that distinction. By banning models capable of emotion and facial recognition, for example, EU lawmakers may believe that they have banned a tool of mass surveillance, but in fact they risk banning the development of a technology with myriad otherwise very good applications, such as therapies and educational tools that can adjust to user responses.

The same holds for the ban on models that use behaviour patterns to predict future actions. Caution there is wise, but a rule preventing AI from performing a process that is already used by insurers, credit scorers, interest-rate setters and health planners across the world—for fear that it might be used to develop a product for sale to nasty dictators—is limiting. Perhaps the most egregious example of that conflation is the ban on models trained on published literature: a move that effectively risks lobotomising large language model research applications such as ChatGPT in the name of reducing the risk of online piracy. We might as well ban all factories simply to ensure that none is used to manufacture illegal firearms.

In short, and in words of one syllable: it is easy to ban stuff. It is much harder—and this is the task to which we must apply ourselves—to create a moral framework within which regulation can help a technology such as AI to flourish. The desire to control and protect is understandable, but an inappropriate regulatory approach risks smothering the AI industry as it draws its first breaths. In fact, as experts will know better than I do, AI is exceptionally good at finding loopholes in rules-based systems, so there is a deep irony in the idea of making it the subject of a rules-based system and expecting it not to find or use a way to navigate around the rules.

I am encouraged by the Government’s contrasting approach and the strategy that they published last year. We have recognised that Britain is in a position to do much better. Rather than constraining development before applications become apparent, we look to those applications. We can do that because, unlike the tradition of Roman law—which is inherently prescriptive and underlies the thinking of many nations, including the EU—the common law tradition of this country allows us to build an ethical framework for monitoring industries without resorting to blanket regulation that kills the underlying innovation.

That means that, in place of prescriptive dictates, regulators and judges can—in combination with industry leaders—innovate, evolve and formalise best practice in proportion to evolving threats. Given that the many applications of AI will be discoverable only through the trial and error of hundreds of dispersed sectors of the economy, that is the only option open to us that does not risk culling future prosperity and—without wishing to overdramatise—creating an invisible graveyard of unsaved lives.

It is a most un-British thing to say, but this British system is a better way. Indeed, it is being adopted by nations around the world, which are switching from a prescriptive regulatory approach to one of common law for many reasons. First, it facilitates progress. Just as no legislator can presume to know all the positive applications of a new technology such as AI, they are also blind to its potential negative applications. In the UK, in this environment, AI could prove to be a game-changer for British bioengineering. The world-leading 100,000 Genomes Project and UK Biobank, combined with our upcoming departure from the GDPR, promise AI-equipped researchers an unparalleled opportunity to uncover the genetic underpinnings of poor health and pharmaceutical efficacy, to the benefit of health services around the world.

The second reason is that it is more adaptable to threats. Decentralised systems of monitoring, involving industry professionals with a clear understanding of the technology, are the most effective form of risk management we can realistically devise. An adaptable system has the potential to insulate us from another risk of the AI era: this technology in the hands of hostile powers and criminals. As in previous eras, unilateral disarmament would not make us safer. Instead, it would leave us without the tools to counteract the superior predictive abilities of our foes, rendering us a contemporary Qing dynasty marvelling at the arrival of steamships.

It is vital to recognise that AI is going to bring destruction. This is perhaps the most revolutionary technological innovation of our lifetime, and with it, AI brings the potential for creative destruction across the economy at a faster pace than even the world wide web. I will quote Oppenheimer when he cited the Bhagavad Gita, which says:

“Now I am become Death, the destroyer of worlds.”

That is not to sensationalise or to fall into the same trap that I warned of at the start of my remarks, but it is important to recognise that there will be change. Just as we have seen personnel stripped out of factories as they are replaced by machinery, we will see whole sectors lost to this technology. The critical point is not to stop it, but to recognise it, adapt to it and use its strengths to develop.

We should be up front about this; failing to be so risks an excessive backlash. We cannot simply react with regulation; we must harness this technology. The industrial revolution brought both unprecedented economic prosperity and massive disruption. For all we know, had the Luddites enjoyed a world of universal suffrage, their cause might have triumphed, dooming us to material poverty thereafter. If Britain is to reap the benefits of this new era of innovation, we must be frank about its potential, including its disruptive potential, and be prepared to make a strong case in defence of the future it promises. Should we fail in that task, surrendering instead to the temptations of reactionary hysteria, our future may not look like an apocalyptic Hollywood blockbuster. It will, however, resemble that common historical tale of a once-great power sleepwalking its way into irrelevance.

On a more hopeful note, I turn to the question of where next? I spoke before of the pattern-based approaches that amplify conformity, such as we see on TikTok and Facebook. This quality may be attractive to technocrats—predictability, patterns, finding gaps and filling them—but that points to an increasing conformity that I, and I think many others, find boring. Artificial intelligence should be exploring what is new and innovative.

What about awe—the experience and the reaction of our mind when seeing or realising something genuinely new that does not conform to past patterns? A genuinely intelligent system would regularly be creating a sense of awe and wonder as we experience new things. Contrast the joy when we find a new film of a type we have not seen before—it covers the pages of the newspapers, dominates conversations with our friends and brings life to our souls, even—with being fed another version of the same old thing we have got used to, as some music apps are prone to do. Consider the teacher who encouraged us to try new things and have new experiences, and how we grew through taking those risks, rather than just hearing more of the same.

That raises key questions of governance, too. We have heard about a Bill of digital rights, and questions of freedom were rightly raised by the hon. Member for Brent Central, but what about a genuinely free-thinking future? What would AI bring to politics? We must address that question in this place. What system of government has the best record of dealing with such issues? Would AI support an ultimate vision of fairness and equity via communism? Could it value and preserve traditions and concepts of beauty that, as Scruton argued, could be said to have true value only in a conservative context? These have always been big questions for any democracy, and I believe that AI may force us to address them in depth and at pace in the near future.

That brings me to a final point: the question of a moral approach. Here, I see hope and encouragement. My hon. Friend the Member for Stoke-on-Trent Central talked about truth, and I believe that ultimately all AI does is surface these deeper questions and issues. The one I would like to address, very briefly, is justice. The law is a rulebook; patterns, abstractions, conformity and breach are all suited to AI, but such a system does not predict or produce mercy or forgiveness. As we heard at the national parliamentary prayer breakfast this week, justice opens the door to mercy and forgiveness, which are vital to the future of any modern society.

We all seek justice—we often hear about it in this House—but I would suggest that what we really seek is what lies beyond: mercy and forgiveness. Likewise, when we talk about technology, it is often not the technology itself but what lies beyond it that is our aim. As such, I am encouraged that there will always be a place for humanity and those human qualities in our future. Indeed, I would argue, they are essential foundations for the future that lies ahead.

15:40
John Nicolson Portrait John Nicolson (Ochil and South Perthshire) (SNP)
- View Speech - Hansard - - - Excerpts

I will keep my speech short and snappy, and not repeat anything that any other Member has said—I know that is unfashionable in this place. I begin by congratulating the hon. Member for Boston and Skegness (Matt Warman) on introducing the debate. He was one of the very best Ministers I have ever come across in my role on the Front Bench, and I am sorry to see him on the Back Benches; he is well due promotion, I would say. I am sure that has just damned his prospects for all eternity.

As my party’s culture spokesperson, I am very keenly aware of the arts community’s concerns about AI and its risks to the arts. I have now been twice—like you, Mr Deputy Speaker, I am sure—to “ABBA Voyage”, once in my role on the Culture, Media and Sport Committee and once as a guest of the wonderful Svana, its producer. As I am sure you know, Mr Deputy Speaker, the show uses AI and motion capture technology combined with a set of massive, ultra-high-quality screens to create an utterly magnificent gig. It felt like the entire audience was getting to see ABBA in their prime; indeed, it was perhaps even better than it would have been originally, because we now have ultra-modern sound quality, dazzling light shows and a vast arena in which to enjoy the show. It was history, airbrushed to perfection and made contemporary. It seems to be a success, having sold over 1 million tickets so far and with talk of its touring the world. In fact, it was so good that towards the end, some of the audience started waving at Agnetha and Björn. They had become completely convinced that they were not in fact AI, but real people. There were tears as people looked at Agnetha, which says something about the power of technology to persuade us, does it not?

Soon, I will be going to see Nile Rodgers—that really is a very good gig, as I do not need to tell the other Front Benchers present. Again, I am going to be his guest. He is a legendary guitarist, songwriter and singer; he gave evidence to our Select Committee; and he has sold 500 million albums worldwide. Nile will be incredible —he always is—but he will also be 70 years of age. It will not be a 1970s early funk gig. The audience will include the mature, people in the prime of middle youth such as myself, and also the Glastonbury generation. It is easy to envisage an AI Nile Rodgers, produced by a record company and perhaps touring in competition with the very real Nile Rodgers, competing for ticket sales with the great man himself. Indeed, it is easy to envisage the young recording artists of today signing away their rights to their likenesses and vocals in perpetuity, with long-term consequences.

Many in the arts sphere feel safe from AI because they suspect that human creativity at the artistic level cannot be replicated. I very much hope that they are right, but once that human creativity has been captured, it can be reproduced eternally, perhaps with higher production values. It is not, I feel, the sole responsibility of artists, musicians and playwrights to concern themselves with radical developments in AI—they have work to do as it is—and surely the job of protecting them is ours. We need to get on top of the copyright issues, and we need to protect future performers from having their rights sold away along with their very first contracts. We as parliamentarians must think deeply, listen and research widely. I have heard some heartening—sometimes lengthy—speeches that show that there is, cross party, an awareness and a willingness to grasp this, and that is deeply encouraging.

However, the UK Government have much work to do on their White Paper; when they look at it again and listen to the submissions, they must make improvements. As it stands, it allows public institutions and private companies to use new, experimental AI on us and then try to correct the flaws afterwards. It uses us, our communities and our industries as guinea pigs for untested code, to see whether it makes matters better or worse. The risks are many for the arts community, which is deeply concerned about fakery, and there is an argument that the AI White Paper empowers such digital fakery.

In closing, it is absolutely key that we listen to experts in this field, as we should always do to inform our decision making, but in particular to those in the arts and music industry because they will be so deeply affected.

Roger Gale Portrait Mr Deputy Speaker (Sir Roger Gale)
- Hansard - - - Excerpts

I call the shadow Minister.

15:46
Alex Davies-Jones Portrait Alex Davies-Jones (Pontypridd) (Lab)
- View Speech - Hansard - - - Excerpts

It is an honour to close this debate on behalf of the Opposition. I thank all colleagues for their contributions, and I pay tribute to the hon. Member for Boston and Skegness (Matt Warman) for bringing forward this interesting and thoughtful debate.

We can all agree that artificial intelligence has tremendous potential for social good. Indeed, we know that artificial intelligence technologies already contribute about £3.7 billion to the UK economy. There is some genuinely incredible innovation out there, much of which I have had the privilege of seeing at first hand over the past 18 months. Whether it be trained robots working with our armed forces as part of our defence and recovery efforts, apps to support female health or AI programmes that could one day make our working lives easier and more flexible, the opportunities really are endless.

It is no surprise, therefore, that the Government have been shouting as loudly as possible about their plans to capitalise on this innovation. However, it is crucial that innovation does not come at the expense of everyday working people. While Labour welcomes this debate, as a proud Welsh MP, I am clear that the Government need to go further to ensure that the discourse on AI and innovation is not focused entirely on the opportunities here in London.

That said, we can all recognise that technologies such as AI have the power to truly transform lives. This could range from improving medical services and delivering better, more efficient public services to working to deliver jobs and employment opportunities for all for generations to come. While AI and ChatGPT have been mentioned heavily today and are regularly in the headlines, much of this technology has been around for years or decades. I am therefore interested to hear from the Minister exactly why it took his Department so long to produce the long-overdue UK science and technology framework, which finally came out in March this year.

The same can be said of the Government’s AI White Paper, which is out of date just months after being published. In the White Paper’s foreword, the Secretary of State—the right hon. Member for Chippenham (Michelle Donelan)—claims:

“My vision for an AI-enabled country is one where our NHS heroes are able to save lives using AI technologies that were unimaginable just a few decades ago.”

However, that points to the exact issue with this Government’s approach to tech, which is that it absolutely fails to be forward-thinking.

The Government’s current plan does not place any new obligations on public bodies to be transparent about their use of AI. That point was put most powerfully by my good friend, my hon. Friend the Member for Brent Central (Dawn Butler). AI tools need to meet accuracy and non-discrimination standards, and there need to be proper mechanisms for challenge and redress when AI decisions do—as inevitably they will—go wrong. Instead, the White Paper promises a test and learn approach to regulation, which essentially translates to “hurt first, fix later”. That is a worrying approach for all involved. Let us be clear: our country faces a choice right now about who benefits from the huge disruption that tech and AI will bring and, in my hon. Friend’s words, we need to “stay sober”. Will it be those who already hold wealth and power, or will it be the start-up firms trying to break in and disrupt the industry, the patients trying to book an appointment with their GP, and the workers using technology to enhance and improve their roles?

The UK has many brilliant AI companies based here, and thriving sectors such as life sciences and professional services that can support and capitalise on new technologies, but they risk being under-utilised. The lack of certainty from a Government with no proper industrial strategy is not only holding back UK tech businesses; it is stifling economic growth at the worst possible time. The reality is that other countries are already light years ahead. In Israel, police, fire and emergency services now come as a package deal, thanks to AI technology. Simple changes, such as having different phone numbers to call for separate emergency services, have allowed AI to play a central role in saving lives.

Of course, with any modernisation we must ensure that our laws keep up. Colleagues will be aware that the Digital Markets, Competition and Consumers Bill is in Committee right now, and that important Bill will go some way to address the large monopolies that have been allowed to proliferate online for far too long. Yet again, the Government have been too slow to act on getting the right balance between innovation and regulation. Labour recognises the challenges ahead, and none of us wants AI, or other intelligence technologies, to operate without proper regulation.

We recognise the concerns about risks, from the immediate to the existential, which need to be handled with care. However, the Government have failed even to cover the basics in their AI White Paper. Instead, as in too many other policy areas in this brief, they are kicking the can down the road with consultations and road maps that will take up to two years to complete. I invite the Minister to imagine what technological developments will take place over that timeline, and I urge the Department to hurry up and get on with the job.

We have already heard that there are steps the Government could be taking right now to get ahead, including responding to the growing calls for the regulation of foundation AI models. It does not take an expert to recognise that AI systems are not built from nothing, so what assessment has the Minister made of the merits of regulating those models now? I am sure that he would have widespread support from colleagues, including those on the Conservative Benches, who share concerns over AI, as well as from those who want to support start-ups and scale-ups, which need clarity before developing their tech for the masses. We all want the UK tech industry to continue to thrive, but a responsible approach must be part of that conversation.

The Government have an obligation to protect their citizens, and given their approach to online safety, with their last-minute amendments that severely weakened the Online Safety Bill, it will come as no surprise that I have concerns that this Government are not up to the job when it comes to regulating AI. That is why the Government must work harder to ensure that our laws are keeping pace. The only way we can ensure that they do is to have a Government in power who will harness technologies such as AI and constantly think to the future. It has become incredibly clear that that is not the Conservative Government’s approach, and I am afraid that their lines on tech are simply not getting traction with the public, well rehearsed though they are.

It is all very well that the Prime Minister spent London Tech Week meeting AI CEOs and announcing that the UK will soon host a global summit on AI, but the Government have done little to reassure everyday working families that their lives will be improved, not impacted, by developments in the tech industry. We cannot put people’s jobs at risk and simply hand them over to tech giants without thoughtful regulation. Many of our constituents have already paid a heavy price thanks to this Government’s utter mishandling of the energy crisis and the increasing cost of living. They deserve better than to have their jobs put at further risk if the Government fail to take a sensible approach to regulating tech and AI.

There is also much work to be done to ensure that the opportunities afforded by these sectors truly are open to all. When we speak about AI and innovation, it can often feel as though it is a closed conversation, open only to those with specific educational paths or career trajectories. Although it is clear that the Prime Minister has a personal interest in the industry—frankly, I am not sure we heard much from his predecessors in recent years about it—the barriers still exist.

Ultimately, two-thirds of employers are struggling to recruit workers with digital skills. Skills such as software engineering are no longer sector specific, and the economy of the future will require those with digital skills across all industries. AI technologies need more than just mathematicians and statisticians; there is also strong demand for designers, creators and people who can think creatively. Labour will ensure that we have the skills across our economy to win the global race for the technologies of the future, by establishing a new national body to oversee a national effort to meet the skills needs of the coming decades across all regions and nations of the UK.

The Government talk a great deal about levelling up, but we all know it must be more than just an empty slogan. I am keen to hear from the Minister about the exact steps his Department is taking to address these issues.

Lastly, and perhaps most importantly, these industries rely on our ability to get online. That is a simple premise for some, but the unfortunate reality is that it is not so easy for most people. The Government’s so-called commitment to getting people online is laughable. If they cannot get the basics right, including a reliable, fast broadband connection, how on earth can people across the UK be reassured that this Government’s approach to AI and tech will not see them worse off, too?

Broadband is central to powering our increasingly digital economy, but the Government’s slow roll-out has left parts of the UK, such as my home town, stuck decades behind. In addition, once people are online, the Government have failed to legislate to educate: they have not committed to strong media literacy provisions in the Online Safety Bill—in fact, those provisions were dropped from an earlier draft. How can we be assured that the Government will work to ensure that tech more widely is understood by the masses? The Government could have put these simple policies in place years ago; instead, they focus their efforts on landing press coverage for their minimal announcements during London Tech Week, which will, let us be honest, change little in the lives of the majority of people in the UK.

On the other hand, Labour is listening. We are ambitious for technologies such as AI, and we want to see them embedded in our everyday services, whether to speed up welfare claims or diagnose patients in hospitals. Labour is committed to doing so responsibly, and we will work in partnership with businesses to face the future and address the challenges, opportunities and risks head-on. The Government’s record on AI is limited, and far too often it is a case of too little, too late. Those in the industry are desperate for guidance, and Labour is all too ready to provide that clarity. I hope the Minister is listening.

15:55
Paul Scully Portrait The Parliamentary Under-Secretary of State for Science, Innovation and Technology (Paul Scully)
- View Speech - Hansard - - - Excerpts

I start by conveying my appreciation to my hon. Friend the Member for Boston and Skegness (Matt Warman) for securing today’s debate and for speaking so powerfully in opening what has been on the whole—until the word soup of the hon. Member for Pontypridd (Alex Davies-Jones), which I will cover in a second—a thoughtful debate about this important and complex topic.

We have had some considered speeches, and I will touch on some of those. We heard from the Chairman of the Business and Trade Committee, the hon. Member for Bristol North West (Darren Jones), about the risk to workers. My right hon. Friend the Member for Tunbridge Wells (Greg Clark) spoke about how we have to choose our words carefully and keep cool heads in regulation, and that goes to the heart of what we are talking about today. The hon. Member for Brent Central (Dawn Butler) talked about how, instead of constraining the technology, the Government are letting it off the leash, and I do not think that is right. When we talk about the AI White Paper, it is the flexibility that keeps it up to date, rather than it being out of date.

We heard from my hon. Friends the Members for Stoke-on-Trent Central (Jo Gideon) and for Aberconwy (Robin Millar), and the hon. Member for Ochil and South Perthshire (John Nicolson) talked about the gigs he gets to go to. In the Department for Science, Innovation and Technology, we have the sharp focus needed to look at AI and the digital skills that the hon. Member for Pontypridd was talking about. Six months ago, when I was in the Department for Digital, Culture, Media and Sport, I had to leave a digital economy council meeting to go to a dinner with Dell. When I explained that, I was asked, “You’re going to dinner with Adele?” I said, “No—just Dell, unfortunately.” We now have the sharp focus needed to take the AI White Paper forward.

First, let me talk about the fact that AI is fast becoming part of our daily lives. It is in our phones, our cars, our offices and our workplaces. The explosion in the use of AI tools such as DALL-E, Midjourney, ChatGPT and Bard shows that we are on the cusp of a new era of artificial intelligence. As my hon. Friend the Member for Boston and Skegness rightly asserted, it has the potential to bring enormous benefits to our society, and we must always remember that. We have to be aware of the risks and manage them carefully on an international basis, which is summed up by the global summit that the Prime Minister is hosting here this autumn, but we must always look to the opportunities, too, and how AI will change the world. That includes in the NHS, where the use of automated lip readers such as Liopa are bringing a voice to the voiceless by improving treatments for patients who cannot speak, and where risk prediction tools, such as the Scottish Patients at Risk of Readmission and Admission tool, or SPARRA, can provide GPs in Scotland with monthly risk scores for patients and predict the likelihood of their being admitted to hospital.

AI can also change our economy, driving greater consumer choice, efficiencies and productivity. One only has to look at AI’s impact through the widespread use of virtual assistants such as Siri, Cortana, Google Assistant and Alexa to see how AI is helping consumers to manage their daily lives more efficiently.

However, there are unique risks, too, so it is right that Governments around the world play their part in ensuring that this technology is developed and applied in a safe, transparent way. In the UK, the Government have long recognised the transformative potential of this technology, and we have sought to be ahead of the curve. With respect, I say to the hon. Member for Pontypridd that since 2014 we have invested £2.5 billion in building a thriving AI ecosystem; we are recognised as having the third biggest AI ecosystem in the world after America and China.

The AI sector deal that we announced back in 2018 was followed by our national AI strategy in 2021. That set out our 10-year vision for ensuring that the UK remains at the forefront of the AI revolution by investing in skills and infrastructure, driving adoption across sectors, and governing AI effectively through regulation, technical standards and assurance. The House will know that my right hon. Friend the Prime Minister laid out his ambitions for the UK on AI at London Tech Week earlier this month. That ambition is for us to lead at home and abroad, and to lead change in our public services.

A theme discussed at some length today is the regulatory environment for artificial intelligence. As hon. Members will know, the Government committed to reviewing the AI regulatory and governance landscape in our national AI strategy. We subsequently published our AI regulation White Paper in March. The approach that the White Paper advocates is proportionate and adaptable. The proposed regulatory framework draws on the expertise of regulators. It supports them in considering AI in their sector by applying a set of high-level principles, which are outcomes-focused and designed to promote responsible AI innovation and adoption. We will work with and through regulators and others in the sector.

On the criticism of the White Paper, I have to say that industry supports our plans. We engaged with over 130 organisations on the proposals last year, and developers, business users and funders praised the flexibility of our approach, which will support innovation and build public trust. The White Paper remains very much in date because of its flexibility. Those who have read it know that its outcomes-focused, adaptable approach is deliberately designed to allow us to manage emerging and unforeseen risks, as well as those risks that we already know about.

The White Paper proposes a number of central support functions, which will be initially provided from within Government, but we will leverage activities and expertise from across the broader economy where possible. That will ensure that the framework effectively addresses AI risks in a way that is proportionate, future-proof and responsive.

Several people raised the issue of international co-operation. There we have shown true leadership. No country can tackle AI on its own, given its global nature. My right hon. Friend the Prime Minister announced earlier this month that we will host the first major global summit on AI safety this autumn. The summit will consider the risks of AI, including frontier systems, and will discuss how those risks can be mitigated through internationally co-ordinated action. The summit will also be a platform where countries can work together on developing a shared approach to mitigating risks.

However, the summit cannot be viewed in isolation. It builds on the extensive work we have done on strengthening AI safety with the OECD, the Council of Europe, the Global Partnership on Artificial Intelligence and the UN, and through the G7 Hiroshima AI process. Bilaterally, we have also made great strides in co-ordinating on AI safety with key international partners. In June, the UK signed the Atlantic declaration with the US, in which we agreed to accelerate co-operation on AI, with a focus on ensuring its safe and responsible development. Further, in May, the UK agreed the Hiroshima accord with Japan, in which we committed to focusing UK-Japan AI discussions on promoting human-centric and trustworthy AI, and on interoperability between our AI governance frameworks. We intend to go even further: in line with the G7 leaders’ May 2023 Hiroshima communiqué, we have committed to advancing international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, aligned with shared democratic values.

The hon. Member for Ochil and South Perthshire spoke about AI in the creative industries. Obviously, the advent of AI has sent ripples of transformation across multiple industries, and the creative sphere is no exception. There are plenty of opportunities there, but there are also challenges that we have to address. The ability to automate creative tasks means that, in some cases, work such as copywriting, which could have taken hours if not days, could now take merely a few minutes. Some Members spoke about the risk of homogenising creativity, with the obvious concerns about intellectual property that stem from that. Again, I think it is right that we strike an appropriate balance in the regulation of AI to ensure that we do not stifle innovation, but that we ensure we protect the UK’s thriving creative industries.

In conclusion, the Government remain entirely committed to ensuring that AI develops and is applied safely not just here, but around the world. By effectively addressing the risks that Members have highlighted today, we can also seize the many opportunities that AI has to offer, from transforming our NHS with the discovery of new drugs, new treatments and new ways of supporting patients, to helping us race ahead to net zero and building a greener, fairer, stronger economy. We want to continue engaging with Members across this House, along with our partners in industry and academia, to deliver on those missions. We want to build the broadest possible coalition to ensure that the appropriate guard rails are in place for this technology to develop in a safe, fair and transparent way that will keep the UK right at the forefront of the AI revolution now and in the future. That is our vision and, working with hon. Members across the House, that is what we will deliver.

16:05
Matt Warman Portrait Matt Warman
- View Speech - Hansard - - - Excerpts

I thank all Members who contributed to what has been an important and, I hope, informative debate. We discussed a number of issues whose impact on humanity will be profound.

I want to touch briefly on discrimination, which the hon. Member for Brent Central (Dawn Butler) raised. If we get AI right, it will end so much of the discrimination that has blighted society; if we get it wrong, it will supercharge it. If we keep our eye on one thing for the future impact of AI, it must be fairness: fairness for workers across the country, so that they can take advantage of a technology that will make their jobs better and their lives happier and healthier; and fairness for people who have previously faced discrimination.

This technology will change huge aspects of this country. I am confident that the Government’s approach, and the summit the Minister alluded to just a few seconds ago, will be a key part in Britain showing leadership; that this is a country where our values, which are so firmly against discrimination and so firmly in favour of opportunity, fairness and innovation, can lead the world. The summit will be a hugely important moment for the future of a technology that will shape our world. I look forward to the Prime Minister playing an important role in that and I look forward to the development of the policies the Minister outlined, to all our benefit. I thank everyone for the debate.

Question put and agreed to.

Resolved,

That this House has considered artificial intelligence.