Commons Chamber
I am grateful for the hon. Gentleman’s intervention. As I will say in a moment, all of this is contingent on one key principle: building trust with the public. We need to do so through actions, not just words. We have to take the public with us every step of the way, because otherwise we will not have the permission to deliver the transformation that, ultimately, will be profoundly beneficial for them. I have striven throughout this speech and since I have had the pleasure of this role, in opposition and in government—
I thank the Secretary of State for giving way; he is being generous with his time. On that point about the public being confident in any systems we roll out, does he agree that we need to ensure guard rails are in place so that organisations and companies know what their responsibilities are?
I am grateful. The interventions are building up, but I think I can answer both together to satisfy both Members. Yes, safety has to be built in at the outset and the public need to see that. We have inherited a problem with safety in our country. Women and girls do not feel safe outside after dark. Parents do not think their children are safe online. We have an issue with safety that we need to get a grip of. I feel incredibly strongly, as do Ministers and the Department, that we need to reassure people that as we embrace the technological advances that sit before us, we do so in a way that has safety built in from the outset. That is something we will do, and we have high expectations that others will do so too. As I will mention in a moment, we are setting statutory obligations on people at the pioneering side of AI.
Yes. The exascale investment was being delivered through UK Research and Innovation, an enterprise that receives nearly £9 billion every single year and that, under our manifesto, would have had a growing level of investment across the entirety of the spending review. There were plans in place to deliver the investment, which is why Edinburgh was so confident that it would be delivered. It was a clear priority in our spending plans and communicated in writing by the Secretary of State’s predecessor to the chief executive of UKRI. Notwithstanding the fact that the Treasury seems to have got his tongue immediately upon taking office, a project that the Treasury never loved seems to have been mysteriously cancelled. The project was being delivered by UKRI, an organisation with significant financial resources that far exceeded the £1.3 billion cost of the supercomputer. It is the wrong decision at the wrong time.
I wonder how the shadow Secretary of State feels about the Rosalind Franklin Laboratory in Leamington Spa, which received over £1 billion in Government funding. The last Government put it on Rightmove.
The last Government did not do that; it was an independent institute that had multiple sources of funding. As the Secretary of State and his Ministers will discover, funding of that nature is competitive funding that is allotted by independent research councils. It would not have been within the gift of me or any other Minister to abrogate that competitive funding process.
Inevitably, there are projects that are funded and projects that are not funded, but the exascale computer was a very clear priority. It sat within the overall financial resources of UKRI and, under our Government, there was an expanding level of resource. People should have absolute confidence that the programme would have continued and been delivered in the context of the much larger amount of money that is spent not just through the Department but by the Government as a whole. That was a good decision, and it would have had huge benefits for the UK. The chief executive of UKRI has talked at length about the benefits, and I think the Government are making the wrong decision. I urge the Secretary of State to go back, lock horns with the Treasury and seek to continue the project before it is too late, before contracts are cancelled and before the technology can no longer be procured.
Commons Chamber
I beg to move,
That this House has considered artificial intelligence.
Is it not extraordinary that we have not previously had a general debate on what is the issue of our age? Artificial intelligence is already with us today, but its future impact has yet to truly be felt, or indeed understood.
My aim in requesting this debate—I am very grateful to the Backbench Business Committee for awarding it—is twofold. First, it is to allow Members to express some views on an issue that has moved a long way since I was, in part, the Minister for it, and even since the Government White Paper came out only very recently. Secondly, it is to provide people with an opportunity to express their views on a technology that has to be regulated in the public interest, but also has to be seized by Government to deliver the huge improvements in public services that we all know it is capable of. I hope that the industry will hear the views of parliamentarians, and—dare I say it?—perhaps better understand where the gaps in parliamentarians’ knowledge might be, although of course those gaps will be microscopic.
I will begin with a brief and avowedly superficial summary of where artificial intelligence stands. At its best, AI is already allowing the NHS to analyse images better than ever before, augmenting the expertise of our brilliant and expanding workforce with technology that is analogous to adaptive cruise control—it helps; it does not replace. It is not a technology to be scared of, and patients will welcome that tool being put at the disposal of staff.
We are already seeing AI being used to inform HR decisions such as hiring and firing—an area that is much more complex and much more in need of some kind of regulation. We see pupils using it to research—and sometimes write—their essays, and we sometimes see schools using AI to detect plagiarism. Every time I drive up to my constituency of Boston and Skegness, I listen to Politico’s “Playbook”, voiced by Amazon’s Polly AI system. It is everywhere; it is in the car too, helping me to drive it. AI is creating jobs in prompt engineering that did not exist just a few years ago, and while it is used to generate horrific child sex abuse images, it is also used to detect them.
I want to take one example of AI going rogue that a senior American colonel talked about. It was claimed that a drone was awarded points for destroying a certain set of targets. It consulted its human controller on whether it should take a certain course of action, and was told that it should not. Because it got points for those targets, it decided that the logical thing to do was to kill its human controller, and when it was told that it should not do so, it tried to target the control tower that was communicating with its controller. That is the stuff of nightmares, except for the fact that that colonel was later declared to have misspoken. No such experiment ever took place, but just seconds ago, some people in this House might have believed that it did. AI is already damaging public trust in technology. It is damaging public trust in leadership and in democracy; that has already happened, and we must guard against it happening further. Both here and in America, elections are coming up soon.
Even in the most human sector, the creative industries, one radio presenter was recently reported to have uploaded her previous shows so that the artificial intelligence version of her could cover for her during the holidays. How are new staff to get their first break, if not on holiday cover? Millions of jobs in every sector are at stake. We also hear of analysts uploading the war games of Vladimir Putin to predict how he will fight in Ukraine, with remarkable accuracy. We hear of AI being used by those interested in antibiotics and by those interested in bioweapons. There are long-term challenges here, but there are very short-term ones too.
The Government’s White Paper promotes both innovation and regulation. It does so in the context of Britain being the most advanced nation outside America and China for AI research, development and, potentially, regulation. We can and should cement that success; we are helped by DeepMind, and by OpenAI’s decision only yesterday to open its first office outside the US in London. The Prime Minister’s proposed autumn summit should allow us to build a silicon bridge to the most important technology of this century, and I welcome it hugely.
I want to lay out some things that I hope could be considered at the summit and with this technology. First, the Government clearly need to understand where AI will augment existing possibilities and challenges, and most of those challenges will already be covered by legislation. Employment, for instance, is already regulated, and whether or not companies use AI to augment their HR system, it is already illegal to discriminate. We need to make sure that those existing laws continue to be reinforced, and that we do not waste time reinventing the wheel. We do not have that time, because the technology is already with us. Transparency will be key.
The hon. Member is making an important speech. Is he aware of the AI system that, in identifying potential company chief executive officers, would identify only male CEOs because of the data that had been input? Even though there is existing legislation, we have to be mindful of the data that is going into new technology and AI systems.
The hon. Member is absolutely right that, when done well, AI allows us to identify discrimination and seek to eliminate it, but when done badly, it cements it into the system in the worst possible way. That is partly why I say that transparency about the use of AI will be absolutely essential, even if we largely do not need new legislation. We need principles. When done right, in time this technology could end up costing us less money and delivering greater rewards, be that in the field of discrimination, in public services or anywhere in between.
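To make that concrete in miniature, here is a toy sketch (wholly synthetic data, with no resemblance to any real hiring system) of how a model trained on historically skewed labels simply learns the skew back:

```python
# A toy sketch, not any real hiring system: the historical labels favour
# one group regardless of skill, and the model dutifully learns that.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)       # 0 or 1: a protected characteristic
skill = rng.normal(size=n)          # identically distributed in both groups
# Biased history: only members of group 0 were ever "promoted".
promoted = ((skill > 0.5) & (group == 0)).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, promoted)
print(model.coef_)                  # strongly negative weight on `group`
```

The point stands however the model is built: if the training data encodes past discrimination, the model will reproduce it, and transparency about inputs and outputs is the only way outsiders can spot it.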
There is a second-order point, which is that we need to understand where loopholes that the technology creates are not covered by existing bits of legislation. If we think back to the time we spent in this House debating upskirting, we did not do that because voyeurism was somehow legal; we did it because a loophole had been created by a new technology and a new set of circumstances, and it was right that we sought to close it. We urgently need to understand where those loopholes are now, thanks to artificial intelligence, and we need to understand more about where they will have the greatest effects.
In a similar vein, we need to understand, as I raised at Prime Minister’s questions a few weeks ago, which parts of the economy and regions of the country will be most affected, so that we can focus the immense Government skills programmes on the areas that will be most affected. This is not a predictable change like the end of the coalmining industry, and we are not able to draw obvious lines on obvious maps. We need to understand the economy and how this impacts on local areas. To take just one example, we know that call centres—those things that keep us waiting for hours on hold—are going to get a lot better thanks to artificial intelligence, but there are parts of the country where call centre employment is particularly concentrated and growing. In the long run that will be a boon, but for the many people working in call centres it is also a hump that we need to get over, and we need to focus skills investment in certain areas and certain communities.
I do believe that, long term, we should be profoundly optimistic that artificial intelligence will create more jobs than it destroys, just as in every previous industrial revolution, but there will be a hump, and the Government need to help as much as they can in working with businesses to provide such opportunities. We should be optimistic that the agency that allows people to be happier in their work—personal agency—will be enhanced by the use of artificial intelligence, because it will take away some of the less exciting aspects of many jobs, particularly at the lower-paid end of the economy, but not by any means solely. There is no shame in eliminating dull parts of jobs from the economy, and there is no nobility in protecting people from inevitable technological change. History tells us that if we do seek to protect people from that technological change, we will impoverish them in the process.
I want to point to the areas where the Government surely must understand that potentially new offences are to be created beyond the tactical risk I have described. We know that it is already illegal to hack the NHS, for instance. That is a tactical problem, even if it might be somewhat different, so I want to take a novel example. We know that it is illegal to discriminate on the grounds of whether someone is pregnant or likely to get pregnant. Warehouses, many of them run by large businesses, gather a huge amount of data about their employees. They gather temperature data and movement data, and they monitor a huge amount. They gather data that goes far beyond anything we saw just a few years ago, and from that data companies can infer a huge amount—quite possibly including whether someone is pregnant.
Given that such data is already collected, should we now say that collecting it is illegal because it opens up a potential risk? I do not think we should, and I do not think anyone would seriously say we should, but it does open the door to a level of discrimination. Should we keep the current position—such discrimination is illegal, so companies can gather data but it is what they do with it that matters—or should we say that gathering it at all exposes people to risk and companies to a legal risk, which may take us backwards rather than forwards? Unsurprisingly, I think there is a middle ground that is the right option.
Suddenly, however, a question as mundane as collecting data about temperature and movements, ostensibly for employee welfare and to meet existing commitments, turns into a political decision: what information is too much and what analysis is too much? It brings us as politicians to questions that revert, suddenly and much more quickly than before, to ethics. There is a risk of huge and potentially dangerous information asymmetry. Some people say that there should be a right to a human review and a right to know what cannot be done. All these are ethical issues that arise because of the advent of artificial intelligence in a way that they have not previously. I commend to all Members the brilliant paper by Oxford University’s Professor Adams-Prassl on a blueprint for regulating algorithmic management, and I commend it to the Government as well.
AI raises ethical considerations that we have to address in this place in order to come up with the principles-based regulation that we need, rather than trying to play an endless game of whack-a-mole with a system that is going to go far faster than the minds of legislators around the world. We cannot regulate in every instance; we have to regulate horizontally. As I say, the key theme surely must be transparency. A number of Members of Parliament have confessed—if that is the right word—to using AI to write their speeches, but I hope that no more people have used AI to write their speeches than those who have already confessed. Transparency has been key in this place, and it should be key in financial services and everywhere else. For instance, AI-generated videos could already be forced to use watermarking technology that would make it obvious that they are not the real deal. As we come up to an election, I think that such use of existing technology will be important. We need to identify the gaps—the lacunae—both in legislation and in practice.
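As a toy illustration only (robust watermarking embeds hard-to-strip signals in the content itself, and nothing here reflects any particular product), the weakest form of the idea is simply recording a declared provenance label in a file’s metadata:

```python
# A toy sketch only: a declared, easily stripped provenance label in PNG
# metadata via Pillow. Real watermarking is far more robust than this.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64))             # stand-in for generated content
meta = PngInfo()
meta.add_text("provenance", "AI-generated")  # the declared label
img.save("generated.png", pnginfo=meta)

print(Image.open("generated.png").text)      # {'provenance': 'AI-generated'}
```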
Artificial intelligence is here with us today and it will be here for a very long time, at the very least augmenting human intelligence. Our endless creativity is what makes us human, and what makes us to some extent immune from being displaced by technology, but we also need to bear in mind that, ultimately, it is by us that decisions will be made about how far AI can be used and what AI cannot be used for. People see a threat when they read some of the most hyperbolic headlines, but these are primarily not about new crimes; they are about using AI for old crimes, but doing them a heck of a lot better.
I end by saying that the real risk here is not the risk of things being done to us by people using AI. The real risk is if we do not seize every possible opportunity, because seizing every possible opportunity will allow us to fend off the worst of AI and to make the greatest progress. If every student knows that teachers are not using it, far more fake essays will be submitted via ChatGPT. Every lawyer and every teacher should be encouraged to use this technology to the maximum safe extent, not to hope that it simply goes away. We know that judges have already seen lawyers constructing cases using AI and that many of the references in those cases were simply fictional, and the same is true of school essays.
The greatest risk to progress in our public services comes from not using AI: it comes not from malevolent people, but from our thinking that we should not embrace this technology. We should ask not what AI can do to us; we should ask what we can do with AI, and how Government and business can get the skills they need to do that best. There is a risk that we continue to lock in the 95% of AI compute that sits with just seven companies, or that we promote monopolies or the discrimination that the hon. Member for Brent Central (Dawn Butler) mentioned. This is an opportunity to avert that, not reinforce it, and to cement not prejudice but diversity. It means that we have an opportunity to use game-changing technology for the maximum benefit of society, and the maximum number of people in that society. We need to enrich the dialogue between Government, the private sector and the third sector, to get the most out of that.
This is a matter for regulation, and for global regulation, as is so much of the modern regulatory landscape. There will be regional variations, but there should also be global norms and principles. Outside the European Union and United States, Britain has that unique position I described, and the Prime Minister’s summit this autumn will be a key opportunity—I hope all our invites are in the post, or at least in an email. I hope that will be an opportunity not just for the Prime Minister to show genuine global leadership, but also to involve academia, parliamentarians and broader society in that conversation, allowing the Government to seize the opportunity and regain some trust in this technology.
I urge the Minister to crack on, seize the day, and take the view that artificial intelligence will be with us for as long as we are around. It will make a huge difference to our world. Done right, it will make everything better; done badly, we will be far poorer for it.
It is a pleasure to speak in this debate, and I congratulate my hon. Friend the Member for Boston and Skegness (Matt Warman) on securing it and on his excellent speech and introduction. It is a pleasure to follow my fellow Committee Chair, the hon. Member for Bristol North West (Darren Jones). Between the Business and Trade Committee and the Science, Innovation and Technology Committee, we have a strong mutual interest in this debate, and I know all of our members take our responsibilities seriously.
This is one of the most extraordinary times for innovation and technology that this House has ever witnessed. If we had not been talking so much about Brexit and then covid, and, perhaps more recently, Russia and Ukraine, our national conversation and—this goes to the point made by my hon. Friend the Member for Boston and Skegness—debates in this Chamber would have been far more about the technological revolution that is affecting all parts of the world and our national life.
It is true to say that, perhaps alongside the prominence that the discovery of vaccines against covid engendered, AI has punched through into public consciousness as a change in the development of technology. It has got people talking about it, and not before time. I say that because, as both Members who have made speeches have said, it is not a new technology, in so far as it is a technology at all. In fact, in a laconic question to one of the witnesses in front of our Committee, one member asked, “Was artificial intelligence not just maths and computers?” Indeed, one of the witnesses said that in his view it was applied statistics. This has been going on for some time.
My Committee, the Science, Innovation and Technology Committee—I am delighted to see my colleague the hon. Member for Brent Central (Dawn Butler) here—is undertaking a fascinating and, we hope, impactful inquiry into the future governance of AI. We are taking it seriously to understand the full range of issues that do not have easy or glib answers—if they do, those are best avoided—and we want to help inform this House and the Government as to the best resolutions to some of the questions in front of us. We intend to publish a report in the autumn, but given the pace of debate on these issues and, as I am sure the hon. Lady will agree, the depth of the evidence we have heard so far, we hope to publish an interim report sooner than that. It would be wrong for me as Chair of the Committee to pre-empt the conclusions of our work, but we have taken a substantial amount of evidence in public, both oral and written, so I will draw on what we have found so far.
Having said that AI is not new—it draws on long-standing research and practice—it is nevertheless true to say that we are encountering an acceleration in its application and depth of progress. To some extent, the degree of public interest in it, without resolution to some of the policy questions that the hon. Member for Bristol North West alluded to, carries some risks. In fact, the nomenclature “artificial intelligence” is in some ways unhelpful. The word “artificial” is usually used in a pejorative, even disdainful way. When combined with the word “intelligence”, which is one of the most prized human attributes, the “artificial” rather negates the positivity of the “intelligence”, leading to thoughts of dystopia, rather than the more optimistic side of the argument to which my hon. Friend the Member for Boston and Skegness referred. Nevertheless, it is a subject matter with which we need to grapple.
In terms of the pervasiveness of AI, much of it is already familiar to us, whether it is navigation by sat-nav or suggestions of what we might buy from Amazon or Tesco. The analysis of data on our behaviour and the world is embedded, but it must be said that the launch of ChatGPT to the public just before Christmas has catapulted to mass attention the power already available in large language models. That is a breakthrough moment for millions of people around the world.
As my hon. Friend said, much of the current experience of AI is not only benign, but positively beneficial. The evidence that our Committee has taken has looked at particular applications and sectors. If we look at healthcare, for example, we took evidence from a medical company that has developed a means of recognising potential prostate cancer issues from MRI scans well before any symptoms present themselves, and with more accuracy than previous procedures. We heard from the chief executive of a company that is using AI to accelerate drug discovery. It is designing drugs from data, and selecting the patients who stand to benefit from them. That means that uses could be found, among more accurately specified patient groups, for drugs that have failed clinical trials on the grounds not of safety but of efficacy. That offers an early prospect of better health outcomes.
We heard evidence that the positive effects of AI on education are significant. Every pupil is different; we know that. Every good teacher tailors their teaching to the responses and aptitudes of each student, but that can be done so much better if the tailoring is augmented through the use of technology. As Professor Rose Luckin of University College London told us,
“students who might have been falling through the net can be helped to be brought back into the pack”
with the help of personalised AI. In the field of security, if intelligence assessments of a known attacker are paired with AI-rich facial recognition technology, suspects may be pinpointed and apprehended before they have the chance to execute a deadly attack.
There are many more advantages of AI, but we must not only observe but act on the risks that arise from the deployment of AI. Some have talked about the catastrophic potential of AI. Much of what is suggested, as in the case of the example given by my hon. Friend the Member for Boston and Skegness, is speculative, the work of fiction, and certainly in advance of any known pathway. It is important to keep a cool head on these matters. There has been talk in recent weeks of the possibility of AI killing many humans in the next couple of years. We should judge our words carefully. There are important threats, but portents of disaster must be met with thinking from cool, analytical heads, and concrete proposals for steps to take.
I very much applaud the seriousness with which the Government are approaching the subject of the governance of AI. For example, a very sensible starting point is making use of the deep knowledge of applications among our sector regulators, many of which enjoy great respect. I have mentioned medicine; take the medical regulator, the Medicines and Healthcare products Regulatory Agency. With its deep experience of supervising clinical trials and the drug discovery process, experience for which it is renowned worldwide, it is clearly the right starting point. If AI is to be used in drug discovery or diagnostics, it makes sense to draw on that expertise.
It is also right to require regulators to come together to develop a joint understanding of the issues, and to ask them to work collectively on regulatory approaches, so that we avoid inconsistency and inadvertently applying different doctrines in different sectors. It is right that regulators should talk to each other, and that there should be coherence. Given the commonalities, there should be a substantial, well-funded, central capacity to develop regulatory competence across AI, as the Government White Paper proposed.
I welcome the Prime Minister’s initiative, which the hon. Member for Bristol North West mentioned. In Washington, the Prime Minister agreed to convene a global summit on AI safety in the UK in the autumn. Like other technologies, AI certainly does not respect national boundaries. Our country has an outstanding reputation on AI, the research and development around it, and—at our best—regulatory policy and regulation, so it is absolutely right that we should lead the summit. I commend the Prime Minister for his initiative in securing that very important summit.
The security dimension will be of particular importance. Like-minded countries, including the US and Japan, have a strong interest in developing standards together. That reflects the fact that we see the world through similar eyes, and that the security of one of us is of prime importance to the others. The hon. Member for Bristol North West, in his debate a few weeks ago, made a strong point about international collaboration.
One reason why a cool-headed approach needs to be taken is that the subject is susceptible to the involvement of hot heads. We must recognise that heading off the risks is not straightforward; it requires deep reflection and consideration. Knee-jerk regulatory responses may prove unworkable, will not be widely taken up by other countries, and may therefore be injurious to the protections that policy innovation aims to deliver. I completely agree with the hon. Gentleman that there is time for regulation, but not much time. We cannot hang around, but we need to take the appropriate time to get this right. My Committee will do what it can to assist on that.
If the Government reflect on these matters over the summer, their response should address a number of challenges that have arisen in this debate, and from the evidence that my Committee took. Solutions must draw on expertise from different sectors and professions, and indeed from people with expertise in the House, such as those contributing to this debate. Let me suggest briefly a number of challenges that a response on AI governance should address. One that has emerged is a challenge on bias and discrimination. The hon. Member for Brent Central has been clear and persistent in asking questions to ensure that the datasets on which algorithms are trained do not embed a degree of bias, leading to results that we would not otherwise tolerate. I dare say she will refer to those issues in her speech. For example, as has been mentioned, in certain recruitment settings, if data reflects the gender or ethnic background of previous staff, the profile of an “ideal” candidate may owe a great deal to past biases. That needs to be addressed in the governance regime.
There is a second and related point on the black box challenge. One feature of artificial intelligence is that the computer system learns from itself. The human operator or commissioner of the software may not know why the algorithm or AI software has made a recommendation or proposed a course of action. That is a big challenge for those of us who take an interest in science policy. The scientific method is all about transparency; it is about putting forward a hypothesis, testing it against the data, and either confirming or rejecting the hypothesis. That is all done publicly; publication is at the heart of the scientific method. If important conclusions are reached—and they may be accurate conclusions, with great predictive power—but we do not know how, because that is deep within the networks of the AI, that is a profound challenge to the scientific method and its applications.
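By way of a minimal sketch of what probing such a system from the outside can look like (synthetic data, with a stand-in model deliberately treated as opaque), permutation importance measures which inputs matter by shuffling them one at a time, without ever opening the box:

```python
# A minimal sketch: treat the fitted model as a black box and ask only
# which inputs matter, by permuting each feature and watching the score.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)  # opaque to us

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature {i}: importance {mean:.3f}")
```

Techniques like this reveal which inputs drive a decision, but not the reasoning inside the network, which is precisely the gap described above.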
Facial recognition software is a good example. The Metropolitan police is using facial recognition software combined with AI. It commissioned a study—a very rigorous study—from the National Physical Laboratory, which looks at whether any racial bias can be determined from the subjects that are detected through the AI algorithms. The study finds that there is no evidence of that, but that is on the basis of a comparison of outputs against other settings; it is not based on a knowledge of the algorithms, which in this case are proprietary. It may or may not be possible to look into the black box, but that is one question that I think Governments and regulators will need to address.
In evidence to the Committee—of which I am a member—the Met said that there was no bias in its facial recognition system, whereas its own report states that there is bias in the system, particularly with regard to identifying black and Asian women. In fact, the results are 86% incorrect. There are lots of ways of selling the benefits of facial recognition. Other countries across Europe have banned certain uses of facial recognition, while the UK has not. Does the right hon. Gentleman think that we need to look a lot more deeply into current applications of facial recognition?
The hon. Lady makes an excellent point. These challenges, as I put them, do not often have easy resolution. The question of detecting bias is a very important one. Both of us have taken evidence in the Committee and in due course we will need to consider our views on it, but she is right to highlight that as a challenge that needs to be addressed if public confidence and justice are to be served. It cannot be taken lightly or as read. We need to look at it very clearly.
There is a challenge on securing privacy. My hon. Friend the Member for Boston and Skegness made a very good point about an employer taking people’s temperatures, which could be an indication of pregnancy, and the risk that that data may be used in an illegal way. That is one example. I heard an example about the predictive power of financial information. A transaction that pays money to a solicitors’ firm known to have a reputation for advising on divorce can be a very powerful indicator of a deterioration in the financial circumstances of a customer in about six months’ time. Whether the bank can use that information, detecting a payment to a firm of divorce solicitors, to downgrade a credit rating in anticipation is a matter that I think at the very least should give rise to debate in this House. It shows that there are questions of privacy: the use of data gathered for one purpose for another.
Since we are talking about data, there is also a challenge around access to data. There is something of a paradox about this. The Committee has taken evidence from many software developers, which quite often are small businesses founded by a brilliant and capable individual. However, to train AI software, they need data. The bigger the dataset, the more effective the training is, so there are real returns to economies of scale when it comes to data. There is a prospective contrast between very small software developers, who cannot do anything without access to data, and the very large companies in whose hands that data may sit. Those of us who use Google know that it has a lot of information on us. I mentioned banks. They have a lot of information on us, too. That is not readily accessible to small start-ups, so access to data is something we will need to address.
Another challenge we need to address is access to compute, which is to say, the power to analyse data. Again, the bigger the computer, the bigger the compute power and the more effective and successful algorithms will be, but that can be a barrier to entry for smaller firms. If such resources are reserved to giants, that has profound consequences for the development of the industry. It is one of the reasons why I think the Government are right to consider plans for a dedicated compute resource in this country.
Those issues combine to make for what we might call an anti-trust challenge, to which the hon. Member for Bristol North West referred. There is a great danger that we may already be concentrating market power in the hands of a very small number of companies, from which position it is very difficult thereafter to diversify and secure the degree of contestability and competition on which the full benefits of AI depend. Our regulators, in particular our competition regulators, will need to pay close attention to that.
Related to that is the law and regulation around intellectual property and copyright. In the creative industries, our copyright gives strong protection to people who create their own original work. How much modification, or use without payment and licensing, can be tolerated without damaging the returns and the vibrancy of our crucial creative sector is a very important question.
Another challenge is on liability, which mirrors some of the debates taking place about our large social media platforms. If we develop a piece of AI in an application that is used for illegal purposes, should we, as the developer or the person who licenses it, be responsible for its use by an end user or should that be a matter for them? In financial services, we have over time imposed strong requirements on providers of financial services, such as banks, to, in the jargon, know your customer—KYC. It is not sufficient just to say, “I had no reason to suppose that my facilities were going to be used for money laundering or drug trafficking.” There is a responsibility to find out what the intended use is. Those questions need to be addressed here. The hon. Member for Bristol North West raised questions about employment and the transition to a new model of employment, many of which have some upsides.
One of the classic definitions of a sentient computer is that it passes the Turing test: if there was a screen between a person and the computer they were interacting with, would they know that it was a computer, or would they think it was a human being? The experience of a lot of my constituents when dealing with some large bureaucracies is that even if there is a human on the end of the telephone, they might as well be a computer because they are driven by the script and the software. In fact, one might say that they fail the Turing test. The greater personalisation of AI may overcome what can be a pretty dispiriting experience for employees who have to park their humanity and read out a script to a consumer. There are big challenges but also opportunities there.
A couple of other things have been referred to, such as the challenge of international co-ordination. We have the agency to set our own rules, but there is no point in doing so without taking the opportunity to influence the world. We will be stronger if we have—at least among like-minded countries, and preferably beyond—a strong consensus about how we should proceed.
It is a pleasure to follow the Chair of the Science, Innovation and Technology Committee, of which I am a member. I congratulate the hon. Member for Boston and Skegness (Matt Warman) on securing the debate and the Backbench Business Committee on allowing it.
I used to be a computer programmer, so I like tech—I need to put that out there. Participants in this debate on AI fall into three categories. I compare those three types of people to some friends that I used to go out partying with. One would get completely wasted; one would just sit in the corner and not want to get involved; and I would be the driver, so I would stay sober but enjoy the party. In this debate I like to think that I am staying sober, enjoying the party and making sure that everyone, myself included, gets home safely.
Could I invite my hon. Friend to inform the House which of us in this debate is the one who got wasted?
I do not think my friend would look kindly on me if I said their name. I always made sure they got home safely—that is the main thing.
My hon. Friend the Member for Streatham (Bell Ribeiro-Addy) told me the other day that she asked ChatGPT to write a letter from her to constituents on the dangers of AI. This is what ChatGPT churned out:
“Dear [Constituent],
Thank you for reaching out to me with your concerns about large language model AI. I share your worries about the potential risks that this technology poses and I appreciate the opportunity to address them.
As you may be aware, large language model AI is an advanced form of artificial intelligence that can generate human-like text by training on vast amounts of data. While this technology has shown tremendous potential in various fields, including language translation, chatbots, and content creation, it also presents significant ethical, social, and legal challenges.
One of the main concerns about large language model AI is its potential to perpetuate biases and discrimination. If the data used to train these models are biased, the AI will learn and replicate those biases in its output, leading to harmful outcomes. Moreover, these models have the ability to generate highly realistic but entirely fake text, which could be used to spread disinformation, manipulate public opinion, and even harm individuals.
Therefore, I believe that the regulation and governance of large language model AI are essential to ensure that it is used ethically and responsibly. The UK government has been actively engaging with the tech industry, academia, and civil society to understand the implications of AI and develop a regulatory framework that promotes innovation while safeguarding human rights and values.”
That, to me, is an illustration of the fact that even AI itself recognises that there are some issues with AI. It shows that we need clear regulation, which we do not quite have at the moment. There is still time for the Government’s White Paper to change that, and I hope that debates of this kind will enable change to happen.
Many Members have referred to the use of AI for medical advances, and quantum computers will certainly enable medicines and medical solutions to be found much more quickly. However, as I said when evidence was being given to the Science, Innovation and Technology Committee, even something as simple as body mass index, which is used in the medical world, is a flawed measurement. The use of BMI in the building of AI will integrate that bias into anything that the AI produces. Members may not be aware that the BMI scale was created not by a doctor but by an astronomer and mathematician in the 1800s. What he was trying to do was identify l’homme moyen—the average man—in statistical terms. The scale was never meant to be used in the medical world in the way that it is. People can be prevented from having certain medical procedures if their BMI is too high. The Committee was given no evidence that we would rule out, or mitigate, a flawed measurement such as BMI in the medical world. We should be worried about this, because in 10 or 20 years’ time it will be too late to explain that BMI was always discriminatory against women, Asian men and black people. It is important for us to get this right now.
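For reference, the scale in question is a single formula applied to everyone, weight divided by height squared, with fixed cut-offs; a minimal sketch using the standard WHO bands makes its one-size-fits-all nature plain:

```python
# Quetelet's index: weight over height squared, with one fixed set of
# cut-offs (the standard WHO bands) applied to everyone alike.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

value = bmi(70, 1.75)
print(round(value, 1))  # 22.9
print("normal" if 18.5 <= value < 25 else "outside the 'normal' band")
```

Nothing in the formula distinguishes sex, build or ethnicity, which is precisely the concern raised above: fed into a training set, those fixed thresholds carry their known skew with them.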
I recognise the huge benefits that AI can have, but I want to stress the need to stay sober and recognise the huge risks as well. When we ask certain organisations where they get their data from, the response is very opaque: they do not tell us where they are getting their data from. I understand that some of them do their mass data scraping on sites such as Reddit, which is not really where people would go to become informed on many things.
If we do not take this seriously, we will be automating discrimination. It will become so easy just to accept what the system is telling us, and people who are already marginalised will become further marginalised. Many, if not most, AI-powered systems have been shown to contain bias, whether against people of colour, women, people with disabilities or those with other protected characteristics. For instance, in the case of passport applications, the system keeps on saying that a person’s eyes are closed when in fact they have a disability. We must ensure that we measure the impact on the public’s rights and freedoms alongside the advances in AI. We cannot become too carried away—or drunk—with all the benefits, without thinking about everything else.
At the beginning, I thought it reasonable for the Government to say, “We will just expand legislation that we already have,” but when the Committee was taking evidence, I realised that we need to go a great deal further—that we need something like a digital Bill of Rights so that people understand and know their rights, and so that those rights are protected. At the moment, that is not the case.
We heard a really stark example in regard to musicians, music and our voices. Our voices are currently not protected, so with the advance of deepfakes, anybody in this House could have their voice attached to something and would have no legal recourse. I believe that we need a digital Bill of Rights that would outlaw the most dangerous uses of AI, which should have no place in a real democracy.
The Government should commit to strengthening the rights of the public so that they know what is AI-generated or whether facial recognition—the digital imprint of their face—is being used in any way. We know, for instance, that the Met police have on file millions of people’s images—innocent people—that should not be there. Those images should be taken off the police database. If an innocent person’s face is on the database and, at some point, that is put on a watch list, the domino effect means that they could be accused of doing something they have not done.
The UK’s approach to AI currently diverges from that of our closest trading partners, and I find that quite strange. It is not a good thing and there is an apparent trade-off between progress and safety. I think we should always err on the side of safety and ethics. Progress will always happen; we cannot stop progress. Companies will always invest in AI. It is the future, so we do not have to worry about that—people will run away with that. What we have to do is ensure that we protect people’s safety, because otherwise, instead of being industry leaders in the UK, we will be known as the country that has shoddy or poor practices. Nobody really wants that.
There are countries that are outlawing how facial recognition is used, for instance, but we are not doing that in the UK, so we are increasingly looking like the outlier in this discussion about AI and the protections around it. A Government’s first job is to protect their citizens, so we should protect citizens now from the dangers of AI.
Harms are already arising from AI. The Government’s recently published White Paper takes the view that strong, clear protections are simply not needed. I think the Government are wrong on that. Strong, clear protections are most definitely needed—and needed now. Even if the Government just catch up with what is happening in Europe and the US, that would be more than we are doing at the moment. We need new, legally binding regulations.
The White Paper currently has plans to water down data rights and data protection. The Data Protection and Digital Information (No. 2) Bill paints an alarming picture. It will redefine what counts as personal data. All these things have been put in place piecemeal to ensure that personal data is protected. If we lower the protection in the definition of what is personal data, that will mean that any company can use our personal data for anything it wants and we will have very limited recourse to stop that. At the end of the day, our personal data is ultimately what powers many AI systems, and it will be left ripe for exploitation and abuse. The proposals are woefully inadequate.
The scale of the challenge is vast, but instead of reining in this technology, the Government’s approach is to let it off the leash, and that is problematic. When we received evidence from a representative from the Met police, she said that she has nothing to hide so what is the problem, for instance, in having the fingerprint, if you like, of her face everywhere that she goes? I am sure that we all have either curtains or blinds in our houses. If we are not doing anything illegal, why have curtains or blinds? Why not just let everyone look into our house? Most abuse happens in the home so, by the same argument, surely allowing everyone to look into each other’s houses would eliminate a lot of abuse.
In our country we have the right to privacy, and people should have that right. Our digital fingerprints should not be taken without our consent, as we have policing by consent. The Met’s use of live facial recognition and retrospective facial recognition is worrying. I had a meeting with Mark Rowley the other day and, to be honest, he did not really understand the implications, which is a worry.
Like many people, I could easily get carried away and get drunk with this AI debate, but I am the driver. I need to stay sober to make sure everyone gets home safely.
It is a pleasure to follow the hon. Member for Brent Central (Dawn Butler). I join everyone in congratulating my hon. Friend the Member for Boston and Skegness (Matt Warman) on securing this important debate.
Everybody is talking about artificial intelligence, which is everywhere. An article in The Sentinel, Stoke’s local paper, recently caught my eye. Last week, the Home Secretary visited my constituency to open a Home Office facility in Hanley, a development providing more than 500 new jobs in Stoke-on-Trent. The article reflected on the visit and, amusingly, compared the Home Secretary’s responses to questions posed by the local media with the responses from an AI. Specifically, the Home Secretary was asked whether Stoke-on-Trent had taken more than its fair share of asylum seekers through the asylum dispersal scheme, and about the measures she is taking to ensure that asylum seekers are accommodated more evenly across the country. She replied:
“The new Home Office site is a vote of confidence in Stoke-on-Trent... They will be helping to bring down the asylum backlog and process applications more quickly.”
The same question was posed to ChatGPT, which was asked to respond as if it were the Home Secretary. The AI responded:
“I acknowledge the city has indeed taken on a significant number of asylum seekers. This kind of uneven distribution can place stress on local resources and create tension within communities. It is clear we need a more balanced approach that ensures all regions share responsibility and benefits associated with welcoming those in need.”
The AI also referred to reviewing the asylum dispersal scheme, strengthening collaboration with local authorities, infrastructure development and the importance of public awareness and engagement.
We all know what it is like to be on the receiving end of media questions, and a simple and straightforward answer is not always readily available. I suppose the AI’s response offers more detail but, unsurprisingly, it does not tell us anything new. It is, after all, limited by the information that is currently on the internet when formulating its answers. Thankfully, AI has not yet taken to making things up wholesale—hopefully that will not happen, but it is one of the big debates.
This begs the question: what is truth? That is the fundamental question on this topic. We must develop a robust ethical framework for artificial intelligence. The UK should be commended for embracing the spirit of an entrepreneurial and innovative approach to artificial intelligence. We know that over-regulation stifles creativity and all the good things it has to offer. However, AI has become consumer-focused and increasingly accessible to people without technical expertise. Our regulatory stance must reflect this shift. Although there should be a departure from national regulatory micromanagement, the Government have a role to play in protecting the public against potential online harms. It cannot be left to self-regulation by individual companies.
Let us also remember that artificial intelligence operates within a global space. We cannot regulate the companies that are developing this technology if they are based in another nation. This is a complicated space in which to navigate and create safeguards.
Balancing those concerns is increasingly complex and challenging, and conversations such as this must help us to recognise that regulation is not impossible and that it is incredibly important to get it right. For example, when the tax authorities in the Netherlands employed an AI tool to detect potential childcare benefit fraud, it made mistakes, resulting in innocent families facing financial ruin and thousands of children being placed in state custody on the strength of those accusations. When the victims tried to challenge the decision, they were told that officials could not access the algorithmic inputs, so they were unable to establish how decisions had been made. That underlines the importance of checks and balances.
The hon. Lady is absolutely right on these concerns, especially as regards the Home Office. Big Brother Watch’s “Biometric Britain” report spoke about how much money the Home Office is paying to companies, but we do not know who they are. If we do not know who these companies are, we will not then know how they gather, develop and use their data. Does she think it is important that we know who is getting money for what?
The hon. Lady makes a good point. Clearly, that is the big part of this debate: transparency is essential. The Government’s current plans, set out in the AI White Paper, do not place any new obligations on public bodies to be transparent about their use of AI; to make sure their AI tools meet accuracy and non-discrimination standards, as she rightly said; or to ensure that there are proper mechanisms in place for challenging or getting redress when AI decisions go wrong. What the White Paper proposes is a “test and learn” approach to regulation, but we must also be proactive. Technology is changing rapidly, while policy lags behind. Once AI is beyond our control, implementing safeguards becomes implausible. We should acknowledge that we cannot afford to wait to see how its use might cause harm and undermine trust in our institutions.
While still encouraging sensible innovation, we should also learn from international experiences. We must encourage transparency and put in place the proper protections to avoid damage. Let us consider the financial sector, where banks traditionally analyse credit ratings and histories when deciding who to lend money to. I have recently been working with groups such as Burnley Savings and Loans, which manually underwrites all loans and assesses the risk of each loan by studying the business models and repayment plans of its customers. Would it be right to use AI to make such decisions? If we enter a world where there is no scope for gut feeling, human empathy and intuition, do we risk impoverishing our society? We need to be careful and consider how we want to use AI, being ethical and thoughtful, and remaining in control, rather than rolling it out wherever possible. We must strike the right balance.
Research indicates that AI and automation are most useful when complemented by human roles. The media can be negative about AI’s impact, leading to a general fear that people will lose their jobs as a result of its growth. However, historically, new technology has also led to new careers that were not initially apparent. It has been suggested that the impact of AI on the workplace could rival that of the industrial revolution. So the Government must equip the workforce of the future through skills forecasting and promoting education in STEM—science, technology, engineering and maths.
Furthermore, we must remain competitive in AI on the global stage, ensuring agility and adaptability, in order to give future generations the best chances. In conjunction with the all-party group on youth affairs, the YMCA has conducted polling on how young people feel about the future and the potential impact of AI on their careers. The results, which are to be announced next month, suggest that AI could not only lead to a large amount of job displacement, but provide opportunities for those from non-traditional backgrounds. More information on skills and demand will help young people to identify their career choices, and will support industries and businesses in preparing for the impact of AI.
I am pleased that the Department for Education has already launched a consultation on AI in education, which is open until the end of August. Following that, we should work hard to ensure that schools and universities can quickly adapt to AI’s challenges. Cross-departmental discussion is important, bringing together AI experts and educators, to ensure that the UK is at the cutting edge of developments in AI and to provide advice that helps younger generations adapt.
AI is hugely powerful and possesses immense potential. ChatGPT has recently caught everybody’s attention, and it can create good stories and news articles, like the one I shared. But that technology has been used for years and, right now, we are not keeping up. We need to be quicker at adapting to change, monitoring closely and being alert to potential dangers, and stepping in when and where necessary, to ensure the safe and ethical development of AI for the future of our society and the welfare of future generations.
For the benefit of Members present, Mr Deputy Speaker and I had the chance to discuss and look at the qualities of ChatGPT. Within a matter of seconds, ChatGPT produced a 200-word speech in the style of Winston Churchill on the subject of road pricing. It was a powerful demonstration of what we are discussing today.
I congratulate my hon. Friend the Member for Boston and Skegness (Matt Warman) on conceiving the debate and bringing it to the Floor of the House. I thank the Chair of the Business and Trade Committee, the hon. Member for Bristol North West (Darren Jones), and the Chair of the Science, Innovation and Technology Committee, my right hon. Friend the Member for Tunbridge Wells (Greg Clark), for their contributions. As a Back Bencher, it was fascinating to hear about their role as Chairs of those Committees and how they pursue lines of inquiry into a subject as important as this one.
I have been greatly encouraged by the careful and measured consideration that hon. Members from across the House have given to the subject. I congratulate the hon. Member for Brent Central (Dawn Butler) on perhaps the most engaging introduction to a speech that I have heard in many a week. My own thoughts went to the other character in the party who thinks they are sober, but everyone else can see that they are not. I leave it to those listening to the debate to decide which of us fits which caricature.
I have come to realise that this House is at its best when we consider and discuss the challenges and opportunities facing our society, our lives and our ways of working. The debate addresses both challenge and opportunity. First, I will look at what AI is, because without knowing that, we cannot build on the subject or have a meaningful discussion about what lies beyond. In considering the development of AI, I will look at how we in the UK have a unique advantage. I will also look at the inevitability of destruction, because some risk and challenge lies ahead. Finally, I hope to end on a more optimistic and positive note, with some questions about what the future holds.
Like many of us, I remember where I was when I saw Nelson Mandela make that walk to freedom. I remember where I was when I saw the images on television of the Berlin wall coming down. And I remember where I was, sitting in a classroom, when I saw the tragedy of the NASA shuttle falling from the sky after its launch. I also remember where I was, and the computer I was sitting at, when I first engaged with ELIZA. Those who are familiar with artificial intelligence will know that ELIZA was an early conversational program that played the role of a counsellor, someone with whom people could engage. My right hon. Friend the Member for Tunbridge Wells has already alluded to the Turing test, so I will not speak more of that, but that is where my fascination and interest in this matter started.
To bring things right up to date, as mentioned by Mr Deputy Speaker, we now have ChatGPT and the power of what that can do. I am grateful to my hon. Friend the Member for Stoke-on-Trent Central (Jo Gideon) and to the hon. Member for Brent Central because I am richer, not only for their contributions, but because I had a private bet with myself that at least two Members would use and quote from ChatGPT in the course of the debate, so I thank them both for an extra fiver in my jar as a result of their contributions.
In grounding our debate in an understanding of what AI is, I was glad that my hon. Friend the Member for Boston and Skegness mentioned the simulation of an unmanned aerial vehicle and how it took out the operator for being the weak link in delivering what it had been tasked with doing. That, of course, is not the point of the story, and he did well to go on to mention that the UAV had adapted: adapted to take that step. In the simulation, when that rule changed, it changed again and said, “Now I will take out the communication means by which that operator, whom I can no longer touch, controls me”.
The principle there is exactly as hon. Members have mentioned: it can work only with the data that it is given and within the rules that it is set. That is the lesson from apocryphal stories such as those. In that particular case, there is a very important principle: the idea of a “human in the loop”. Within that cycle of data, processing, decision making and action, there must remain a human hand guiding it. The more critical the consequence and the action, the more important it is that the human hand remains.
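By way of illustration, a minimal sketch of a “human in the loop” decision cycle might look like the following. Everything here is hypothetical: the action names, the consequence scores and the approval threshold are invented for the example, and no real system is being described.

```python
# Illustrative only: a hypothetical decision cycle in which any
# high-consequence action must be approved by a human operator.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    consequence: float  # 0.0 (trivial) to 1.0 (critical)

HUMAN_APPROVAL_THRESHOLD = 0.5  # hypothetical policy setting

def human_approves(action: Action) -> bool:
    """Stand-in for a real operator's judgment."""
    reply = input(f"Approve '{action.description}'? [y/N] ")
    return reply.strip().lower() == "y"

def execute(action: Action) -> None:
    print(f"Executing: {action.description}")

def decision_cycle(proposed: list[Action]) -> None:
    for action in proposed:
        # The more critical the consequence, the more important
        # it is that a human hand remains in the loop.
        if action.consequence >= HUMAN_APPROVAL_THRESHOLD:
            if not human_approves(action):
                print(f"Vetoed by operator: {action.description}")
                continue
        execute(action)

decision_cycle([
    Action("adjust camera exposure", 0.1),
    Action("engage target", 0.9),
])
```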
If we think of the potential application of AI in defence, it would be very straightforward—complex but straightforward—and certainly in the realms of what is possible, for AI to be used to interpret real-time satellite imagery to detect troop movements and to respond accordingly, or to recommend a response accordingly, and that is where the human in the loop becomes critical. These things are all possible with the technology that we have.
What AI does well is to find, learn and recognise patterns. In fact, we live our lives in patterns at both a small and a large scale. AI is incredibly good, we could even say superhuman, at seeing those patterns and predicting next steps. We have all experienced things such as TikTok and Facebook on our phones. We find ourselves suddenly shaking our heads and thinking, “Gosh, I have just lost 15 minutes or longer, scrolling through.” It is because the algorithms in the software are spotting a pattern in what we like to see, how long we dwell on it and what we do with it, and they then feed us another similar item to consume.
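A toy sketch of that feedback loop, with entirely invented dwell-time figures, might look like this: items resembling whatever has already held our attention are simply ranked higher the next time round.

```python
# Illustrative sketch of engagement-driven recommendation: items
# similar to what held our attention are ranked higher next time.
# All data here is invented for the example.

watch_history = {
    "football": 95.0,   # seconds of dwell time per topic
    "cooking": 12.0,
    "politics": 4.0,
}

candidates = ["football", "cooking", "politics", "gardening"]

def score(topic: str) -> float:
    # Unseen topics get a small exploration bonus; otherwise the
    # feed simply amplifies whatever we already dwell on.
    return watch_history.get(topic, 5.0)

feed = sorted(candidates, key=score, reverse=True)
print(feed)  # ['football', 'cooking', 'gardening', 'politics']
```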
Perhaps more constructively, artificial intelligence is now used in agriculture. Tractors will carry booms across their backs with multiple robots. Each one of those little robots will be using an optical sensor to look at individual plants that it is passing over and it will, in a split second, identify whether that plant is a crop that is wanted, or a weed that is not. More than that, it will identify whether it is a healthy plant, whether it is infected with a parasite or a mould, or whether it is infested with insects. It will then deliver a targeted squirt of whatever substance is needed—a nutrient, a weedkiller or a pesticide—to deal with that single plant. This is all being done in a tractor that is moving across a field without a driver, because it is being guided by GPS and an autonomous system to maximise the efficiency of the coverage of that area. AI is used in all these things, but, again, it is about recognising patterns. There are advantages in that. There are no more harmful blanket administrations of pesticides, or the excessive use of chemicals, because these can now be very precisely targeted.
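A simplified sketch of that classify-and-treat loop is below. The classifier is a stub and the treatment table is invented; a real system would run a trained vision model on each optical sensor's image.

```python
# Illustrative sketch of the boom-mounted robots described above:
# classify each plant, then deliver a targeted treatment. The
# classifier here is a stub; a real system would use a trained
# vision model on the optical sensor's image.

import random

def classify(plant_image) -> str:
    # Stub standing in for an image classifier.
    return random.choice(["healthy_crop", "weed", "diseased_crop", "infested_crop"])

TREATMENTS = {
    "healthy_crop": None,          # nothing needed
    "weed": "weedkiller",
    "diseased_crop": "fungicide",
    "infested_crop": "pesticide",
}

def pass_over_row(plants) -> None:
    for plant in plants:
        label = classify(plant)
        treatment = TREATMENTS[label]
        if treatment:
            # A single targeted squirt, rather than a blanket spray.
            print(f"Plant {plant}: {label} -> apply {treatment}")
        else:
            print(f"Plant {plant}: {label} -> no action")

pass_over_row(range(5))
```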
To any experts listening, let me say that I make no pretence of expertise. This is in some ways my own mimicry of the things that I have read and learned and am fascinated by. Experts will say that it is not patterns that AI is good at; it is abstractions. That can be a strange concept, but an abstraction is, in essence, a model that we pull out of and build from what we are looking at. Without going into too much detail, there is something in what the hon. Member for Brent Central was saying about bias and prejudice within systems. I suggest that bias does not actually exist within the system unless it is intentionally programmed. It is a layer of interpretation that we apply on top of what the system produces. The computer has no understanding of bias or prejudice; it is just processing, that is all. We apply an interpretation on top that can indeed be harmful and dangerous. We just need to be careful about that distinction.
The hon. Gentleman is absolutely right: AI does not create; it generates. It generates from the data that is put into it. The simplified version is “rubbish in, rubbish out”. It is more complex than that, but that is the simplest way of saying it. If we do not sort out the biases before we put in the data, the output will be biased.
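That point can be made concrete with a toy example. The lending data below is invented, and the “model” is nothing more than the most common historical outcome per group, but it shows how a bias that nobody programmed is nonetheless reproduced faithfully from the data.

```python
# Toy illustration of "rubbish in, rubbish out": a model trained on
# skewed data reproduces the skew in its output. Data is invented.

from collections import Counter

# Historical lending decisions containing an unexamined bias:
training_data = [
    ("postcode_A", "approve"), ("postcode_A", "approve"),
    ("postcode_A", "approve"), ("postcode_B", "reject"),
    ("postcode_B", "reject"),  ("postcode_B", "approve"),
]

def train(data):
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, Counter())[outcome] += 1
    # "Model" = most common historical outcome per group.
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

model = train(training_data)
print(model)  # {'postcode_A': 'approve', 'postcode_B': 'reject'}
# The bias was never "programmed"; it was learned from the data.
```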
The hon. Lady—my hon. Friend, if I may—is absolutely correct. It is important to understand that we are dealing with something that, as I will come onto in a moment, does not have a generalised intelligence, but is an artificial intelligence. That is why, if hon. Members will forgive me, I am perhaps labouring the point a little.
A good example is autonomous vehicles and the abstraction of events that the AI must create: a car being driven erratically, for example. While the autonomous vehicle is driving along, its cameras are constantly scanning what is happening around it on the road. It needs to do that in order to recognise patterns against that abstraction and respond to them. Of course, once it has that learning, it can act very quickly: there are videos on the internet from the dashcams of autonomous cars slowing down, changing lane or moving to one side of the road because the car has predicted, based on the behaviour of the other cars it can see on the road, that an accident is about to happen. Sure enough, seconds later, the accident occurs ahead, but the AI has already steered the vehicle safely to one side.
That is important, but the limitation is that, if the AI only learns about wandering cars and does not also learn about rocks rolling on to the road, a falling tree, a landslide, a plane crash, an animal running into the road, a wheelchair, a child’s stroller or an empty shopping cart, it will not know how to respond to those. These are sometimes called edge cases, because they are not the mainstream but happen on the edges. They are hugely important and they all have to be accounted for. Even in the event of a falling tree, the abstraction must allow for trees that are big or small, in leaf or bare, falling towards the car or across the road, so we can see both the challenges of what AI must do, and the accomplishment in how well it has done what it has done so far.
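A minimal sketch of why those edge cases matter is below. The event names and responses are invented; the point is only that anything outside the learned abstractions must fall through to a safe default rather than a guess.

```python
# Illustrative sketch of why edge cases matter: a perception system
# that only knows a handful of event types must still fail safe when
# it meets something outside its training. All names are invented.

KNOWN_RESPONSES = {
    "erratic_vehicle": "slow down and change lane",
    "pedestrian": "brake",
    "debris": "steer around",
}

def respond(detected_event: str) -> str:
    # Anything not covered by training is an edge case; the safe
    # default is to stop and hand back control, not to guess.
    return KNOWN_RESPONSES.get(detected_event, "slow to a stop and alert the driver")

for event in ["erratic_vehicle", "falling_tree", "empty_shopping_cart"]:
    print(event, "->", respond(event))
```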
That highlights the Achilles heel of AI, because what I have tried to describe requires what is called generalised intelligence. Generalised intelligence is something that we as humans turn out to be quite good at, or at least something that is hard for computers to replicate reliably. What a teenager can learn in a few hours, namely driving a car, takes billions of images, videos and scenarios for an AI to learn. A teenager in a car intuitively knows that a rock rolling down a hillside or a falling tree presents a real threat to the road and its users. The AI has to learn those things; it has to be told those things. Crucially, however, once the AI knows those things, it can apply them faster and respond much more quickly and much more reliably.
I will just make the comment that it does have that ability to learn. To go back to the agricultural example, the years spent gathering images of healthy and unhealthy plants, creating libraries and then teaching the system can now be compressed, because of that ability to learn. That is another factor in what lies ahead. We have to expect not just that change will come, but that the pace of change will be faster in the future. I hope it is clear, then, that AI is not a mind of its own. There is no ghost in the machine. It cannot have motivation of its own origin, nor can it operate beyond the parameters set by its programs or the physical constraints built into its hardware.
As an aside, I should make a comment about hardware, since my right hon. Friend the Member for Tunbridge Wells and others may comment on it. In terms of hardware constraints, the suggestion is that the probability of a sudden take-off of general artificial intelligence in the future is very small. AI derives its ability to make rapid calculations from parallelisation, that is, simultaneously running many calculations across large numbers of processors.
The optimisation of individual processors appears to have hit rapidly diminishing returns in the mid-to-late 2010s, so processing speed is increasingly constrained by the number of processors available. An order-of-magnitude increase in throughput therefore requires a similar increase in available hardware, which is an exceedingly expensive endeavour. In other words, basic engineering parameters mean that we cannot be suddenly blindsided, I would suggest, by the emergence of a malevolent global intelligence, as the movies would have us believe.
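The engineering point can be illustrated with Amdahl's law, the standard formula for the limits of parallel speed-up: if only a fraction p of a workload can be parallelised across n processors, the overall speed-up is 1 / ((1 - p) + p / n), which is capped at 1 / (1 - p) no matter how much hardware is added. The figures below are illustrative only.

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
# parallelisable fraction of the work and n the number of processors.
# Illustrative figures only.

def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

p = 0.95  # suppose 95% of the workload parallelises
for n in [10, 100, 1000, 10_000]:
    print(f"{n:>6} processors -> {speedup(p, n):5.2f}x")

# Even with unlimited hardware the speedup is capped at 1/(1-p) = 20x:
# each further gain in throughput demands disproportionately more hardware.
```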
I am grateful for your indulgence, Mr Deputy Speaker, as I establish this baseline about what AI can and cannot do. It is important to do so in order then to consider the question of development. The key point that I highlight is the opportunity we have to create in the UK—specifically in the post-Brexit UK—an environment for the development of AI. If colleagues will indulge me—I mean not to make political points—I will make an observation on the contrast between the environment we have here compared with other parts of the world.
In any rapidly developing area of technology, it is important to differentiate between the unethical application of a technology and the technology itself. Unfortunately, the EU's AI Act illustrates a failure to recognise that distinction. By banning models capable of emotion and facial recognition, for example, EU lawmakers may believe that they have banned a tool of mass surveillance, but in fact they risk banning the development of a technology that may have a myriad of otherwise very good applications, such as therapies and educational tools that can adjust to user responses.
The same holds for the ban on models that use behaviour patterns to predict future actions. Caution around that is wise, but it is limiting to impose a rule that prevents AI from performing a process that is already used by insurers, credit scorers, interest-rate setters and health planners across the world, simply for fear that it might be used to develop a product for sale to nasty dictators. Perhaps the most egregious example of that conflation is the ban on models trained on published literature, a move that effectively risks lobotomising large language model research applications such as ChatGPT in the name of reducing the risk of online piracy. We might compare that to banning all factories simply to ensure that none is used to manufacture illegal firearms.
In short, and in words of one syllable: it is easy to ban stuff. But it is much harder—and this is the task to which we must apply ourselves—to create a moral framework within which regulation can help technology such as AI to flourish. To want to control and protect is understandable, but an inappropriate regulatory approach risks smothering the AI industry as it draws its first breaths. In fact, as experts will know better than me, AI is exceptionally good at finding loopholes in rules-based systems, so there is a deep irony to the idea that it might be the subject of a rules-based system but not find or use a way to navigate around it.
I am encouraged by the Government's contrasting approach and the strategy that they published last year. We have recognised that Britain is in a position to do so much better. Rather than constraining development before applications become apparent, we seek to look to those applications. We can do that because, unlike the tradition of Roman law, which is inherently prescriptive and underpins the thinking of many nations, including the EU, the common law that we have in this country allows us to build an ethical framework for monitoring industries without resorting to blanket regulation that kills the underlying innovation.
That means that, in place of the prescriptive dictates of regulators and judges, we can, in combination with industry leaders, innovate, evolve and formalise best practice in proportion to evolving threats. Given that the many applications of AI will be discoverable only through the trial and error of hundreds of dispersed sectors of the economy, that is the only option open to us that does not risk culling future prosperity and, without wishing to overdramatise, creating an invisible graveyard of unsaved lives.
It is a most un-British thing to say, but this British system is a better way. Indeed, it is being introduced to nations around the world. They are switching from a regulatory approach to one of common law for many reasons. First, it facilitates progress. Just as no legislator can presume to know all the positive applications of a new technology such as AI, they are also blind to its potential negative applications. In the UK, in this environment, AI could prove to be a game-changer for British bioengineering. The world-leading 100,000 Genomes Project and UK Biobank, combined with our upcoming departure from the GDPR, promise AI-equipped researchers an unparalleled opportunity to uncover the genetic underpinnings of poor health and pharmaceutical efficacy, to the benefit of health services around the world.
The second reason is that it is more adaptable to threats. Decentralised systems of monitoring, involving industry professionals with a clear understanding of the technology, are the most effective form of risk management we can realistically devise. An adaptable system has the potential to insulate us from another risk of the AI era: technology in the hands of hostile powers and criminals. As in previous eras, unilateral disarmament would not make us safer. Instead, it would leave us without the tools to counteract the superior predictive abilities of our foes, rendering us a contemporary Qing dynasty marvelling at the arrival of steamships.
It is vital to recognise that AI is going to bring destruction. This is perhaps the most revolutionary technological innovation of our lifetime, and with it, AI brings the potential for creative destruction across the economy at a faster pace than even the world wide web. I will quote Oppenheimer when he cited the Bhagavad Gita, which says:
“Now I am become Death, the destroyer of worlds.”
That is not to sensationalise and fall into the same trap I warned of at the start of my remarks, but it is important to recognise that there will be change. Every bit as much as we have seen the stripping out of personnel in factories as they are replaced by machinery, we will see the loss of sectors to this technology. The critical point is not to stop it but to recognise it, adapt and use it for its strengths to develop.
We should be upfront about this. A failure to do so risks a backlash against any excess. We cannot simply react with regulation; we must harness this technology. The industrial revolution brought both unprecedented economic prosperity and massive disruption. For all we know, had the Luddites enjoyed a world of universal suffrage, their cause might have triumphed, dooming us to material poverty thereafter. If Britain is to reap the benefits of this new era of innovation, we must be frank about its potential, including its disruptive potential, and be prepared to make a strong case to defend the future it promises. Should we fail in this task, surrendering instead to the temptations of reactionary hysteria, our future may not look like an apocalyptic Hollywood blockbuster. It will, however, resemble that common historical tale of a once-great power sleepwalking its way into irrelevance.
On a more hopeful note, I turn to the question of where next? I spoke before of the pattern-based approaches that amplify conformity, such as we see on TikTok and Facebook. This quality may be attractive to technocrats—predictability, patterns, finding gaps and filling them—but that points to an increasing conformity that I, and I think many others, find boring. Artificial intelligence should be exploring what is new and innovative.
What about awe—the experience and the reaction of our mind when seeing or realising something genuinely new that does not conform to past patterns? A genuinely intelligent system would regularly be creating a sense of awe and wonder as we experience new things. Contrast the joy when we find a new film of a type we have not seen before—it covers the pages of the newspapers, dominates conversations with our friends and brings life to our souls, even—with being fed another version of the same old thing we have got used to, as some music apps are prone to do. Consider the teacher who encouraged us to try new things and have new experiences, and how we grew through taking those risks, rather than just hearing more of the same.
This raises key questions of governance, too. We have heard about a Bill of digital rights, and questions of freedom were rightly raised by the hon. Member for Brent Central, but what about a genuinely free-thinking future? What would AI bring to politics? We must address that question in this place. Which system of government has the best record of dealing with such issues? Would AI support an ultimate vision of fairness and equity via communism? Could it value and preserve traditions and concepts of beauty that, as Scruton argued, can be said to have true value only in a conservative context? These have always been big questions for any democracy, and I believe that AI may force us to address them in depth and at pace in the near future.
That brings me to a final point: the question of a moral approach. Here, I see hope and encouragement. My hon. Friend the Member for Stoke-on-Trent Central talked about truth, and I believe that ultimately, all AI does is surface these deeper questions and issues. The one I would like to address, very briefly, is the point of justice. The law is a rulebook; patterns, abstractions, conformity and breach are all suited to AI, but such a system does not predict or produce mercy or forgiveness. As we heard at the national parliamentary prayer breakfast this week, justice opens the door to mercy and forgiveness. It is something that is vital to the future of any modern society.
We all seek justice—we often hear about it in this House—but I would suggest that what we really seek is what lies beyond: mercy and forgiveness. Likewise, when we talk about technology, it is often not the technology itself but what lies beyond it that is our aim. As such, I am encouraged that there will always be a place for humanity and those human qualities in our future. Indeed, I would argue, they are essential foundations for the future that lies ahead.