All 1 Debates between Robin Millar and Dawn Butler

Artificial Intelligence

Debate between Robin Millar and Dawn Butler
Thursday 29th June 2023

(1 year, 4 months ago)

Commons Chamber
Read Full debate Read Hansard Text Read Debate Ministerial Extracts
Robin Millar Portrait Robin Millar (Aberconwy) (Con)
- View Speech - Hansard - -

For the benefit of Members present, Mr Deputy Speaker and I had the chance to discuss and look at the qualities of ChatGPT. Within a matter of seconds, ChatGPT produced a 200-word speech in the style of Winston Churchill on the subject of road pricing. It was a powerful demonstration of what we are discussing today.

I congratulate my hon. Friend the Member for Boston and Skegness (Matt Warman) on conceiving the debate and bringing it to the Floor of the House. I thank the Chair of the Business and Trade Committee, the hon. Member for Bristol North West (Darren Jones), and the Chair of the Science, Innovation and Technology Committee, my right hon. Friend the Member for Tunbridge Wells (Greg Clark), for their contributions. As a Back Bencher, it was fascinating to hear about their role as Chairs of those Committees and how they pursue lines of inquiry into a subject as important as this one.

I have been encouraged greatly by hon. Members from across the House by the careful and measured consideration they have taken of the subject. I congratulate the hon. Member for Brent Central (Dawn Butler) on perhaps the most engaging introduction to a speech that I have heard in many a week. My own thoughts went to the other character in the party who thinks they are sober, but everyone else can see that they are not. I leave it to those listening to the debate to decide which of us fits which caricature.

I have come to realise that this House is at its best when we consider and discuss the challenges and opportunities to our society, our lives and our ways of working. The debate addresses both challenge and opportunity. First, I will look at what AI is, because without knowing that, we cannot build on the subject or have meaningful discussion about what lies beyond. In considering the development of AI, I will look at how we in the UK have a unique advantage. I will also look at the inevitability of destruction, as some risk and challenge lies ahead. Finally, I hope to end on a more optimistic and positive note, and with some questions about what the future holds.

Like many of us, I remember where I was when I saw Nelson Mandela make that walk to freedom. I remember where I was when I saw the images on television of the Berlin wall coming down. And I remember where I was, sitting in a classroom, when I saw the tragedy of the NASA shuttle falling from the sky after its launch. I also remember where I was, and the computer I was sitting at, when I first engaged with ELIZA. Those who are familiar with artificial intelligence will know that ELIZA was a dummy program that provided the role of a counsellor or someone with whom people could engage. My right hon. Friend the Member for Tunbridge Wells has already alluded to the Turing test, so I will not speak more of that, but that is where my fascination and interest with this matter started.

To bring things right up to date, as mentioned by Mr Deputy Speaker, we now have ChatGPT and the power of what that can do. I am grateful to my hon. Friend the Member for Stoke-on-Trent Central (Jo Gideon) and to the hon. Member for Brent Central because I am richer, not only for their contributions, but because I had a private bet with myself that at least two Members would use and quote from ChatGPT in the course of the debate, so I thank them both for an extra fiver in my jar as a result of their contributions.

In grounding our debate in an understanding of what AI is, I was glad that my hon. Friend the Member for Boston and Skegness mentioned the simulation of an unarmed aerial vehicle and how it took out the operator for being the weak link in delivering what it had been tasked with doing. That, of course, is not the point of the story and he did well to go on to mention that the UAV had adapted—adapted to take that step. As a simulation, when that rule changed, it then changed again and said, “Now I will take out the communication means by which that operator, who I can no longer touch, controls myself”.

The principle there is exactly as hon. Members have mentioned: it can work only to the data that it is given and the rules with which it is set. That is the lesson from apocryphal stories such as those. In that particular case, there is a very important principle—it is this idea of a “human in the loop”. Within that cycle of data, processing, decision making and action, there must remain a human hand guiding it. The more critical the consequence—the more critical the action—the more important it is that that is there.

If we think of the potential application of AI in defence, it would be very straightforward—complex but straightforward—and certainly in the realms of what is possible, for AI to be used to interpret real-time satellite imagery to detect troop movements and to respond accordingly, or to recommend a response accordingly, and that is where the human in the loop becomes critical. These things are all possible with the technology that we have.

What AI does well is to find, learn and recognise patterns. In fact, we live our life in patterns at both a small and a large scale. AI is incredibly good—we could even say superhuman—at seeing those patterns and predicting next steps. We have all experienced things such as TikTok and Facebook on our phones. We find ourselves suddenly shaking our head and thinking, “Gosh, I have just lost 15 minutes or longer, scrolling through.” It is because the algorithms in the software are spotting a pattern of what we like to see, how long we dwell on it, what we do with that, and it then feeds us another similar item for us to consume.

Perhaps more constructively, artificial intelligence is now used in agriculture. Tractors will carry booms across their backs with multiple robots. Each one of those little robots will be using an optical sensor to look at individual plants that it is passing over and it will, in a split second, identify whether that plant is a crop that is wanted, or a weed that is not. More than that, it will identify whether it is a healthy plant, whether it is infected with a parasite or a mould, or whether it is infested with insects. It will then deliver a targeted squirt of whatever substance is needed—a nutrient, a weedkiller or a pesticide —to deal with that single plant. This is all being done in a tractor that is moving across a field without a driver, because it is being guided by GPS and an autonomous system to maximise the efficiency of the coverage of that area. AI is used in all these things, but, again, it is about recognising patterns. There are advantages in that. There are no more harmful blanket administrations of pesticides, or the excessive use of chemicals, because these can now be very precisely targeted.

To any experts listening to this, let me say that I make no pretence of expertise. This is in some ways my own mimicry of the things that I have read and learned and am fascinated by. Experts will say that it is not patterns that AI is good at; it is abstractions. That can be a strange concept, but the idea of an abstraction is one of how we pull out of and create a model of what we are looking at. Without going into too much detail, there is something in what the hon. Member for Brent Central was talking about in terms of bias and prejudice within systems. I suggest that that does not actually exist within the system unless it is intentionally programmed. It is a layer that we apply on top of what the system produces and we call it this thing. The computer has no understanding of bias or prejudice; it is just processing—that is all. We apply an interpretation on top that can indeed be harmful and dangerous. We just need to be careful about that distinction.

Dawn Butler Portrait Dawn Butler
- Hansard - - - Excerpts

The hon. Gentleman is absolutely right: AI does not create; it generates. It generates from the data that is being inputted. The simplified version is “rubbish in, rubbish out”—it is more complex than that, but that is the simplest way of saying it. If we do not sort out the biases before we put in the data, the data will be biased.

Robin Millar Portrait Robin Millar
- Hansard - -

The hon. Lady—my hon. Friend, if I may—is absolutely correct. It is important to understand that we are dealing with something that, as I will come onto in a moment, does not have a generalised intelligence, but is an artificial intelligence. That is why, if hon. Members will forgive me, I am perhaps labouring the point a little.

A good example is autonomous vehicles and the abstraction of events that the AI must create. It might be a car being driven erratically, for example. While the autonomous vehicle is driving along, its cameras are constantly scanning what is happening around it on the road. It needs to do that in order to recognise patterns against that abstraction and respond to them. Of course, once it has that learning, it can act very quickly: there are videos on the internet from the dashcams of cars driven autonomously and without a driver, slowing down, changing lane or moving to one side of the road because the car has predicted, based on the behaviour it is seeing of other cars on the road, that an accident is going to happen—and sure enough, seconds later, the accident occurs ahead, but the AI has successfully steered the vehicle to one side.

That is important, but the limitation is that, if the AI only learns about wandering cars and does not also learn about rocks rolling on to the road, a falling tree, a landslide, a plane crash, an animal running into the road, a wheelchair, a child’s stroller or an empty shopping cart, it will not know how to respond to those. These are sometimes called edge cases, because they are not the mainstream but happen on the edges. They are hugely important and they all have to be accounted for. Even in the event of a falling tree, the abstraction must allow for trees that are big or small, in leaf or bare, falling towards the car or across the road, so we can see both the challenges of what AI must do, and the accomplishment in how well it has done what it has done so far.

That highlights the Achilles heel of AI, because what I have tried to describe is what is called a generalised intelligence. Generalised intelligence is something that we as humans turn out to be quite good at, or at least something that it is hard for computers to replicate reliably. What a teenager can learn in a few hours—that is, driving a car—it takes billions of images and videos and scenarios for an AI to learn. A teenager in a car intuitively knows that a rock rolling down a hillside or a falling tree presents a real threat to the road and its users. The AI has to learn those things; it has to be told those things. Crucially, however, once AI knows those things, it can generate them faster and respond much more quickly and much more reliably.

I will just make the comment that it does have that ability to learn. To go back to the agricultural example, the years of gathering images of healthy and poorly plants, creating libraries and then teaching, can now be done much faster because of this ability to learn. That is another factor in what lies ahead. We have to think not just that change will come, but that the ability to change will also be faster in the future. I hope it is clear then that what AI is not is a mind of its own. There is no ghost in the machine. It cannot have motivation of its own origin, nor can it operate beyond the parameters set by its programs or the physical constraints built into its hardware.

As an aside, I should make a comment about hardware, since my right hon. Friend the Member for Tunbridge Wells and others may comment on it. In terms of hardware constraints, the suggestion is that the probability of the sudden take-off of general artificial intelligence in the future is very small. AI derives its abilities to make rapid calculations from parallelisation, that is, simultaneously running multiple calculations across central processing units.

The optimisation and instruction programme appears to have hit rapidly diminishing returns in the mid to late 2010s, as such processing speed is increasingly constrained by the number of CPUs available. An order-of-magnitude increase in throughput therefore requires similar increases in available hardware or an exceedingly expensive endeavour. In other words, basic engineering parameters mean that we cannot be suddenly blindsided, I would suggest, by the emergence of a malevolent global intelligence, as the movies would have us believe.

I am grateful for your indulgence, Mr Deputy Speaker, as I establish this baseline about what AI can and cannot do. It is important to do so in order then to consider the question of development. The key point that I highlight is the opportunity we have to create in the UK—specifically in the post-Brexit UK—an environment for the development of AI. If colleagues will indulge me—I mean not to make political points—I will make an observation on the contrast between the environment we have here compared with other parts of the world.

In any rapidly developing area of technology, it is important to differentiate the unethical application of technology and the technology itself. Unfortunately the EU’s AI Act illustrates a failure to recognise that distinction. By banning models capable of emotional and facial recognition, for example, EU lawmakers may believe that they have banned a tool of mass surveillance, but in fact, they risk banning the development of a technology that may have a myriad of otherwise very good applications, such as therapies and educational tools that can adjust to user responses.

The same holds for the ban on models that use behaviour patterns to predict future actions. Caution around that is wise, but a rule preventing AI from performing a process that is already used by insurers, credit scorers, interest-rate setters and health planners across the world for fear that it might be used to develop a product for sale to nasty dictators is limiting. Perhaps the most egregious example of that conflation is the ban on models trained on published literature, a move that effectively risks lobotomising large language model research applications such as ChatGPT in the name of reducing the risk of online piracy. We might compare that to banning all factories simply to ensure that none is used to manufacture illegal firearms.

In short, and in words of one syllable: it is easy to ban stuff. But it is much harder—and this is the task to which we must apply ourselves—to create a moral framework within which regulation can help technology such as AI to flourish. To want to control and protect is understandable, but an inappropriate regulatory approach risks smothering the AI industry as it draws its first breaths. In fact, as experts will know better than me, AI is exceptionally good at finding loopholes in rules-based systems, so there is a deep irony to the idea that it might be the subject of a rules-based system but not find or use a way to navigate around it.

I am encouraged by the Government’s contrasting approach and the strategy that they published last year. We have recognised that Britain is in a position to do so much better. Rather than constraining development before applications become apparent, we seek to look to those applications. We can do that because, unlike the tradition of Roman law, which is inherently prescriptive and underlines the thinking of many nations and, indeed, of the EU, the common law, as we have in this country, allows us to build an ethical framework for monitoring industries without resorting to blanket regulation that kills the underlying innovation.

That means that, in place of prescriptive dictates, regulators and judges, we can—in combination with industry leaders—innovate, evolve and formalise best practice proportionate to evolving threats. Given that the many applications of AI will be discoverable only through the trial and error of hundreds of dispersed sectors of the economy, that is the only option open to us that does not risk culling future prosperity and—without wishing to overdramatise—creating an invisible graveyard of unsaved lives.

It is a most un-British thing to say, but this British system is a better way. Indeed, it is being introduced to nations around the world. They are switching from a regulatory approach to one of common law for many reasons. First, it facilitates progress. Just as no legislator can presume to know all the positive applications of a new technology such as AI, they are also blind to its potential negative applications. In the UK, in this environment, AI could prove to be a game-changer for British bioengineering. The world-leading 100,000 Genomes Project and UK Biobank, combined with our upcoming departure from the GDPR, promise AI-equipped researchers an unparalleled opportunity to uncover the genetic underpinnings of poor health and pharmaceutical efficacy, to the benefit of health services around the world.

The second reason is that it is more adaptable to threats. Decentralised systems of monitoring, involving industry professionals with a clear understanding of the technology, is the most effective form of risk management we can realistically devise. An adaptable system has the potential to insulate us from another risk of the AI era: technology in the hands of hostile powers and criminals. As in previous eras, unilateral disarmament would not make us safer. Instead, it would leave us without the tools to counteract the superior predictive abilities of our foes, rendering us a contemporary Qing dynasty marvelling at the arrival of steamships.

It is vital to recognise that AI is going to bring destruction. This is perhaps the most revolutionary technological innovation of our lifetime, and with it, AI brings the potential for creative destruction across the economy at a faster pace than even the world wide web. I will quote Oppenheimer when he cited the Bhagavad Gita, which says:

“Now I am become Death, the destroyer of worlds.”

That is not to sensationalise and fall into the same trap I warned of at the start of my remarks, but it is important to recognise that there will be change. Every bit as much as we have seen the stripping out of personnel in factories as they are replaced by machinery, we will see the loss of sectors to this technology. The critical point is not to stop it but to recognise it, adapt and use it for its strengths to develop.

We should be upfront about this. A failure to do so risks a backlash to excess. We cannot react with regulation; we must harness this. The industrial revolution brought both unprecedented economic prosperity and massive disruption. For all we know, had the luddites enjoyed a world of universal suffrage, their cause may have triumphed, dooming us to material poverty thereafter. If Britain is to reap the benefits of this new era of innovation, we must be frank about its potential, including its disruptive potential, and be prepared to make a strong case to defend the future it promises. Should we fail in this task, surrendering instead to the temptations of reactionary hysteria, our future may not look like an apocalyptic Hollywood blockbuster. It will, however, resemble that common historical tale of a once-great power sleepwalking its way into irrelevance.

On a more hopeful note, I turn to the question of where next? I spoke before of the pattern-based approaches that amplify conformity, such as we see on TikTok and Facebook. This quality may be attractive to technocrats—predictability, patterns, finding gaps and filling them—but that points to an increasing conformity that I, and I think many others, find boring. Artificial intelligence should be exploring what is new and innovative.

What about awe—the experience and the reaction of our mind when seeing or realising something genuinely new that does not conform to past patterns? A genuinely intelligent system would regularly be creating a sense of awe and wonder as we experience new things. Contrast the joy when we find a new film of a type we have not seen before—it covers the pages of the newspapers, dominates conversations with our friends and brings life to our souls, even—with being fed another version of the same old thing we have got used to, as some music apps are prone to do. Consider the teacher who encouraged us to try new things and have new experiences, and how we grew through taking those risks, rather than just hearing more of the same.

This begs key questions of governance, too. We have heard about a Bill of digital rights, and questions of freedom were rightly raised by the hon. Member for Brent Central, but what about a genuinely free-thinking future? What would AI bring to politics? We must address that question in this place. What system of government has the best record of dealing with such issues? Would it support an ultimate vision of fairness and equity via communism? Could it value and preserve traditions and concepts of beauty that could only be said, as Scruton argued, to have true value in a conservative context? These have always been big questions for any democracy, and I believe that AI may force us to address them in depth and at pace in the near future.

That brings me to a final point: the question of a moral approach. Here, I see hope and encouragement. My hon. Friend the Member for Stoke-on-Trent Central talked about truth, and I believe that ultimately, all AI does is surface these deeper questions and issues. The one I would like to address, very briefly, is the point of justice. The law is a rulebook; patterns, abstractions, conformity and breach are all suited to AI, but such a system does not predict or produce mercy or forgiveness. As we heard at the national parliamentary prayer breakfast this week, justice opens the door to mercy and forgiveness. It is something that is vital to the future of any modern society.

We all seek justice—we often hear about it in this House—but I would suggest that what we really seek is what lies beyond: mercy and forgiveness. Likewise, when we talk about technology, it is often not the technology itself but what lies beyond it that is our aim. As such, I am encouraged that there will always be a place for humanity and those human qualities in our future. Indeed, I would argue, they are essential foundations for the future that lies ahead.