Artificial Intelligence (Regulation) Bill [HL]

2nd reading
Friday 22nd March 2024

Lords Chamber
The Earl of Erroll (CB)

My Lords, I entirely agree with those last sentiments, which will get us thinking about what on earth we do about this. An awful lot of nonsense is talked on this subject, but a great deal of wisdom is spoken too. The contributions to the debate have been useful in getting people thinking along the right lines.

I will say something about artificial general intelligence, which is very different from the generative AI and large language models that I think most people have in mind: ChatGPT, Llama, Google Gemini, and all those bits and pieces. Artificial general intelligence may well aim to control people or the environment in which we live, whereas generative models trawl through large amounts of information incredibly usefully and produce a good, formatted epitome of what is in there. Because you do not have time to read, for instance, large research datasets, they can find things in them that you would never have had time to trawl through yourself. They can be incredibly useful for development there.

AI could start to do other things: it could control things, and we could make it take decisions. Some people suggest that it could replace the law courts and a lot of those sorts of things. But the problem with that is that we live in a complex world, and complex systems are not deterministic, to use the mathematical term. You cannot control them with rules. Rules have unintended consequences, as is well known—the famous butterfly effect. You cannot be certain what will happen when you change one little bit. AI will not necessarily be able to predict that because, if you look at how it trains itself, you do not know what it has learned; it is not done by algorithm, and some AI systems can modify their own code. So you do not know what it is doing, and you cannot regulate for the algorithms or any of that.

I think we have to end up regulating, or passing laws on, the outcomes. We always did this in common law: we said, “Thou shalt not kill”, and then developed it a bit further, but the principle of not going around killing people was established. The same is true of other simple things, such as “You shan’t nick things”. It is what comes out of it that matters. This applies when you want to establish liability, which we will have to do in the case of self-driving cars, for instance, which will take over more and more as other things get clogged up. They will crash less, kill fewer people and cause fewer accidents. But, because it is a machine doing it, those accidents will be highly psychologically unacceptable, even though with human drivers there will be more of them. There will have to be changes in thinking on that.

Regulation or legislation has to be around the outcomes rather than the method, because we cannot control where these things go. A computer does not have an innate sense of right and wrong or empathy, which comes into human decisions a lot. We may be able to mimic it, and we could probably train computers up on models to try to do that. One lot of AI might try to say whether another lot of AI is producing okay outcomes. It will be very interesting. I have no idea how we will get there.

Another thing that will be quite fun is when the net-zero people get on to these self-training models. An LLM trawling through data uses huge amounts of energy, which will not help us towards our net-zero targets. However, AI might help if we put it in charge of planning how to get electricity from point A to point B in an acceptable fashion; but, on the other hand, people will not trust it, including the planners. I am sorry—I am trying to illustrate that this is a complex system. How on earth can you translate that into something that you can put on paper and try to control? You cannot, and that is what people have to realise. It is an interesting world.

I am glad that the Bill is coming along, because it is high time we started thinking about this and about what we can realistically do about it. It is also transnational—it goes right across all borders—so we cannot regulate in isolation. In this new interconnected and networked world, we cannot have a little isolated island in the middle of it all where we can control it; that is just not going to happen. Anyway, we live in very interesting times.