Artificial Intelligence in Weapon Systems Committee Report Debate

Department: Ministry of Defence

Earl of Erroll Excerpts
Friday 19th April 2024

Lords Chamber
The Earl of Erroll (CB)

My Lords, this has been a very useful report, because it gets us thinking properly about these things.

I declare an interest in the whole world of generative AI and LLMs, through Kaimai and FIDO, which is looking at curated databases to extract information. It is very useful for that sort of thing. With that, as mentioned in the report, comes the whole issue of training material: the results you get depend on what the AI has been looking at. If you fire it off against a huge amount of uncurated material, you can get all sorts of rubbish back; certain tests have found AI returning results that were 70% to 80% inaccurate. On the other hand, if it is pointed at something carefully targeted, such as medical research, where everything that has gone into the dataset has been studied and put together by someone, it will find material that no one has had time to read and connect, and recognise that it is relevant.

In the same way, AI will be very useful in the confusing scenarios facing a military commander, or in military decisions generally, to help them weed out what is right from what is wrong and what is true from what is not. I seem to remember, though I cannot remember when it was, that there was nearly a nuclear war because, at one point, various sensors had gone wrong and they thought there was a military attack on the United States. They nearly triggered all the defences, but someone suddenly said, “Hang on, this doesn’t look quite right”. It may well be that an artificial intelligence system, not confused by some of the fluff, might have spotted that more easily or accurately, and reported, “Don’t believe everything you’re looking at; there is another problem in the system”. On the other hand, it might have done the opposite. That is the trouble, and it is why the human intervention point is so important.

We also have to remember that, although work on AI, with neural networks and the like, goes back to the 1980s and earlier, it is only really getting into its stride now. We do not know quite where things will end up, so it is very difficult to regulate.

My interest in this stems from the fact that I served with the TA for 15 years, so I care about this country’s ability to defend itself. I worry about what would happen if we start shackling ourselves with a whole lot of things that reduce that capability, and I entirely agree with the noble Lord, Lord Hamilton. We should worry about that, because many countries may well pay lip service to international humanitarian law, but an awful lot of them will use it to try to shackle us, who tend to obey it, while not feeling constrained by it themselves. Take, for instance, the Convention on Cluster Munitions. We are signed up to it, as are many good countries, but there are one or two very serious countries, including one of our allies, that did not sign up. I personally agree with it, absolutely; it is a most appalling munition, because of the huge problems with the aftermath and the tidy-up.

I was also amused by conclusion 8 in the report, which mentioned testing AI “against all possible scenarios”. I seem to remember a famous German general saying, “When anybody has only two possible courses of action, he will always adopt the third”. That is the trouble. I think the British are quite good at finding the third way in these things; that is possibly how we get by, because of the unpredictability of what we do.

The other thing I worry about with autonomous weapons systems is collateral damage. If you start programming a thing with facial recognition, programming in a face and asking it to take out a particular person or group of people, and off goes the drone to make the decision itself, how do you tell it how much collateral damage to allow, if any? That is a problem. We have seen that recently with other things, where people have decided that the target is so important that it is all right to kill a few others. But it is not really; at least, I do not feel so. When you create a lot of collateral damage, particularly not in a war but in an insurgent situation, you reinforce the insurgents’ desire to be difficult, and that of their family and friends and all the other people. People forget that.

The other thing is that parliamentary scrutiny will be too slow. We are no good at scrutinising at high speed, and things will be changing quite rapidly in this area. We need scrutiny, we need human control at the end, and we need to use AI when it is useful for making that decision, but we must be very careful not to slow everything down with overbearing scrutiny.