Artificial Intelligence (Select Committee Report) Debate
Earl of Erroll (Crossbench - Excepted Hereditary)
Lords Chamber
My Lords, I congratulate the noble Lord, Lord Clement-Jones, and the committee on a great report, which is crammed full of good advice, especially about the need for investment in our universities, where they teach thorough thinking, and in our innovative SMEs, where we can possibly unleash the full potential of the UK in this area. I declare a small interest in that I am about to join an ethical oversight group for the Proton Partners data vault, which will contain oncological data.
The first thing that struck me about the report was what it said about lifelong retraining. I can see exactly why this is necessary. I remember reading a report some time ago about people’s capacity to handle change as they grow older. Unfortunately, a lot of people find that very difficult. Certainly a lot of my friends do, and they regard me as rather odd because I have lived in the cyber world and am very happy to embrace change and enjoy it. However, I have discovered that a lot of people like to settle down within the boundaries of what they know, so I do not know how that will be handled. Will the human mind and its ability to handle change alter? I think we should study that.
The second thing that amused me in the report was the great figures on how many jobs we are going to lose. So far, I have noticed that every time there has been a technological improvement, the number of jobs has increased—they never seem to disappear; they just change. I remember that when bookkeeping software came out, it was said that accountants would become redundant. I will not go on with other examples as there is no point.
The third thing that I noticed in the report was the reference to anonymisation, which comes down to a lot of the things that people want. They want their privacy and are terrified either of big companies knowing too much about them and using their data for financial gain, or of the Government drawing inappropriate conclusions from their patterns of behaviour about whether to restrict their ability to move around. That may be a mistake. The trouble is that, while we may theoretically be able to anonymise data, if certain things are anonymised properly the data is no longer useful. Epidemiological research is particularly like that: it is very often necessary to know where a subject is located in order to look for clustering effects in the data. To go right back to the first example, that is how cholera was tracked down to a particular street in London. Anonymisation can destroy the utility of the data.
That brings me to ethics, which is really what I wanted to mention. With true anonymisation, if you discover that a subject in a study could be saved if only you could identify them, should you save them? Or, in the greater cause of keeping the data for epidemiological study, should you make sure that everything stays anonymous and accept that they will die? That brings me to the ethical bit. I was very interested in the speech by the noble Lord, Lord Reid, who, much better than I could, went down the road of thinking about the challenge of the AI system. It is, as he said, an alien thought process. It does not have empathy or a conscience built into it. It is therefore, by definition, sociopathic. That is the challenge: how do you get empathy or a conscience into a computer? It does not think like us. Our little computers—our brains—are analogue computers that work on reactions and in shades of grey. They are not, at heart, logical. However much you give a computer fuzzy logic, it comes down to ones and noughts firing at the bottom. I have heard discussions between various neuroscientists about whether it is possible to programme empathy, but that does not matter: we do not have it at the moment.
It will be interesting when the computer that lacks empathy comes up with some conclusions. Let us fire at it the huge problem of NHS funding. One big problem is the unsustainable cost of end-of-life care. The Government are trying to dream up all sorts of wonderful taxes and so forth. Some research a long time ago by a Dutch university found that smokers pay seven times more in taxes during their lifetimes than they cost when they start dying of cancer. They also die earlier, so there would be less end-of-life care to fund. The AI computer will think logically. It will realise that there has been a huge rise in obesity; in fact, obesity-related cancers have now overtaken smoking-related cancers. I predicted the rise in obesity when people were stopped from smoking, because smoking is an appetite suppressant. Therefore, if we can get more people smoking, we will reduce the obesity and end-of-life funding problems, and we could probably drop taxes because there will be a net gain in revenue from people who smoke. And they would enjoy themselves, particularly bipolar people. Smoking is great for them because it calms them down when they are hyper and, if they are a bit down and getting sleepy in a car, they can puff on a cigarette and not fall asleep, avoiding many accidents on the road. I can see just how the computer would recommend that.
Is that a sociopathic view? Does it lack empathy or is it logically what we should be doing? I leave that to noble Lords. I make absolutely no judgment. I am just trying to suggest what could happen. That is the problem because lots of these decisions will involve ethics—decisions that are going to cause harm. We have to work out what is the least-worst thing. How will we deal with the transfer of liability? I will run out of time if I go into too many things, but there will be biases in the system: the bias of the person who designed the way the computer thinks and analyses problems, or the bias—this is in the report—of the data supplied to it, which could build up the wrong impression.
These machines are probably intelligent enough effectively to start gaming the system themselves. That is dangerous. The more control we hand over to make our lives easier, the more likely we are to find the machines gaming it. The report on malicious intent, which my noble friend Lord Rees referred to, is very interesting and I highly recommend it. It was produced by a collaboration of about half a dozen universities and can be found on the internet.
Much has been said about people and the big data issue. I was very involved, and still am, with the internet of things, and I was the chair of the BSI group which produced PAS 212 on interoperability standards. The whole point is to get the data out there so that one can do useful things with it. This is not about people’s data but about the consequences for them of the misuse of such data. An example would be using such data to enhance traffic flows. It may be that the computer, to optimise the overall picture, sends someone on a journey that is not in their best interests. They may be in a crisis because their wife is about to have a baby and needs to get to hospital quickly. There are issues around this area which come down to liability.
The root of it all is the real problem that complex systems are not deterministic: even when you see the same pattern twice, you do not get the same outcome every time. That is the problem with having rules-based systems to deal with these things. AI systems can start to get around that, but you cannot be sure of what they are going to do. It has always amused me that everyone is predicting a wonderful artificial intelligence-driven idyllic future where everything is easy. I think that it will probably get bogged down in the legal system quite quickly, or other issues such as safety may arise. When the Health and Safety Executive gets its teeth into this, I will be very interested to see what happens.
I think back to the late 1970s, when Ethernet came onto the scene. There were many predictions about the paperless office that would arrive in a few years’ time. A wonderful cynic said that that was about as likely as the paperless loo. All I can say is that the loo has won.