Lords Chamber

My Lords, I have supported AI for as long as I can remember, and I think it is the future for this country. If we are looking for improvements in productivity, there is no doubt that we should look to the National Health Service and the public sector, where we can see AI having its greatest effect and improving the health of the economy of this country.
However, we are in early days with AI, although it has been with us for some time. We must be very careful not to rely on it for too many things which should be done by human beings. The noble Lord, Lord Stevens, has already referred to the appalling rate of misdiagnosis. We can look at these statistics and say, “Well, it is only a small number who are misdiagnosed”. Yes, but my noble friend Lord Polack was misdiagnosed as only having six months to live and he is still with us 32 years later. You must think about this, because if you get the situation with misdiagnosis badly wrong, it undermines the basis of this Bill. Therefore, we must be very careful that AI does not contribute to that as well.
I pay tribute to the right reverend Prelate. AI is having a tremendous effect in the health service and helping a large number of people to get better, and it may well be that AI introduces cures for people who are being written off by their doctors—perhaps wrongly. We must not dismiss AI, but we must be very wary about where it leads us. There will be an awful lot of bumps in the road before AI is something in which we can all have complete confidence and believe will deliver better outcomes than human beings.
My Lords, there are just a few remarks I would like to make. We live in an age where it is hard to interact with a human being any more. We lift the phone and speak to a voice that says that if you want one thing, press 1, and if you want something else, press 2. I fear that this is where we are heading: if you want death, just press a button.
I have no doubt that if this legislation is passed as it is, in the near future we will be heading towards AI assessment procedures. My concern is not where we start in this process but where it leads and where it ends.
I am informed that, in the Netherlands, it has been proposed to use AI to kill patients in cases where doctors are unwilling to participate. Indeed, it is suggested that AI could be less prone to human error. Surely, in crucial assessments and decision-making processes for a person seeking assisted suicide, AI could not identify subtle coercion and assess nuanced capacity, bearing in mind the irreversible nature of the outcome. There are concerns about the risk of coercion or encouragement by AI. It should be noted that, with the newly developed AI functions and chatbots, there are already cases globally of individuals being coerced into all sorts of different behaviours, practices and decision-making.
Clause 5 allows doctors to direct the person
“to where they can obtain information and have the preliminary discussion”.
That source of information could be AI or a chatbot. Is there anything in the Bill that will prevent this?
AI undermines accountability. If the technology fails, who bears responsibility? Traditionally in the health service, the doctor bears responsibility. If AI is used, who bears responsibility?