Lords Chamber
Baroness Gerada (CB)
My Lords, under this amendment as it stands, we would have patients who could not have computerised records, because we have AI sitting behind every computer. The AI starts at the beginning. It starts with our telephone system, so, in fact, the patient would not even be able to use the telephone to access us; they or a relative would have to come in. They certainly would not be allowed to have computerised records, because of the digital and AI systems that we use to pick out diseases and to make sure that we are practising safely.
They also would not be able to have electronic prescribing, because the pharmacy end too uses AI to make sure that patients are not being overmedicated and to check for drug interactions, et cetera; and, if they are using a computer system, AI is also used to digitally scribe consultations. So I understand the essence of this amendment, which I think, as many have said, is not to allow AI to make decisions about somebody at the end of their life, but, as it stands, I have to warn noble Lords that it is unworkable in clinical practice.
My Lords, I am grateful to my noble friend for laying such a broad amendment, and obviously I agree with much of what the right reverend Prelate said. It is interesting that this is coming straight after the debate on face-to-face conversations. We are all used to ticking the “I am not a robot” box, but AI now has the ability to create personas, and it is often very difficult, if you are not face to face, to judge whether the person on screen is actually a person. I cannot believe we have got there quite so quickly.
However, it is also important to consider public confidence and understanding at the moment. This is, as we keep saying, such an important life-or-death decision. There is a lack of understanding, and people are potentially worried about these implications, often with regard to employment but also for other purposes. For instance, as I was preparing this, it made me reflect, as the noble Baroness, Lady Gerada, said, on how your GP uses AI. When Patchs told me recently that the NHS guidance was that I should not take an over-the-counter drug for more than two weeks, I queried it.
However, only yesterday, I thought: was that answer actually from my GP, or was it from an AI tool sitting behind the system? We really need to be careful about the level of public understanding and awareness of its use. This use of AI is also one step on from, and connected to, Clause 42, which relates to advertising. I am grateful that the noble and learned Lord is going to bring forward some amendments on that clause. I hope that the connection with AI, as well as with the Online Safety Act 2023, has been considered. Given that the noble and learned Lord gave evidence on 22 October last year and accepted that there was additional work to be done on Clause 42, I am disappointed that, if I have understood him correctly, we have had no assurance that those amendments will be with us by the end of Committee.
I said at Second Reading that the Bill is currently drafted for an analogue age. I am not trying to take us back to some kind of quill-and-no-AI situation. Obviously, as other noble Lords have said, the Bill does not deal with pressure or coercion that does not come from a human being. It also does not consider that coercion can now be more hidden with the use of AI. The Bill does not deal with people being able to learn how to answer certain tools by watching YouTube. Therefore, we could be in a situation where someone who would not qualify under a face-to-face, non-AI system could learn those answers and qualify.
There are also good studies showing that its use in GP practices has produced some inaccuracies. In many circumstances, there is a lack of transparency and accountability in tracing where a decision has come from. We do not even understand the algorithms that send us advertisements for different shops, let alone how they could be connected to a decision such as this.
Finally, my biggest concern is that there will be a limited number of practitioners who will want to participate in this process. That has been accepted on numerous occasions in your Lordships’ House. I will quote from a public letter written on 12 June last year. All of Plymouth’s senior palliative medicine doctors were signatories to a letter warning us of the risks of the Bill and saying that the
“changes would significantly worsen the delivery of our current health services in Plymouth through the complexity of the conversations required when patients ask us about the option of assistance to die”.
That is relevant for two reasons. First, if those doctors’ opposition to the Bill translates into their not being involved, we may have a shortage of practitioners in parts of the country such as the south-west, and there may therefore be an increased temptation to resort to greater use of AI. I hope that the noble and learned Lord or the Minister can help on this point.
Secondly, many of these systems (I am speaking as a layperson here) rely on groups of data and information within the system: the learning is created from that. If you have a very small pool of practitioners and some form of AI being used, does that not affect the creation of the AI tool itself? I hope that I have explained that correctly. With such a small group doing it, will that not affect the technology itself?