Artificial Intelligence: Public Services Debate
Baroness Chakrabarti (Labour - Life peer)
Debate with the Department for Science, Innovation & Technology
Lords Chamber

The deployment of AI has started, as the noble Lord recognised, and I have given the three headline exemplars; others are being put in place through the incubator for AI that sits within DSIT. He raises a crucial point, and that is why the responsible AI advisory panel is being set up, which will include civil society, industry and academia, to make sure that this is looked at properly. An ethics unit is already looking at this, and there are many diverse groups across government. What the Government Digital Service is trying to do is pull this together into something more coherent, of which I think the responsible AI advisory panel is an important part.
My Lords, a slogan from the early days of computing is, “Rubbish in, rubbish out”. Biased historic training data can bake discrimination and historic bias into the system, whether on stop and search, which we have discussed, or on insurability, employability and so on. To flip my noble friend’s very positive and commendable Question, what are the Government going to do to put safeguards in place so that historic bias is not baked into the system?
Once again, that is a very important question. The noble Baroness is absolutely right. It is as true for AI as it is for other systems: rubbish in, rubbish out. Well-curated, properly understood datasets are crucial. That is one of the reasons why, where there are well-documented, well-curated datasets that can be used to train models for government purposes, we will pursue those. We will also use the AI assurance mechanism that I discussed previously to try to make sure that we identify systems that carry risks such as the one the noble Baroness raises.
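The “rubbish in, rubbish out” concern raised here can be made concrete with a simple pre-training check: comparing historic outcome rates across groups in the data before any model is trained on it. The sketch below is an illustration only, assuming a hypothetical tabular dataset with `group` and `outcome` columns; it does not describe any government system or the AI assurance mechanism referred to above.

```python
# Illustrative sketch: a minimal check of the kind an assurance process might
# run on historic training data to flag potential baked-in bias.
# Column names, data and the 0.8 threshold are hypothetical examples.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Rate of positive historic outcomes (e.g. stopped, hired, insured) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group rate (1.0 means parity)."""
    return rates.min() / rates.max()

if __name__ == "__main__":
    # Hypothetical records of past decisions that might be reused as training data.
    historic = pd.DataFrame({
        "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
        "outcome": [1,   1,   0,   0,   0,   1,   0,   0],
    })
    rates = selection_rates(historic, "group", "outcome")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"disparate impact ratio: {ratio:.2f}")
    # A commonly cited rule of thumb (the "four-fifths rule") treats a ratio
    # below 0.8 as a warning that historic bias may be carried into any model
    # trained on this data; such a flag would prompt review of the dataset.
    if ratio < 0.8:
        print("warning: large disparity between groups in historic outcomes")
```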