AI Systems: Risks Debate

Baroness Harding of Winscombe (Conservative - Life peer)
Thursday 8th January 2026
Grand Committee

My Lords, I too thank my noble friend Lord Fairfax for bringing this debate and for his continued efforts on this topic. I shall focus my remarks on so-called artificial general intelligence, AGI. I understand the resistance to legislation. I understand the fear that technology will get around barriers and that technologists and technology will simply go elsewhere, with the associated growth that that might bring. But I think that, as everyone who has spoken in this debate has said, there are very real fears, expressed by the head of MI5 no less, that this technology could get out of control. We have to ask the question: not just whether we can do something, but whether we should.

There is a real example of the UK tackling a different but similar problem brilliantly in our recent past: the Warnock committee of 1982 to 1984. Dame Mary Warnock was charged with reviewing the social, ethical and legal implications of developments in human fertilisation and embryology. What Dame Mary and her team did at that time was to settle the debate and to settle public opinion on what ought to be done—not what could be, but what ought to be. That included, for example, the 14-day rule for research on human embryos. At the time, human embryos could be kept alive only for a couple of days. That rule has lasted 40 years and is currently being redebated. What we have is a British model for what was at the time a global technology that presented huge opportunity and created great fear. Does this sound familiar? I think it does.

I ask the Minister whether the Government will consider something similar. The AI Security Institute is doing good work, but it is scientific work. It is asking, “What do these models currently do?” It is not asking, “What should they do?” I think we need ethicists, philosophers and social scientists to build that social, moral and then legal framework for this technology, which I would be the first to say I welcome—but, my goodness, we need to decide what we want it to do rather than just wait to find out what it can do.