Artificial Intelligence (Regulation) Bill [HL]

The Lord Bishop of Worcester

My Lords, I guarantee that this is not an AI-generated speech. Indeed, Members of the House might decide after five minutes that there is not much intelligence of any kind involved in its creation. Be that as it may, we on these Benches have engaged extensively with the impacts and implications of new technologies for years—from contributions to the Warnock committee in the 1980s through to the passage of the Online Safety Bill through this House last year. I am grateful to the noble Lord, Lord Holmes, for this timely and thoughtful Bill and for his brilliant introduction to it. Innovation must be enthusiastically encouraged, as the noble Baroness, Lady Moyo, has just reminded us. It is a pleasure to follow her.

That said, I will take us back to first principles for a moment: to Christian principles, which I hope all people of good will would want to support. From these principles arise two imperatives for regulation and governance, whatever breakthroughs new technologies enable. The first is that a flourishing society depends on respecting human dignity and agency. The more any new tool threatens such innate dignity, the more carefully it should be evaluated and regulated. The second imperative is a duty of government, and of us all, to defend and promote the needs of the nation’s weak and marginalised, those who cannot always help themselves. I am not convinced that the current pro-innovation, “observe first, intervene later” approach to AI gets this perennial balance quite right. For that reason, I support the ambitions outlined in the Bill.

There are certainly aspects of last year’s AI White Paper that get things in the right order: I warmly commend the Government for including fairness, accountability and redress among the five guiding principles going forward. Establishing an AI authority would formalise the hub-and-spoke structure the Government are already putting in place, with the added benefit of shifting from a voluntary to a compulsory basis and of enabling an industry-funded regulatory model of the kind the Online Safety Act is beginning to implement.

The voluntary code of practice on which the Government’s approach currently depends is surely inadequate. The track record of the big tech companies that developed the AI economy and are now training the most powerful AI models shows that profit trumps users’ safety and well-being time and again. “Move fast and break things” and “act first, apologise later” remain the lodestars. Sam Altman’s qualities of character and conduct while at the helm of OpenAI have come under considerable scrutiny over the last few months. At Davos in January this year, the Secretary-General of the United Nations complained:

“Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy, and social impact.”
How can it be right that the richest companies in history have no mandatory duties to financially support a robust safety framework? Surely, it should not be for the taxpayer alone to shoulder the costs of an AI digital hub to find and fix gaps that lead to risks or harm. Why should the taxpayer shoulder the cost of providing appropriate regulatory sandboxes for testing new product safety?

The Government’s five guiding principles are a good guide for AI, but they need legal powers underpinning them and the sharpened teeth of financial penalties for corporations that intentionally flout best practice, to the clear and obvious harm of consumers.

I commend the ambitions of the Bill. A whole-system, proportional and legally enforceable approach to regulating AI is urgently needed. Balancing industry’s need to innovate with its duty to respect human dignity and the vulnerable in society is vital if we are safely to navigate the many changes and challenges not just over the horizon but already in plain sight.