Watchdogs (Industry and Regulators Committee Report) Debate
Lord Holmes of Richmond (Conservative - Life peer)
(2 months, 1 week ago)
Lords Chamber

My Lords, it is a pleasure to take part in this debate. I congratulate the noble Lord, Lord Hollick, on his excellent introduction to the debate and thank him and the committee for an excellent report that covers so much ground with such clarity and detail. “Who watches the watchdogs?” has been the cry over centuries of human societies, and it is never more applicable than today, with the proliferation of regulators covering all aspects of our economy and society. Performance, independence and accountability are exactly the three legs of the tripod on which to examine how we in the UK regulate in the 21st century. The recommendations are clear, achievable and relevant, and I agree with all of them.
The themes running through the report are equally clear. There is a sense that it is as good as pointless—worse, harmful—simply to add more statutory objectives to regulators in the belief that this would impact performance and produce a better result for the market or consumers. Similarly, some regulators are able to fund themselves through levies and fees, and others have to go with their hand out to government. That financial structure must impact on the way that they operate, through no fault of their own.
The cry I hear running through the whole report is for clarity, consistency and coherence across the regulatory landscape. I agree entirely. This is never clearer than when we come to artificial intelligence where, currently, there is no regulator. The previous Government had the inadequate approach of writing a letter to all regulators to ask them what they intended to do when it comes to artificial intelligence. Will the Minister say what this Government’s approach will be to get the right regulatory framework for AI? I would certainly like to see an AI authority to review many of the provisions in my AI Private Member’s Bill, and I thank the noble Viscount, Lord Chandos, for his kind words about it.
When I say an AI authority, I do not mean a behemothic regulator covering all aspects of AI; I mean a right-sized, agile, nimble and, crucially, horizontally focused regulator to look across all the existing regulators to assess their competence, address the issues, challenges and opportunities of AI and identify the gaps where currently there is no recourse. For example, in recruitment, if you find yourself on the wrong end of a recruitment decision, often without even knowing that AI was in the mix, there is currently nowhere in the regulatory landscape to seek redress. Similarly, we need an AI authority to be the custodian of the principles we want to see: not just providing right-sized regulation of AI, but going further, with an ability to transform the way we regulate across the whole of our economy and society and to look at all legislation to assess its competence to address the challenges and opportunities of AI.
Will the Minister say where the Government currently are with the Regulatory Innovation Office? What will be its scope? How will it be funded? What will be its first tasks? Does she agree that it is high time we had an AI authority if we are to gain all the economic, social and psychological advantages and benefits of AI while being wholly conscious and competent to address all the risks and challenges? I suggest that such an AI authority would not only have a positive impact on how we go about regulating AI but could improve how we go about regulation and regulators across the piece: not just asking the question “Who watches the watchdog?”, but enabling those watchdogs to be more, enabling them to be guard dogs and to be guide dogs and, crucially, if the guard dog and the guide dog fail, empowering them to show their teeth.