Artificial Intelligence (Regulation) Bill [HL] Debate
Baroness Stowell of Beeston (Conservative - Life peer)
Lords Chamber

My Lords, it is a great pleasure to follow the noble and learned Lord, Lord Thomas, and his interesting speech. I remind noble Lords that the Communications and Digital Committee, which I have the privilege to chair, published our report Large Language Models and Generative AI only last month. For anyone who has not read it, I wholeheartedly recommend it, and I am going to draw heavily on it in my speech.
It is a pleasure to speak in a debate led by my noble friend Lord Holmes, and I congratulate him on all that he does in the digital and technology space. As he knows, I cannot support his Bill because I do not agree with the concept of an AI authority, although I have listened carefully to the arguments put forward by the noble and learned Lord, Lord Thomas, a moment ago. Nor would I encourage the Government to follow the Europeans and rush to develop overly specific legislation for this general-purpose technology.
That said, there is much common ground on which my noble friend and I can stand when it comes to our ambitions for AI, so I will say a little about that and where I see danger with the Government’s current approach to this massively important technological development.
As we have heard, AI is reshaping our world. Some of these changes are modest, and some are hype, but others are genuinely profound. Large language models in particular have the potential to fundamentally reshape our relationship with machines. In the right hands, they could drive huge benefits to our economy, supporting ground-breaking scientific research and much more.
I agree with my noble friend Lord Holmes about how we should approach AI. It must be developed and used to benefit people and society, not just big tech giants. Existing regulators must be equipped and empowered to hold tech firms to account as this technology operates in their own respective sectors, and we must ensure that there are proper safety tests for the riskiest models.
That said, we must maintain an open market for AI, and so any testing must not create barriers to entry. Indeed, one of my biggest fears is an even greater concentration of power among the big tech firms and repeating the same mistakes which led to a single firm dominating search, no UK-based cloud service, and a couple of firms controlling social media. Instead, we must ensure that generative AI creates new markets and, if possible, use it to address the existing market distortions.
Our large language model report looked in detail at what needs to happen over the next three years to catalyse AI innovation responsibly and mitigate risks proportionately. The UK is well-placed to be among the world leaders of this technology, but we can only achieve that by being positive and ambitious. The recent focus on existential sci-fi scenarios has shifted attention towards too narrow a view of AI safety. On its own, a concentration on safety will not deliver the broader capabilities and commercial heft that the UK needs to shape international norms. However, we cannot keep up with international competitors without more focus on supporting commercial opportunities and academic excellence. A rebalance in government strategy and a more positive vision is therefore needed. The Government should improve access to computing power, increase support for digital, and do more to help start-ups grow out of university research.
I do not wish to downplay the risks of AI. Many need to be addressed quickly—for example, cyberattacks and synthetic child sexual abuse, as well as bias and discrimination, which we have already heard about. The Government should scale up existing mitigations, and ensure industry improves its own guard-rails. However, the overall point is about balance. Regulation should be thoughtful and proportionate, to catalyse rather than stifle responsible innovation, otherwise we risk creating extensive rules that end up entrenching incumbents’ market power, and we throttle domestic industry in the process. Regulatory capture is a real danger that our inquiry highlighted.
Copyright is another danger, and this is where there is a clear case for government action now. The point of copyright is to reward innovation, yet tech firms have been exploiting rights holders by using works without permission or payment. Some of that is starting to change, and I am pleased to see some firms now striking deals with publishers. However, these remain small steps, and the fundamental question about respecting copyright in the first place remains unresolved.
The role for government here is clear: it should endorse the principles behind copyright, uphold fair play, and then update legislation accordingly. Unfortunately, the current approach remains unclear and inadequate. The Government have abandoned the IPO-led process, but apparently without anything more ambitious in its place. I hope for better news in the Government’s response to our report, expected next month, and it would be better still if my noble friend the Minister could say something reassuring today.
In the meantime, I am grateful to my noble friend Lord Holmes for providing the opportunity to debate such an important topic.