Lords Chamber

My Lords, I join other noble Lords in commending the noble Lord, Lord Holmes, for bringing forward this Bill.
I come to this debate with the fundamental belief that supporting innovation and investment must be embedded in all regulation, but even more so in the regulation of artificial intelligence. After all, this wave of artificial intelligence is being billed as a catalyst that could propel economic growth and human progress for decades to come. The United Kingdom should not miss this supercycle and the promise of a lengthy period of economic expansion—the first of its kind since deglobalisation and deregulation 40 years ago.
With this in mind, in reading the AI regulation Bill I am struck by the weight of emphasis on risk mitigation, as opposed to innovation and investment. I must say this notwithstanding the fact that I realise the Government, through other routes, including the pro-innovation stance that we talked about, are looking into innovation and investment. Even so, I feel that, on balance, the weight here is more on risk mitigation than innovation. I am keen that, in the drafting and execution of the artificial intelligence authority’s mandate in particular, and in the evolution of this Bill in general, the management of risk does not deter investment in this game-changing innovation.
I am of course reassured that innovation and opportunity are mentioned at least twice in the Bill. For example, Clause 6(a) signals that the public engagement exercise will consider
“the opportunities and risks presented by AI”.
Perhaps more pointedly, Clause 1(2)(e) states that the list of functions of the AI Authority is to include support for innovation. However, this mandate is at best left open to interpretation and at worst downgrades the importance and centrality of innovation.
My concern is that the new AI authority could see support for innovation as a distant or secondary objective, and that risk-aversion and mitigation become the cultural bedrock of the organisation. If we were to take too heavy-handed a risk-mitigation approach to AI, what opportunities could be missed? In terms of economic growth, as my noble friend Lord Holmes mentioned, PricewaterhouseCoopers estimates that AI could contribute more than $15 trillion to the world economy by 2030. In this prevailing era of slow economic growth, AI could meaningfully alter the growth trajectory.
In terms of business, AI could spur a new start-up ecosystem, creating a new generation of small and medium-sized enterprises. Furthermore, to underscore this point, AI promises productivity gains that could help generate an additional $4.4 trillion in annual profits, according to a 2023 report by McKinsey. To place this in context, this annual gain is nearly one and a half times the size of the UK’s annual GDP.
On public goods such as education and healthcare, the Chancellor in his Spring Budget a few weeks ago indicated the substantial role that a technology upgrade, including the use of AI, could play in improving delivery and access and in unlocking up to £35 billion of savings.
Clearly, a lot is at stake. This is why it is imperative that this AI Bill, and the way it is interpreted, strikes the right balance between mitigating risk and supporting investment and innovation.
I am very much aware of the perennial risks posed by malevolent state actors and errant new technologies, and thus the need for effective regulation is clear, as the noble and learned Lord, Lord Thomas, stressed. This is unambiguous, and I support the Bill. However, we must be alert to the danger of regulation becoming a synonym for risk management. This would overshadow the critical regulatory responsibility of ensuring a competitive environment in which innovation can thrive and thereby attract investment.