Artificial Intelligence (Regulation) Bill [HL]

2nd reading
Friday 22nd March 2024


Lords Chamber
Lord Ranger of Northwood (Con)

My Lords, are we ready for the power of artificial intelligence? With each leap in human ability to invent and change what we can achieve, we have utilised a new power, a new energy that has redefined the boundaries of imagination: steam and the Industrial Revolution; electricity and the age of light; and so, again, we stand on the precipice of another seismic leap.

However, the future of AI is not just about what we can do with it but about who will have access to and control of its power. So I welcome the attempt made by my noble friend Lord Holmes, via this Bill, to encourage an open public debate on democratic oversight of AI, but I do have some concerns. Our view of AI at this early stage is heavily coloured by how this power will deliver automation, potentially reducing process-reliant jobs, and by how those who hold the pen on writing the algorithms behind AI could exert vast power and influence over the masses via media manipulation. We fear that the AI genie is out of the bottle and that we may not be able to control it. The sheer, limitless potential of AI is intimidating.

If, like me, you are from a certain generation, these seeds of fear and fascination at the power of artificial intelligence have long been planted by numerous Hollywood movies picking on our hopes, dreams and fears of what AI could do to us. Think of the unnerving subservience of HAL in Stanley Kubrick’s “2001: A Space Odyssey” made in 1968, the menacing and semi-obedient robot Maximilian from the 1979 Disney production “The Black Hole”, the fantasy woman called Lisa created by the power of 80s home computing in “Weird Science” from 1985, and, of course, the ultimate hellish future of machine intelligence taking over the world in the form of Skynet in “The Terminator” made in 1984. These and many other futuristic interpretations of AI helped to fan the flames in the minds of engineers, computer scientists and super-geeks, many of whom created and now run the biggest tech firms in the world.

But where are we now? The advancement in processing power, coupled with vast amounts of big data and developments such as large language models, has led to the era of commercialisation of AI. Dollops of AI are available in everyday software programs via chatbots and automated services. Obviously, the emergence of ChatGPT turbocharged public awareness and usage of the technology. We have poured algorithms into machines and made them “think”. We have stopped prioritising trying to get robots to look and feel like us, and focused instead on the automation of systems and processes, enabling them to do more activities. We have moved from the pioneering era to the application era of AI.

With all this innovation, with so many opportunities and benefits to be derived by its application, what should we fear? My answer is not from the world of Hollywood science fiction; it relates not to individuals losing control to machines but, rather, to how we will ensure that this power remains democratic and accessible and benefits the many. How will we ensure that control does not fall into the hands of the few, that wealth does not determine the ability to benefit from innovation and that a small set of organisations do not gain ultimate global control or influence over our lives? How, also, will we ensure that Governments and bureaucracies do not end up ever furthering the power and control of the state through well-intentioned regulatory control? This is why we must appreciate the size of this opportunity, think about the long-term future, and start to design the policy frameworks and new public bodies that will work in tandem with those who will design and deliver our future world.

But here is the rub: I do not believe we can control, manage or regulate this technology through a single authority. I am extremely supportive of the ambitions of my noble friend Lord Holmes to drive this debate. However, I humbly suggest that the question we need to focus on is how we can ensure that the innovations, outcomes and quality services that AI delivers are beneficial and well understood. The Bill as it stands may be overambitious in the scope it gives this AI authority: to act as oversight across other regulators; to assess safety, risks and opportunities; to monitor risks across the economy; to promote interoperability and regulatory frameworks; and to act as an incubator for innovation. To achieve this and more, the AIA would need vast cross-cutting capability and resources. Again, I appreciate what my noble friend Lord Holmes is trying to achieve and, as such, I would say that we need to consider with more focus the questions that we are trying to answer.

I wholeheartedly believe and agree that the critical role will be to drive public education, engagement and awareness of AI, and where and how it is used, and to clearly identify the risks and benefits to the end-users, consumers, customers and the broader public. However, I strongly suggest that we do not begin this journey by requiring labelling, under Clause 5(1)(a)(iii), using “unambiguous health warnings” on AI products or services. That would not help us to work hand in hand with industry and trade bodies to build trust and confidence in the technology.

I believe there will eventually be a need for some form of future government body to help provide guidance to both industry and the public about how AI outcomes, especially those in delivering public sector services, are transparent, fair in design and ethical in approach. Such a body will need to take note of the approach of other nations and will need to engage with local and global businesses to test and formulate the best way forward. So, although I am sceptical of many of the specifics of the Bill, I welcome and support the journey that it, my noble friend Lord Holmes and this debate are taking us on.