Lords Chamber
My Lords, I join in thanking my noble friend Lord Ravensdale for introducing this topical and very important debate. I was fortunate to be a member of the Lords Select Committee, together with the right reverend Prelate the Bishop of Oxford as well as the noble Lord, Lord Holmes of Richmond, ably chaired by the noble Lord, Lord Clement-Jones, some six years ago. We did a deep dive into the benefits of AI, as well as the risks. One of our conclusions was the necessity for joined-up thinking when it comes to regulation.
There is no denying that AI is the most powerful technology of our times, but many are getting alarmed at the speed of its delivery. It took Facebook four and a half years to get 100 million users; it took Google two and a half years to get 100 million users; but it took ChatGPT just two months.
I particularly welcome its potential for advancing personalised healthcare as well as education. It will also accelerate the deployment of transformational climate solutions and, no doubt, in the bigger picture of the economy it will accelerate a rapid surge in productivity. However, that raises the question of which jobs will be augmented by AI. My simple answer is that we have to focus a lot more on upskilling across all SMEs to take account of what AI will bring in the future. It is generally accepted that the long-term impact of AI on employment in the UK will be broadly neutral, but the impact of generative AI on productivity could add trillions of dollars in value to the global economy.
I listened yesterday to the podcast “The AI Dilemma”, with leading global AI experts debating the potential risks. What I found alarming was that 50% of leading AI researchers believe that there is a 10% or greater chance that humans could go extinct as a result of their inability to control AI. Personally, I do not share that alarmist approach. I do not believe that there is an existential threat while the focus is on narrow AI. General AI, on the other hand, remains a theoretical concept and has not yet been achieved.
On the question of regulation, there have been growing calls for Governments around the world to adapt quickly to the challenges that AI is already delivering and to the potential transformational changes in the future, with their associated risk factors. There have been more and more calls for a moratorium on AI development. Equally, I do not believe that that is possible. Regulators need cross-collaboration and cross-regulation to solve the biggest problems. There is also a need for more evidence gathering and case studies.
Public trust in AI is going to be a major challenge. Communication beyond policy is important for private and public understanding. In terms of regulation, the financial services sector’s use of AI is a lot more regulated than any other sector’s. Just as regulators need to focus on addressing safety risks in the use of AI, so the regulators themselves need to be upskilled with the new advances in this technology.
DCMS appears to be taking a light touch in regulating AI, which I welcome. I also welcome initiatives by UKRI to fund responsible AI and an AI task force. However, there needs to be more focus on helping the different regulators build ecosystems to rejuvenate the AI supply chain. The future is uncertain.
The public sector, as we all know, is likely to be a lot slower to embrace the benefits of AI. By way of example, I am sure that the Department for Work and Pensions could do a lot more with all its data to formulate algorithms for greater efficiency and provide better support to the public in these difficult times. We need a co-ordinated approach across all government departments to agree an AI strategy.
In conclusion, there is no doubt that AI is here to stay and no doubt about the incredible potential it holds, but we need joined-up thinking and collaboration. It is accepted that AI will never be able to replace humans, but humans with AI could potentially replace humans who do not embrace AI.