My Lords, like others, I congratulate the noble Lord, Lord Holmes of Richmond, on his Private Member’s Bill, the Artificial Intelligence (Regulation) Bill. It has been a fascinating debate and one that is pivotal to our future. My noble friend Lord Leong apologises for his absence and I am grateful to the Government Benches for allowing me, in the absence of an AI-generated hologram of my noble friend, to take part in this debate. If the tone of my comments is at times his, that is because my noble friend is supremely organised and I will be using much of what he prepared for this debate. Like the noble Lord, Lord Young, I am relying heavily on osmosis; I am much more knowledgeable on this subject now than I was two hours ago.
My first jobs relied on some now-defunct technologies, although I still think that one of the most useful skills I learned was touch-typing. I learned that on a typewriter, complete with carbon paper and absolutely no use of Tipp-Ex allowed. However, automation and our continued and growing reliance on computers have improved many jobs rather than simply replacing them. By adopting new technologies such as AI, businesses can save money and increase productivity; AI can also release people from repetitive data-entry tasks, enabling them to focus on creative and value-added work. New jobs requiring different skills can be created and, while this is not the whole focus of the debate, how we enable people to take up those new jobs also needs to be a focus of government policy in this area.
As many noble Lords have observed, we stand on the brink of an AI revolution, one that has already started. It is already changing the way we live, the way we work and the way we relate to one another. I count myself in the same generation of film viewers as the noble Lord, Lord Ranger. The rapidly approaching tech transformation is unlike anything that humankind has experienced in its speed, scale and scope: 20th-century science fiction is becoming commonplace in our 21st-century lives.
As the noble Baroness, Lady Moyo, said, it is estimated that AI technology could contribute up to £15 trillion to the world economy by 2030. As many noble Lords mentioned, AI also presents government with huge opportunities to transform public services, potentially delivering billions of pounds in savings and improving services to the public. For example, it could help with the workforce crisis in health, particularly in critical health diagnostics, as highlighted by the noble Lord, Lord Empey. The noble Baroness, Lady Finlay, highlighted how the diagnosis of Covid lung has benefited from the use of AI, although, as she said, that introduces requirements for additional infrastructure. My noble friend Lord Davies also noted that AI can contribute to how we tackle climate change.
The use of AI by government underpins Labour’s missions to revive our country’s fortunes and ensure that the UK thrives and is at the forefront of the coming technological revolution. However, we should not and must not overlook the risks that may arise from its use, nor the unease around AI and the public’s lack of confidence in its use. Speaking as someone who generally focuses on education from these Benches, I would stress that this applies not least to the protection of children, as the noble Baroness, Lady Kidron, pointed out. AI can help education in a range of ways, but these uses also need regulation. As the noble Baroness said, we need rules to defend against the potential abuses.
Goldman Sachs predicts that the equivalent of 300 million full-time jobs globally will be replaced; this includes around a quarter of current work tasks in the US and Europe. Furthermore, as has been noted, AI can damage our physical and mental health. It can infringe upon individual privacy and, if not protected against, undermine human rights. Our collective response to these concerns must be as integrated and comprehensive as our embrace of the potential benefits. It should involve all stakeholders, from the public and private sectors to academia and civil society. Permission should and must be sought by AI developers for the use of copyright-protected work, with remuneration and attribution provided to creators and rights holders, an issue highlighted by the noble Lord, Lord Freyberg. Most importantly, we need transparency about what content is used to train generative AI models. I found the speech of the noble Earl, Lord Erroll, focusing on outcomes, of particular interest.
Around the world, countries and regions are already beginning to draft rules for AI. As the noble Lord, Lord Kirkhope, said, this does not need to stifle innovation. The Government’s White Paper on AI regulation adopted a cross-sector, outcome-based framework, underpinned by its five core principles. Unfortunately, the current White Paper contains no proposals to introduce a new AI regulator to oversee the implementation of the framework. Existing regulators, such as the Information Commissioner’s Office, Ofcom and the FCA, have instead been asked to implement the five principles from within their respective domains. As a number of noble Lords noted, the Ada Lovelace Institute has expressed concern about the Government’s approach, which it has described as “all eyes, no hands”. The institute says that, despite
“significant horizon-scanning capabilities to anticipate and monitor AI risks … it has not given itself the powers and resources to prevent those risks or even react to them effectively after the fact”.
The Bill introduced by the noble Lord, Lord Holmes, seeks to address these shortcomings and, as he said in his opening remarks: if not now, when? Until such time as an independent AI regulator is established, the challenge lies in ensuring the framework’s effective implementation across various regulatory domains, including data protection, competition, communications and financial services. A number of noble Lords mentioned the multitude of regulatory bodies involved, which means that effective co-ordination between them will be paramount. Regulatory clarity, which enables business to adopt and scale investment in AI, will bolster the UK’s competitive edge. The UK has so far focused on voluntary measures for general-purpose AI systems. As the right reverend Prelate the Bishop of Worcester said, this is not adequate: human rights and privacy must also be protected.
The noble Lord, Lord Kirkhope, noted that AI does not respect national borders. A range of international approaches to AI safety and governance are developing, some of which were mentioned by the noble Lord, Lord Fairfax. The EU has opted for a comprehensive and prescriptive legislative approach; the US is introducing some mandatory reporting requirements, for example for foundation models that pose serious national or economic security risks.
Moreover, a joint US-EU initiative is drafting a set of voluntary rules for AI businesses—the AI code of conduct. In the short term, these may serve as de facto international standards for global firms. Can the Minister tell your Lordships’ House whether the Government are engaging with this drafting? The noble Lord, Lord Empey, suggested that the Government have lost momentum. Can the Minister explain why the Government are allowing the UK to lose influence over the development of international AI regulation?
The noble Lord, Lord Clement-Jones, noted that the Library briefing states that this Bill marks a departure from the Government’s approach. The Government have argued that introducing legislation now would be premature and that the risks and challenges associated with AI, the regulatory gaps and the best way to address them must first be better understood. This cannot be the case. To use the horse analogy adopted by the noble Baroness earlier, we need to make sure that we do not act only after the horse has bolted.
I pay tribute, as others have done, to the work of the House of Lords Communications and Digital Committee, and I found the comments of its chair very helpful. We are facing an inflection point with AI, and it is regrettable that the Government’s response is not keeping up with the pace of change. Why are the Government procrastinating while all other G7 members are adopting a different, more proactive approach? A Labour Government would act decisively and not delay. Self-regulation is simply not enough.
The honourable Member for Hove, the shadow Secretary of State for Science, Innovation and Technology, outlined Labour’s plans recently at techUK’s annual conference. He said:
“Businesses need fast, clear and consistent regulation … that … does not unnecessarily slow down innovation”—
a point reflected in the comments of the noble and learned Lord, Lord Thomas. We also need regulation that encourages risk-taking and the search for new ways of working. We need regulation that addresses the concerns of the public and protects their privacy.
As my noble friend Lord Chandos said, the UK also needs to address concerns about misinformation and disinformation, not least in instances where these are democratic threats. This point was also reflected by the noble Lords, Lord Vaizey and Lord Fairfax.
Labour’s regulatory innovation office would give strategic steers aligned with our industrial strategy. It would set and monitor targets on regulatory approval timelines, benchmark against international comparators and strengthen the work done by the Regulatory Horizons Council. The public need to know that safety will be baked into how AI is used by both the public and the private sectors. A Labour Government would ensure that the UK public sector is a leader in responsibly and transparently applying AI. We will require safety reports from the companies developing frontier AI. We are developing plans to make sure that AI works for everyone.
Without clear regulation, widespread business adoption and public trust, the UK’s adoption of AI will be too slow. It is the Government’s responsibility to acknowledge and address how AI affects people’s jobs, lives, data and privacy, and the rapidly changing world in which they live. The Government are veering haphazardly between extreme risk, extreme optimism and extreme delay on this issue. Labour is developing a practical, well-informed and long-term approach to regulation.
In the meantime, we support and welcome the principles behind the Private Member’s Bill from the noble Lord, Lord Holmes, but we remain open-minded about the best solution in the current situation, while acknowledging that there is still much more to be done.