Monday 24th July 2023

Lords Chamber
The Parliamentary Under-Secretary of State, Department for Science, Innovation and Technology (Viscount Camrose) (Con)

I join all noble Lords in thanking the noble Lord, Lord Ravensdale, for tabling such an important debate on how we establish the right guardrails for AI—surely, as many noble Lords have said, one of the most pressing issues of our time. I thank all noble Lords who have spoken so persuasively and powerfully; it has been a debate of the highest quality and I am delighted to take part in it. A great many important points have been raised. I will do my best to address them all and to associate noble Lords with the points they have raised. If I inadvertently neglect to do either, that is by no means on purpose and I hope noble Lords will write to me. I am very keen to continue the conversation wherever possible.

As many have observed, AI is here right now, underpinning ever more digital services, such as public transport apps, and innovations on the horizon, from medical breakthroughs to driverless cars. There are huge opportunities to drive productivity and prosperity across the economy, with some analysts predicting a tripling of growth for economies that make the most of this transformational technology.

However, with rapid advances in AI technologies come accelerated and altogether new risks, from the amplification of unfair biases in data to the fabricated answers, often called hallucinations, that AI chatbots produce when there are gaps in their training data. Potential risks may be difficult to quantify and measure, as mentioned by many today, such as the possibility of super-intelligence putting humanity in danger. Clearly, these risks need to be carefully considered and, where appropriate, addressed by the Government.

As stated in the AI regulation White Paper, unless our regulatory approach addresses the significant risks caused or amplified by AI, the public will not trust the technology and we will fail to maximise the opportunities it presents. To drive trust in AI, it is critical to establish the right guardrails. The principles at the heart of our regulatory framework articulate what responsible, safe and reliable AI innovation should look like.

This work is supported by the Government’s commitment to tools for trustworthy AI, including technical standards and assurance techniques. These important tools will ensure that safety, trust and security are at the heart of AI products and services, while boosting international interoperability on AI governance, as referenced by the noble Lord, Lord Clement-Jones. Initiatives such as the AI Standards Hub and a portfolio of AI assurance techniques support this work.

The principles will be complemented by our work to strengthen the evidence base on trustworthy AI, as noted by the noble Lords, Lord Ravensdale and Lord St John. Building safe and reliable AI systems is a difficult technical challenge, on which much excellent research is being conducted. To answer the point on compute from the noble Lords, Lord Ravensdale and Lord Watson, the Government have earmarked £900 million for exascale compute and AI research resource as of March this year. I agree with the noble Lord, Lord Kakkar, that data access is critical for the UK’s scientific leadership and competitiveness. The National Data Strategy set out a pro-growth approach with data availability and confidence in responsible use at its heart.

I say to my noble friend Lord Holmes that, while synthetic data can be a tool to address some issues of bias, there are additional dangers in training models on data that is computer-generated rather than drawn from the real world. To reassure the noble Lord, Lord Bilimoria, this Government have invested significantly in AI since 2014, for the better part of 10 years: some £2.5 billion, in addition to the £900 million earmarked for exascale compute that I have mentioned. That investment includes specific R&D projects on trustworthy AI, among them a further £31 million UKRI research grant into responsible and trustworthy AI this year.

The Foundation Model Taskforce will provide further critical insights into this question. We have announced £100 million of initial funding for the Foundation Model Taskforce and Ian Hogarth, as its chair—to address the concerns of the noble Lord, Lord Browne—will report directly to the Prime Minister and the Technology Secretary. Linked to this, I thank my noble friend Lady Stowell of Beeston for raising the Communications and Digital Committee’s inquiry into large language models, which she will chair. This is an important issue, and my department will respond to the inquiry.

As the Prime Minister has made clear, we are taking action to establish the right guardrails for AI. The AI regulation White Paper, published this March, set out our proportionate, outcomes-focused and adaptable regulatory framework—important characteristics noted by many noble Lords. As the noble Lord, Lord Clement-Jones, noted, our approach is designed to adapt as this fast-moving technology develops and respond quickly as risks emerge or escalate. We will ensure that there are protections for the public without holding businesses back from using AI technology to deliver stronger economic growth, better jobs and bold new discoveries that radically improve people’s lives.

The right reverend Prelate the Bishop of Oxford and my noble friend Lord Holmes raised points about ethics and accountability. Our approach is underpinned by a set of values-based principles, aligned with the OECD and reflecting the ethical use of AI through concepts such as fairness, transparency and accountability. To reassure the noble and right reverend Lord, Lord Harries, we are accelerating our work to establish the central functions proposed in the White Paper, including horizon-scanning and risk assessment. These central functions will allow the Government to identify, monitor and respond to AI risks in a rigorous way—including existential risks, as raised by my noble friend Lord Fairfax, and biosecurity risks, referred to by the noble Lord, Lord Anderson.

We recognise the importance of regulator upskilling and co-ordination, as noted by several noble Lords. Our central functions will support existing regulators to apply the principles, using their sectoral expertise, and our regulatory sandbox will help build best practice.

I thank noble Lords for their emphasis on stakeholder inclusion. We made it clear in the White Paper that we are taking a collaborative approach and are already putting this into practice. For example, to answer the inquiry of the noble Lord, Lord Bilimoria, I am pleased that the White Paper sets out plans to create an education and awareness function to make sure that a wide range of groups are empowered and encouraged to engage with the regulatory framework.

In addition to meetings with the major AI developers—the multinational conglomerates noted by the noble Lord, Lord Rees—the Prime Minister, the Technology Secretary and I have met British-based AI start-ups and scale-ups. We heard from more than 300 people at round tables and workshops organised as part of our recent consultation on the White Paper, including civil society organisations and trade unions that I was fortunate enough to speak with personally. More than 400 stakeholders have sent us written evidence. To reassure the right reverend Prelate the Bishop of Oxford, we also continue to collaborate with our colleagues across government, including the Centre for Data Ethics and Innovation, which leads the Government’s work to enable trustworthy innovation, using data and AI to earn public trust.

It is important to note that the proposals put forward in the White Paper work in tandem with legislation currently going through Parliament, such as the Online Safety Bill and the Data Protection and Digital Information Bill. We were clear that the AI regulation White Paper is a first step in addressing the risks and opportunities presented by AI. We will review and adapt our approach in response to the fast pace of this technology. We are unafraid to take further steps if needed to ensure safe and responsible AI innovation.

As we have heard in this debate, the question of copyright protection, and how it applies to training materials and outputs from generative AI, is an important one to get right. To the several noble Lords who asked for the Government's view under copyright law, I can confirm this Government's position that, under existing law, copying works in order to extract data from them will infringe copyright unless that copying is permitted under a licence or exception. The legal question of exactly what is permitted under existing copyright exceptions is the subject of ongoing litigation, on the details of which I will not comment.

To respond to my noble friend Lady Stowell and the noble Lords, Lord Watson and Lord Clement-Jones, we believe that the involvement of both the AI and creative sectors in the discussions the IPO is currently facilitating will help with the creation of a balanced and pragmatic code of practice that will enable both sectors to grow in partnership.

The noble Earl, Lord Devon, raised the question of AI inventions. The Government have committed to keep this area under review. As noble Lords may be aware, this issue is currently being considered in the DABUS case, and we are closely monitoring that litigation.

The noble Lord, Lord Rees, and the noble Baroness, Lady Primarolo, raised the important issue of the impact of AI on the labour market. I note that AI has the potential to be a net creator of jobs. The World Economic Forum’s Future of Jobs Report 2023 found that, while 25% of organisations expect AI to lead to job losses, 50% expect it to create job growth. However, even with such job growth, we can anticipate disruption—a point raised by the noble Lord, Lord Bassam, and others.

Many noble Lords asked whether our postgraduate AI conversion courses and scholarships will expand the AI workforce. We are working with partners to develop research to help employees understand what skills they need to use AI effectively. The Department for Work and Pensions' job-matching pilot is assessing how new technologies such as AI might support jobseekers.

To address the point made by the noble Baroness, Lady Primarolo, on workplace surveillance, the Government recognise that the deployment of technologies in a workplace context involves consideration of a wide range of regulatory frameworks—not just data protection law but also human rights law, legal frameworks relating to health and safety and, most importantly, employment law. We outline a commitment to contestability and redress in our White Paper principles: where AI might challenge someone’s rights in the workplace, the UK has a strong system of legislation and enforcement of these protections.

In response to the concerns about AI’s impact on journalism, raised by the noble Viscount, Lord Colville, I met with the Secretary of State for Culture, Media and Sport last week. She has held a number of meetings with the sector and plans to convene round tables with media stakeholders on this issue.

To address the point made by the noble Viscount, Lord Chandos, companies subject to the Online Safety Bill’s safety duties must take action against illegal content online, including illegal misinformation and disinformation produced by AI. I also note that the Online Safety Bill will regulate generative AI content on services that allow user interaction, including using AI chatbots to radicalise others, especially young people. The strongest protections in the Bill are for children: platforms will have to take comprehensive measures to protect them from harm.

The Government are, of course, very aware of concerns around the adoption of AI in the military, as raised by the noble Lord, Lord Browne, and the noble and gallant Lord, Lord Houghton. We are determined to adopt AI safely and responsibly, because no other approach would be in line with the values of the British public. It is clear that the UK and our allies must adopt AI with pace and purpose to maintain the UK’s competitive advantage, as set out in the Ministry of Defence’s Defence Artificial Intelligence Strategy.

On the work the UK is doing with our international partners, the UK’s global leadership on AI has a long precedent. To reassure the noble Viscount, Lord Chandos, the UK is already consistently ranked in the top three countries for AI across a number of metrics. I also reassure the noble Lord, Lord Freyberg, that the UK already plays an important role in international fora, including the G7’s Hiroshima AI Process, the Council of Europe, the OECD, UNESCO and the G20, as well as through being a founding member of the Global Partnership on AI. Our leadership is recognised internationally, with President Biden commending the Prime Minister’s ambition to make the UK the home of AI safety. Demonstrating our leadership, on 18 July the Foreign Secretary chaired the first ever UN Security Council briefing on AI, calling on the world to come together to address the global opportunities and challenges of AI, particularly in relation to peace and security.

To reassure my noble friend Lord Udny-Lister and the noble Lord, Lord Giddens, on the importance of assessing AI opportunities and risks with our international partners, it is clear to this Government that the governance of AI is a subject of global importance. As such, it requires global co-operation. That is why the UK will host the first major global summit on AI safety this year. Some noble Lords have questioned the UK’s convening power internationally. The summit will bring together key countries, as well as leading tech companies and researchers, to drive targeted, rapid international action to guarantee safety and security at the frontier of this technology.

To bring this to a close, AI has rapidly advanced in under a decade and we anticipate further rapid leaps. These advances bring great opportunities, from improving diagnostics in healthcare to tackling climate change. However, they also bring serious challenges, such as the threat of fraud and disinformation created by deepfakes. We note the stark warnings from AI pioneers—however uncertain they may be—about artificial general intelligence and AI biosecurity risks.

The UK already has a reputation as a global leader on AI as a result of our thriving AI ecosystem, our world-class institutions and AI research base and our respected rule of law. Through our work to establish the rules that govern AI and create the mechanisms that enable the adaptation of those rules, the Foundation Model Taskforce and the forthcoming summit on AI safety, we will lead the debate on safe and responsible AI innovation. We will unlock the extraordinary benefits of this landmark technology while protecting our society and keeping the public safe. I thank all noble Lords for today’s really important debate—the insights will help guide our next steps on this critical agenda.