AI in the UK (Liaison Committee Report)

Wednesday 25th May 2022


Grand Committee
Lord Parkinson of Whitley Bay (Con)

My Lords, I am grateful to the noble Lord, Lord Clement-Jones, and all noble Lords who have spoken in today’s debate. I agree with the noble Lord, Lord McNally, that all the considerations we have heard have been hugely insightful and of very high quality.

The Government want to make sure that artificial intelligence delivers for people and businesses across the UK. We have taken important early steps to ensure we harness its enormous benefits, but agree that there is still a huge amount more to do to keep up with the pace of development. As the noble Lord, Lord Clement-Jones, said in his opening remarks, this is in many ways a moving target. The Government provided a formal response to the report of your Lordships’ committee in February 2021, but today’s debate has been a valuable opportunity to take stock of its conclusions and reflect on the progress made since then.

Since the Government responded to the committee’s 2020 report, we have published the National AI Strategy. The strategy, which I think it is fair to say has been well received, sets out three key objectives that will drive the Government’s activity over the next 10 years. First, we will invest and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower; secondly, we will support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK, and ensuring that AI benefits all sectors and parts of the country; and, thirdly, we will ensure the UK gets the national and international governance of AI technologies right to encourage innovation and investment, and to protect the public and the values that we hold dear.

We will provide an update on our work to implement our cross-government strategy through the forthcoming AI action plan but, for now, I turn to some of the other key themes covered in today’s debate. As noble Lords have noted, we need to ensure the public have trust and confidence in AI systems. Indeed, improving trust in AI was a key theme in the National AI Strategy. Trust in AI requires trust in the data which underpin these technologies. The Centre for Data Ethics and Innovation has engaged widely to understand public attitudes to data and the drivers of trust in data use, publishing an attitudes tracker earlier this year. The centre’s early work on public attitudes showed how people tend to focus on negative experiences relating to data use rather than positive ones. I am glad to say that we have had a much more optimistic outlook in this evening’s debate.

The National Data Strategy sets out the steps we will take to rebalance public perception, from one where only the risks of data use are seen to one where the opportunities are also recognised. It sets out our vision to harness the power of responsible data use to drive growth and improve services, including through AI-driven services. It describes how we will make data usable, accessible and available across the economy, while protecting people’s data rights and businesses’ intellectual property.

My noble friend Lord Holmes of Richmond talked about anonymisation. Privacy-enhancing technologies such as this were noted in the National Data Strategy, and the Centre for Data Ethics and Innovation, which leads the Government’s work to enable trustworthy innovation, is helping to take them forward in a number of ways. This year the centre will continue to ensure trustworthy innovation through a world-first AI assurance road map and will collaborate with the Government of the United States of America on a prize challenge to accelerate the development of a new breed of privacy-enhancing technologies, which enable data use in ways that preserve privacy.

Our approach includes supporting a thriving ecosystem of data intermediaries, including data trusts, which have been mentioned, to enable responsible data-sharing. We are already seeing data trusts being set up; for example, pilots on health data and data for communities are being established by the Data Trusts Initiative, hosted by the University of Cambridge, and further pilots are being led by the Open Data Institute. Just as we must shift the debate on data, we must also improve the public understanding and awareness of AI; this will be critical to driving its adoption throughout the economy. The Office for Artificial Intelligence and the Centre for Data Ethics and Innovation are taking the lead here, undertaking work across government to share best practice on how to communicate issues regarding AI clearly.

Key to promoting public trust in AI is having in place a clear, proportionate governance framework that addresses the unique challenges and opportunities of AI, which brings me to another of the key themes of this evening’s debate: ethics and regulation. The UK has a world-leading regulatory regime and a history of innovation-friendly approaches to regulation. We are committed to making sure that new and emerging technologies are regulated in a way that instils public confidence in them while supporting further innovation. We need to make sure that our regulatory approach keeps pace with new developments in this fast-moving field. That is why, later this year, the Government will publish a White Paper on AI governance, exploring how to govern AI technologies in an innovation-friendly way to deliver the opportunities that AI promises while taking a proportionate approach to risk so that we can protect the public.

We want to make sure that our approach is tailored to context and proportionate to the actual impact on individuals and groups in particular contexts. As noble Lords, including the right reverend Prelate the Bishop of Oxford, have rightly set out, those contexts can be many and varied. But we also want to make sure our approach is coherent so that we can reduce unnecessary complexity or confusion for businesses and the public. We are considering whether there is a need for a set of cross-cutting principles which guide how we approach common issues relating to AI, such as safety, and looking at how to make sure that there are effective mechanisms in place to ensure co-ordination across the regulatory landscape.

The UK has already taken important steps forward with the formation of the Digital Regulation Cooperation Forum, as the noble Lord, Lord Clement-Jones, and others have noted, but we need to consider whether further measures are needed. Finally, the cross-border nature of the international market means that we will continue to collaborate with key partners on the global stage to shape approaches to AI governance and facilitate co-operation on key issues.

My noble friend Lord Holmes of Richmond and the noble Lord, Lord Evans of Weardale, both referred to the data reform Bill and the issues it covers. DCMS has consulted on and put together an ambitious package of reforms to create a new pro-growth regime for data which is trusted by people and businesses. This is a pragmatic approach which allows data-driven businesses to use data responsibly while keeping personal information safe and secure. We will publish our response to that later this spring.

My noble friend also mentioned the impact of AI on jobs and skills. He is right that the debate has moved on in an encouraging and more optimistic way and that we need to address the growing skills gap in AI and data science and keep developing, attracting and training the best and brightest talent in this area. Since the AI sector deal in 2018, the Government have been making concerted efforts to improve the skills pipeline. There has been an increased focus on reskilling and upskilling, so that we can ensure that, where there is a level of displacement, there is redeployment rather than unemployment.

As the noble Lord, Lord Bilimoria, noted with pleasure, the Government worked through the Office for AI and the Office for Students to fund 2,500 postgraduate conversion courses in AI for students from near- and non-STEM backgrounds. That includes 1,000 scholarships for people from underrepresented backgrounds, and these courses are available at universities across the country. Last autumn, the Chancellor of the Exchequer announced that this programme would be bolstered by 2,000 more scholarships, so that many more people across the country can benefit from them. In the Spring Statement, 1,000 more PhD places were announced to complement those already available at 16 centres for doctoral training across the country. We want to build a world-leading digital economy that works for everyone. That means ensuring that as many people as possible can reap the benefits of new technologies. That is why the Government have taken steps to increase the skills pipeline, including introducing more flexible training routes into digital roles.

The noble Lord, Lord St John of Bletso, was right to focus on how the UK contributes to international dialogue on AI. The UK is playing a leading role in international discussions on ethics and regulation, including our work at the Council of Europe, UNESCO and the OECD. We should not forget that the UK was one of the founding members of the Global Partnership on Artificial Intelligence, the first multilateral forum looking specifically at this important area.

We will continue to work with international partners to support the development of rules on the use of AI. We have also taken practical steps to implement some of these high-level principles when delivering public services. In 2020, we worked with the World Economic Forum to develop guidelines for the responsible procurement of AI based on these values, which have since been put into operation through the Crown Commercial Service’s AI marketplace. This service has been renewed, and the Crown Commercial Service is exploring expanding the options available to government buyers. On an international level, this work resulted in a policy tool called “AI procurement in a box”, a framework for like-minded countries to adapt for their own purposes.

I am mindful that Second Reading of the Procurement Bill is taking place in the Chamber as we speak, competing with this debate. That Bill will replace the current process-driven EU regime for public procurement by creating a simpler and more flexible commercial system, but international collaboration and dialogue will continue to be a key part of our work in this area in the years to come.

The noble Lord, Lord Browne of Ladyton, spoke very powerfully about the use of AI in defence. The Government will publish a defence AI strategy this summer, alongside a policy ensuring the ambitious, safe and responsible use of AI in defence, which will include ethical principles based on extensive policy work together with the Centre for Data Ethics and Innovation. The policy will include an updated statement of our position on lethal autonomous weapons systems.

As the noble Lord, Lord Clement-Jones, said, there is no international agreement on the definition of such weapons systems, but the UK continues to contribute actively at the UN Convention on Certain Conventional Weapons, working closely with our international partners to build norms around their use and positive obligations to demonstrate how degrees of autonomy in weapons systems can be used in accordance with international humanitarian law. The Defence AI Centre will have a key role in delivering technical standards, including where these can support our implementation of ethical principles. The centre achieved initial operating capability last month and will continue to expand throughout this year, having already established joint military, government and industry multidisciplinary teams. The Centre for Data Ethics and Innovation has, over the past year, been working with the Ministry of Defence to develop ethical principles for the use of AI in defence—as, I should say, it has with the Centre for Connected and Autonomous Vehicles in the important context of self-driving vehicles.

The noble Baroness, Lady Merron, asked about the application of AI in the important sphere of the environment. Over the past two years, the Global Partnership on Artificial Intelligence’s data governance working group has brought together experts from across the world to advance international co-operation and collaboration in areas such as this. The UK’s Office for Artificial Intelligence provided more than £1 million to support two research projects on data trusts and data justice in collaboration with partner institutions including the Alan Turing Institute, the Open Data Institute and the Data Trusts Initiative at Cambridge University. These projects explored using data trusts to support action to protect our climate, as well as expanding understanding of data governance to include considerations of equity and justice.

The insights raised in today’s debate, and in the reports to which it relates, will continue to shape the Government’s thinking as we take forward our strategy on AI. As noble Lords have noted, by most measures the UK is a leader in AI, behind only the United States and China. We are home to one-third of Europe’s AI companies, twice as many as any other European nation. We are also third in the world for AI investment—again, behind the US and China—attracting twice as much venture capital as France and Germany combined. But we are not complacent: we are determined to keep building on our strengths and to maintain this global position. This evening’s debate has provided many rich insights into the further steps we must take to make sure that the UK remains an AI and science superpower. I am very grateful to noble Lords, particularly to the noble Lord, Lord Clement-Jones, for instigating it.

Lord Clement-Jones (LD)

My Lords, first I thank noble Lords for having taken part in this debate. We certainly do not lack ambition around the table, so to speak. I think everybody saw the opportunities and the positives, but also saw the risks and challenges. I liked the use by the noble Baroness, Lady Merron, of the word “grappling”. I think we have grappled quite well today with some of the issues, and I think the Minister, given the quite tricky cross-departmental need to pull everything together, made a very elegant fist of responding to the debate. Of course, inevitably, we want stronger meat in response on almost every occasion.

I am not going to do another wind-up speech, so to speak, but I think it was a very useful opportunity, prompted by the right reverend Prelate, to reflect on humanity. We cannot talk about artificial intelligence without talking about human intelligence. That is the extraordinary thing: the more you talk about what artificial intelligence can do, the more you have to talk about human endeavour and what humans can do. In that context, I congratulate the noble Lords, Lord Holmes and Lord Bilimoria, on their versatility. They both took part in the earlier debate, and it is very interesting to see the commonality between some of the issues raised in the previous debate on digital exclusion—human beings being excluded from opportunity—which arise also in the case of AI. I was very interested to see how, back to back, they managed to deal with all that.

The Minister said a number of things, but I think the trust and confidence aspect is vital. The proof of the pudding will be in the data reform Bill. I may differ slightly on that from the noble Lord, Lord Holmes, who thinks it is a pretty good thing, by the sound of it, but we do not know what it is going to contain. All I will say is that, when Professor Goldacre appeared before the Science and Technology Committee, it was a lesson for us all. He is the chap who has just written the definitive report on data use in health for the Department of Health, and he is the leading data scientist in that field, yet last year he deliberately opted out of the GP request for consent to share data. He was not convinced that his data would be safe. We can talk about trusted research environments and all that, but public trust in data use, whether in health or anything else, needs engagement by government and needs far more work.

The thing that frightens a lot of us is that we can see all the opportunities, but if we do not get it right, and if we do not get permission to use the technology, we cannot deploy it in the way we conceived, whether for the sustainable development goals or for other forms of public benefit in the public service. Provided we get the compliance mechanisms right, we can see the opportunities, but we have to get that public trust on board, not least in the area of lethal autonomous weapons. I think the perception of what the Government are doing in that area is very different from what the Ministry of Defence may think it is doing, particularly if it is developing some splendid principles of which we will all approve, when it is all about what is actually happening on the ground.

I will say no more. I am sure we will have further debates on this, and I hope that the Minister has enjoyed having to brief himself for this debate, because it is very much part of the department’s responsibilities.