AI in the UK (Liaison Committee Report) Debate
Grand Committee

Moved by Lord Clement-Jones (Liberal Democrat)

That the Grand Committee takes note of the Report from the Liaison Committee, AI in the UK: No Room for Complacency (7th Report, Session 2019–21, HL Paper 196).
My Lords, the Liaison Committee report No Room for Complacency was published in December 2020, as a follow-up to our AI Select Committee report, AI in the UK: Ready, Willing and Able?, published in April 2018. Throughout both inquiries and right up until today, the pace of development here and abroad in AI technology, and the discussion of AI governance and regulation, have been extremely fast-moving. Today, just as then, I know that I am attempting to hit a moving target. Just take, for instance, the announcement a couple of weeks ago about the new Gato—the multipurpose AI which can perform 604 functions—or, perhaps less optimistically, the Clearview fine. Both have relevance to what we have to say today.
First, however, I say a big thank you to the then Liaison Committee for the new procedure which allowed our follow-up report, to the current Lord Speaker, Lord McFall, in particular, and to those members of our original committee who took part. I give special thanks to the Liaison Committee team of Philippa Tudor, Michael Collon, Lucy Molloy and Heather Fuller, and to Luke Hussey and Hannah Murdoch from our original committee team, who more than helped bring the band, and our messages, back together.
So what were the main conclusions of our follow-up report? What was the government response, and where are we now? I shall tackle this under five main headings.

The first is trust and understanding. The adoption of AI has made huge strides since we started our first report, but the trust issue still looms large. Nearly all our witnesses in the follow-up inquiry said that engagement continued to be essential across business and society in particular, to ensure that there is greater understanding of how data is used in AI, and that government must lead the way. We said that the development of data trusts must speed up. They were the brainchild of the Hall-Pesenti report back in 2017, as a mechanism for giving assurance about the use and sharing of personal data, but we now needed to focus on developing the legal and ethical frameworks. The Government acknowledged that the AI Council’s road map took the same view and pointed to the ODI’s work and the national data strategy. However, there has been too little recent progress on data trusts. The ODI has done some good work, together with the Ada Lovelace Institute, but this needs taking forward as a matter of urgency, particularly the guidance on legal structures. If anything, the proposals in Data: A New Direction, presaging a new data reform Bill in the autumn, which propose watering down data protection, are a backward step.
More needs to be done generally on digital understanding. The digital literacy strategy needs to be much broader than digital media, and a strong digital competition framework has yet to be put in place. Public trust has not been helped by confusion and poor communication about the use of data during the pandemic, and initiatives such as the Government’s single identifier project, together with automated decision-making and live facial recognition, give rise to a real concern that we are approaching an all-seeing state.
My second heading is ethics and regulation. One of the main areas of focus of our committee throughout has been the need to develop an appropriate ethical framework for the development and application of AI, and we were early advocates for international agreement on the principles to be adopted. Back in 2018, the committee took the view that blanket regulation would be inappropriate, and we recommended an approach to identify gaps in the regulatory framework where existing regulation might not be adequate. We also placed emphasis on the importance of regulators having the necessary expertise.
In our follow-up report, we took the view that it was now high time to move on to agreement on the mechanisms for instilling what are now commonly accepted ethical principles—I pay tribute to the right reverend Prelate for coming up with the idea in the first place—and to establish national standards for AI development and AI use and application. We referred to the work being undertaken by the EU and the Council of Europe, with their risk-based approaches, and also made recommendations focused on the development of expertise and a better understanding by regulators of the risks of AI systems. We highlighted an important advisory role for the Centre for Data Ethics and Innovation and urged that it be placed on a statutory footing.
We welcomed the formation of the Digital Regulation Cooperation Forum. It is clear that all the regulators involved—I apologise in advance for the initials: the ICO, the CMA, Ofcom and the FCA—have made great strides in building a centre of excellence in AI and algorithm audit and in making this public. However, despite the publication of the National AI Strategy and its commitment to trustworthy AI, we still await the Government’s proposals on AI governance in the forthcoming White Paper.
The debate within government about whether to have a horizontal or a vertical, sector-based framework for regulation seems still to continue. However, it is clear to me, particularly for accountability and transparency, that some horizontality across government, business and society is needed to embed the OECD principles. At the very least, we need to be mindful that the extraterritoriality of the EU AI Act means that a level of regulatory conformity will be required, and that there is a strong need for standards for impact as well as risk assessment, audit and monitoring to be enshrined in regulation to ensure, as techUK urges, that we consider the entire AI lifecycle.
We need to consider particularly what regulation is appropriate for those applications which are genuinely high risk and high impact. I hope that, through the recently created AI standards hub, the Alan Turing Institute will take this forward at pace. All this has been emphasised by the debate on the deployment of live facial recognition technology, the use of biometrics in policing and schools, and the use of AI in criminal justice, recently examined by our own Justice and Home Affairs Committee.
My third heading is government co-ordination and strategy. Throughout our reports we have stressed the need for co-ordination between a very wide range of bodies, including the Office for Artificial Intelligence, the AI Council, the CDEI and the Alan Turing Institute. In our follow-up inquiry, we still believed that more should be done to ensure that this was effective, so we recommended a Cabinet committee which would commission and approve a five-year national AI strategy, as did the AI road map.
In response, the Government did not agree to create a committee but did commit to the publication of a cross-government national AI strategy. I pay tribute to the Office for AI, in particular its outgoing director Sana Khareghani, for its work on this. The objectives of the strategy are absolutely spot on, and I look forward to seeing the national AI strategy action plan, which it seems will show how cross-government engagement is fostered. However, the report on AI and public standards by the Committee on Standards in Public Life—I am delighted that the noble Lord, Lord Evans, will speak today—made clear the deficiencies in common standards in the public sector.
We now have an ethics, transparency and accountability framework for automated decision-making in the public sector and, more recently, the CDDO-CDEI public sector algorithmic transparency standard, but there appears to be no compliance mechanism for central and local government and little transparency in the form of a public register, and the Home Office still appears to be a law unto itself. We have AI procurement guidelines based on the World Economic Forum model, but nothing relevant to them in the Procurement Bill, which is being debated as we speak. I believe we still need a government mechanism for co-ordination and compliance at the highest level.
The fourth heading is impact on jobs and skills. Opinions differ over the potential impact of AI but, whatever the chosen prognosis, we said there was little evidence that the Government had taken a really strategic view about this issue and the pressing need for digital upskilling and reskilling. Although the Government agreed that this was critical and cited a number of initiatives, I am not convinced that the pace, scale and ambition of government action really matches the challenge facing many people working in the UK.
The Skills and Post-16 Education Act, with its introduction of a lifelong loan entitlement, is a step in the right direction, and I welcome the renewed emphasis on further education and the new institutes of technology. The Government refer to AI apprenticeships, but apprenticeship levy reform is long overdue. The work of local digital skills partnerships and digital boot camps is welcome, but they are greatly under-resourced and only a patchwork. The recent Youth Unemployment Select Committee report Skills for Every Young Person noted the severe lack of digital skills and the need to embed digital education in the curriculum, as did the AI road map. Alongside this, we shared the AI Council road map’s priority of more diversity and inclusion in the AI workforce and wanted to see more progress.
At the less rarefied end, although there are many useful initiatives on foot, not least from techUK and Global Tech Advocates, it is imperative that the Government move much more swiftly and strategically. The All-Party Parliamentary Group on Diversity and Inclusion in STEM recommended in a recent report a STEM diversity decade of action. As mentioned earlier, broader digital literacy is crucial too. We need to learn how to live and work alongside AI.
The fifth heading is the UK as a world leader. It was clear to us that the UK needs to remain attractive to international research talent, and we welcomed the Global Partnership on AI initiative. The Government in response cited the new fast-track visa, but there are still strong concerns about the availability of research visas for entry to university research programmes. The failure to agree participation in, and the resulting lack of access to, EU Horizon research funding could have a huge impact on our ability to punch our weight internationally.
How the national AI strategy is delivered in terms of increased R&D and innovation funding will be highly significant. Of course, who knows what ARIA may deliver? In my view, key weaknesses remain in the commercialisation and translation of AI R&D. The recent debate on the Science and Technology Committee’s report on catapults reminded us that this aspect is still a work in progress.
Recent Cambridge round tables have confirmed to me that we have a strong R&D base and a growing number of potentially successful spin-outs from universities, with the help of their dedicated investment funds, but when it comes to broader venture capital culture and investment in the later rounds of funding, we are not yet on a par with Silicon Valley in terms of risk appetite. For AI investment, we should now consider something akin to the dedicated film tax credit which has been so successful to date.
Finally, we had, and have, the vexed question of lethal autonomous weapons, which we raised in the original Select Committee report and in the follow-up, particularly in the light of the announcement at the time of the creation of the autonomy development centre in the MoD. Professor Stuart Russell, who has long campaigned on this subject, cogently made the case for limiting these weapons in his second Reith Lecture. In both our reports we said that one of the big disappointments was the lack of a definition of “autonomous weapons”. That position subsequently changed, and we were told in the Government’s response to the follow-up report that NATO had agreed a definition of “autonomous” and “automated”. However, there is still no comprehensive definition of lethal autonomous weapons, despite evidence that they have clearly already been deployed in theatres such as Libya, and the UK has firmly set its face against limitation of such weapons in international fora such as the CCW.
For a short report, our follow-up report covered a great deal of ground, which I have tried to cover at some speed today. AI lies at the intersection of computer science, moral philosophy, industrial education and regulatory policy, which makes how we approach the risks and opportunities inherent in this technology vital and difficult. The Government are engaged in a great deal of activity. The question, as ever, is whether it is focused enough and whether the objectives, such as achieving trustworthy AI and digital upskilling, are going to be achieved through the actions taken so far. The evidence of success is clearly mixed. Certainly there is still no room for complacency. I very much look forward to hearing the debate today and to what the Minister has to say in response. I beg to move.
My Lords, I am grateful to the noble Lord, Lord Clement-Jones, and all noble Lords who have spoken in today’s debate. I agree with the noble Lord, Lord McNally, that all the considerations we have heard have been hugely insightful and of very high quality.
The Government want to make sure that artificial intelligence delivers for people and businesses across the UK. We have taken important early steps to ensure we harness its enormous benefits, but agree that there is still a huge amount more to do to keep up with the pace of development. As the noble Lord, Lord Clement-Jones, said in his opening remarks, this is in many ways a moving target. The Government provided a formal response to the report of your Lordships’ committee in February 2021, but today’s debate has been a valuable opportunity to take stock of its conclusions and reflect on the progress made since then.
Since the Government responded to the committee’s 2020 report, we have published the National AI Strategy. The strategy, which I think it is fair to say has been well received, sets out three key objectives that will drive the Government’s activity over the next 10 years. First, we will invest and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower; secondly, we will support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK and ensuring that AI benefits all sectors and parts of the country; and, thirdly, we will ensure that the UK gets the national and international governance of AI technologies right, to encourage innovation and investment and to protect the public and the values that we hold dear.
We will provide an update on our work to implement our cross-government strategy through the forthcoming AI action plan but, for now, I turn to some of the other key themes covered in today’s debate. As noble Lords have noted, we need to ensure the public have trust and confidence in AI systems. Indeed, improving trust in AI was a key theme in the National AI Strategy. Trust in AI requires trust in the data which underpin these technologies. The Centre for Data Ethics and Innovation has engaged widely to understand public attitudes to data and the drivers of trust in data use, publishing an attitudes tracker earlier this year. The centre’s early work on public attitudes showed how people tend to focus on negative experiences relating to data use rather than positive ones. I am glad to say that we have had a much more optimistic outlook in this evening’s debate.
The National Data Strategy sets out the steps we will take to rebalance this public perception, from one where only the risks are seen to one where the opportunities of data use are seen as well. It sets out our vision to harness the power of responsible data use to drive growth and improve services, including through AI-driven services. It describes how we will make data usable, accessible and available across the economy, while protecting people’s data rights and businesses’ intellectual property.
My noble friend Lord Holmes of Richmond talked about anonymisation. Privacy-enhancing technologies such as this were noted in the National Data Strategy, and the Centre for Data Ethics and Innovation, which leads the Government’s work to enable trustworthy innovation, is helping to take that forward in a number of ways. This year the centre will continue to ensure trustworthy innovation through a world-first AI assurance road map and will collaborate with the Government of the United States of America on a prize challenge to accelerate the development of a new breed of privacy-enhancing technologies, which enable data use in ways that preserve privacy.
Our approach includes supporting a thriving ecosystem of data intermediaries, including data trusts, which have been mentioned, to enable responsible data-sharing. We are already seeing data trusts being set up; for example, pilots on health data and data for communities are being established by the Data Trusts Initiative, hosted by the University of Cambridge, and further pilots are being led by the Open Data Institute. Just as we must shift the debate on data, we must also improve the public understanding and awareness of AI; this will be critical to driving its adoption throughout the economy. The Office for Artificial Intelligence and the Centre for Data Ethics and Innovation are taking the lead here, undertaking work across government to share best practice on how to communicate issues regarding AI clearly.
Key to promoting public trust in AI is having in place a clear, proportionate governance framework that addresses the unique challenges and opportunities of AI, which brings me to another of the key themes of this evening’s debate: ethics and regulation. The UK has a world-leading regulatory regime and a history of innovation-friendly approaches to regulation. We are committed to making sure that new and emerging technologies are regulated in a way that instils public confidence in them while supporting further innovation. We need to make sure that our regulatory approach keeps pace with new developments in this fast-moving field. That is why, later this year, the Government will publish a White Paper on AI governance, exploring how to govern AI technologies in an innovation-friendly way to deliver the opportunities that AI promises while taking a proportionate approach to risk so that we can protect the public.
We want to make sure that our approach is tailored to context and proportionate to the actual impact on individuals and groups in particular contexts. As noble Lords, including the right reverend Prelate the Bishop of Oxford, have rightly set out, those contexts can be many and varied. But we also want to make sure our approach is coherent so that we can reduce unnecessary complexity or confusion for businesses and the public. We are considering whether there is a need for a set of cross-cutting principles which guide how we approach common issues relating to AI, such as safety, and looking at how to make sure that there are effective mechanisms in place to ensure co-ordination across the regulatory landscape.
The UK has already taken important steps forward with the formation of the Digital Regulation Cooperation Forum, as the noble Lord, Lord Clement-Jones, and others have noted, but we need to consider whether further measures are needed. Finally, the cross-border nature of the international market means that we will continue to collaborate with key partners on the global stage to shape approaches to AI governance and facilitate co-operation on key issues.
My noble friend Lord Holmes of Richmond and the noble Lord, Lord Evans of Weardale, both referred to the data reform Bill and the issues it covers. DCMS has consulted on and put together an ambitious package of reforms to create a new pro-growth regime for data which is trusted by people and businesses. This is a pragmatic approach which allows data-driven businesses to use data responsibly while keeping personal information safe and secure. We will publish our response to that consultation later this spring.
My noble friend also mentioned the impact of AI on jobs and skills. He is right that the debate has moved on in an encouraging and more optimistic way and that we need to address the growing skills gap in AI and data science and keep developing, attracting and training the best and brightest talent in this area. Since the AI sector deal in 2018, the Government have been making concerted efforts to improve the skills pipeline. There has been an increased focus on reskilling and upskilling, so that we can ensure that, where there is a level of displacement, there is redeployment rather than unemployment.
As the noble Lord, Lord Bilimoria, noted with pleasure, the Government worked through the Office for AI and the Office for Students to fund 2,500 postgraduate conversion courses in AI for students from near- and non-STEM backgrounds. That includes 1,000 scholarships for people from underrepresented backgrounds, and these courses are available at universities across the country. Last autumn, the Chancellor of the Exchequer announced that this programme would be bolstered by 2,000 more scholarships, so that many more people across the country can benefit from them. In the Spring Statement, 1,000 more PhD places were announced to complement those already available at 16 centres for doctoral training across the country. We want to build a world-leading digital economy that works for everyone. That means ensuring that as many people as possible can reap the benefits of new technologies. That is why the Government have taken steps to increase the skills pipeline, including introducing more flexible training routes into digital roles.
The noble Lord, Lord St John of Bletso, was right to focus on how the UK contributes to international dialogue on AI. The UK is playing a leading role in international discussions on ethics and regulation, including our work at the Council of Europe, UNESCO and the OECD. We should not forget that the UK was one of the founding members of the Global Partnership on Artificial Intelligence, the first multilateral forum looking specifically at this important area.
We will continue to work with international partners to support the development of rules on the use of AI. We have also taken practical steps to implement some of these high-level principles when delivering public services. In 2020, we worked with the World Economic Forum to develop guidelines for the responsible procurement of AI based on these values, which have since been put into operation through the Crown Commercial Service’s AI marketplace. This service has been renewed, and the Crown Commercial Service is exploring expanding the options available to government buyers. On an international level, this work resulted in a policy tool called “AI procurement in a box”, a framework for like-minded countries to adapt for their own purposes.
I am mindful that Second Reading of the Procurement Bill is taking place in the Chamber as we speak, competing with this debate. That Bill will replace the current process-driven EU regime for public procurement by creating a simpler and more flexible commercial system, but international collaboration and dialogue will continue to be a key part of our work in this area in the years to come.
The noble Lord, Lord Browne of Ladyton, spoke very powerfully about the use of AI in defence. The Government will publish a defence AI strategy this summer, alongside a policy ensuring the ambitious, safe and responsible use of AI in defence, which will include ethical principles based on extensive policy work together with the Centre for Data Ethics and Innovation. The policy will include an updated statement of our position on lethal autonomous weapons systems.
As the noble Lord, Lord Clement-Jones, said, there is no international agreement on the definition of such weapons systems, but the UK continues to contribute actively at the UN Convention on Certain Conventional Weapons, working closely with our international partners, seeking to build norms around their use and positive obligations to demonstrate how degrees of autonomy in weapons systems can be used in accordance with international humanitarian law. The defence AI centre will have a key role in delivering technical standards, including where these can support our implementation of ethical principles. The centre achieved initial operating capability last month and will continue to expand throughout this year, having already established joint military, government and industry multidisciplinary teams. The Centre for Data Ethics and Innovation has, over the past year, been working with the Ministry of Defence to develop ethical principles for the use of AI in defence—as, I should say, it has with the Centre for Connected and Autonomous Vehicles in the important context of self-driving vehicles.
The noble Baroness, Lady Merron, asked about the application of AI in the important sphere of the environment. Over the past two years, the Global Partnership on Artificial Intelligence’s data governance working group has brought together experts from across the world to advance international co-operation and collaboration in areas such as this. The UK’s Office for Artificial Intelligence provided more than £1 million to support two research projects on data trusts and data justice in collaboration with partner institutions including the Alan Turing Institute, the Open Data Institute and the Data Trusts Initiative at Cambridge University. These projects explored using data trusts to support action to protect our climate, as well as expanding understanding of data governance to include considerations of equity and justice.
The insights raised in today’s debate, and in the reports with which it has been concerned, will continue to shape the Government’s thinking as we take forward our strategy on AI. As noble Lords have noted, by most measures the UK is a leader in AI, behind only the United States and China. We are home to one-third of Europe’s AI companies and twice as many as any other European nation. We are also third in the world for AI investment—again, behind the US and China—attracting twice as much venture capital as France and Germany combined. But we are not complacent. We are determined to keep building on our strengths and to maintain this global position. This evening’s debate has provided many rich insights into the further steps we must take to make sure that the UK remains an AI and science superpower. I am very grateful to noble Lords, particularly the noble Lord, Lord Clement-Jones, for instigating it.
My Lords, first I thank noble Lords for having taken part in this debate. We certainly do not lack ambition around the table, so to speak. I think everybody saw the opportunities and the positives, but also saw the risks and challenges. I liked the use by the noble Baroness, Lady Merron, of the word “grappling”. I think we have grappled quite well today with some of the issues and I think the Minister, given what is quite a tricky cross-departmental need to pull everything together, made a very elegant fist of responding to the debate. Of course, inevitably, we want stronger meat in response on almost every occasion.
I am not going to do another wind-up speech, so to speak, but I think it was a very useful opportunity, prompted by the right reverend Prelate, to reflect on humanity. We cannot talk about artificial intelligence without talking about human intelligence. That is the extraordinary thing: the more you talk about what artificial intelligence can do, the more you have to talk about human endeavour and what humans can do. In that context, I congratulate the noble Lords, Lord Holmes and Lord Bilimoria, on their versatility. They both took part in the earlier debate, and it is very interesting to see the commonality between some of the issues raised in the previous debate on digital exclusion—human beings being excluded from opportunity—which arise also in the case of AI. I was very interested to see how, back to back, they managed to deal with all that.
The Minister said a number of things, but I think the trust and confidence aspect is vital. The proof of the pudding will be in the data reform Bill. I may differ slightly on that from the noble Lord, Lord Holmes, who thinks it is a pretty good thing, by the sound of it, but we do not know what it is going to contain. All I will say is that, when Professor Goldacre appeared before the Science and Technology Committee, it was a lesson for us all. He is the chap who has just written the definitive report for the Department of Health on data use in the health area, yet last year he deliberately opted out of the GP request for consent to share data. The leading data scientist in health was not convinced that his data would be safe. We can talk about trusted research environments and all that, but public trust in data use, whether in health or anything else, needs engagement by government and needs far more work.
The thing that frightens a lot of us is that, although we can see all the opportunities, if we do not get it right and do not get permission to use the technology, we cannot deploy it in the way we conceived, whether for the sustainable development goals or for other forms of public benefit in the public service. Provided we get the compliance mechanisms right, we can see the opportunities, but we have to get that public trust on board, not least in the area of lethal autonomous weapons. I think the perception of what the Government are doing in that area is very different from what the Ministry of Defence may think it is doing, particularly if it is developing some splendid principles of which we will all approve, when what matters is what is actually happening on the ground.
I will say no more. I am sure we will have further debates on this, and I hope that the Minister has enjoyed having to brief himself for this debate, because it is very much part of the department’s responsibilities.