Wednesday 25th May 2022

Grand Committee
Motion to Take Note
18:02
Moved by
Lord Clement-Jones

That the Grand Committee takes note of the Report from the Liaison Committee AI in the UK: No Room for Complacency (7th Report, Session 2019–21, HL Paper 196).

Lord Clement-Jones (LD)

My Lords, the Liaison Committee report No Room for Complacency was published in December 2020, as a follow-up to our AI Select Committee report, AI in the UK: Ready, Willing and Able?, published in April 2018. Throughout both inquiries and right up until today, the pace of development here and abroad in AI technology, and the discussion of AI governance and regulation, have been extremely fast moving. Today, just as then, I know that I am attempting to hit a moving target. Just take, for instance, the announcement a couple of weeks ago about the new Gato—the multipurpose AI which can do 604 functions—or, perhaps less optimistically, the Clearview fine. Both have relevance to what we have to say today.

First, however, I say a big thank you to the then Liaison Committee for the new procedure which allowed our follow-up report, to the current Lord Speaker, Lord McFall, in particular, and to those members of our original committee who took part. I give special thanks to the Liaison Committee team of Philippa Tudor, Michael Collon, Lucy Molloy and Heather Fuller, and to Luke Hussey and Hannah Murdoch from our original committee team, who more than helped bring the band, and our messages, back together.

So what were the main conclusions of our follow-up report? What was the government response, and where are we now? I shall tackle this under five main headings. The first is trust and understanding. The adoption of AI has made huge strides since we started our first report, but the trust issue still looms large. Nearly all our witnesses in the follow-up inquiry said that engagement continued to be essential across business and society, in particular to ensure that there is greater understanding of how data is used in AI, and that government must lead the way. We said that the development of data trusts must speed up. They were the brainchild of the Hall-Pesenti report back in 2017, as a mechanism for giving assurance about the use and sharing of personal data, but we now needed to focus on developing the legal and ethical frameworks. The Government acknowledged that the AI Council’s road map took the same view and pointed to the ODI work and the national data strategy. However, there has been too little recent progress on data trusts. The ODI has done some good work, together with the Ada Lovelace Institute, but this needs taking forward as a matter of urgency, particularly guidance on the legal structures. If anything, the proposals in Data: A New Direction, presaging a new data reform Bill in the autumn, which would water down data protection, are a backward step.

More needs to be done generally on digital understanding. The digital literacy strategy needs to be much broader than digital media, and a strong digital competition framework has yet to be put in place. Public trust has not been helped by confusion and poor communication about the use of data during the pandemic, and initiatives such as the Government’s single identifier project, together with automated decision-making and live facial recognition, are a real cause for concern that we are approaching an all-seeing state.

My second heading is ethics and regulation. One of the main areas of focus of our committee throughout has been the need to develop an appropriate ethical framework for the development and application of AI, and we were early advocates for international agreement on the principles to be adopted. Back in 2018, the committee took the view that blanket regulation would be inappropriate, and we recommended an approach to identify gaps in the regulatory framework where existing regulation might not be adequate. We also placed emphasis on the importance of regulators having the necessary expertise.

In our follow-up report, we took the view that it was now high time to move on to agreement on the mechanisms on how to instil what are now commonly accepted ethical principles—I pay tribute to the right reverend Prelate for coming up with the idea in the first place—and to establish national standards for AI development and AI use and application. We referred to the work that was being undertaken by the EU and the Council of Europe, with their risk-based approaches, and also made recommendations focused on development of expertise and better understanding of risk of AI systems by regulators. We highlighted an important advisory role for the Centre for Data Ethics and Innovation and urged that it be placed on a statutory footing.

We welcomed the formation of the Digital Regulation Cooperation Forum. It is clear that all the regulators involved (I apologise for the initials in advance)—the ICO, the CMA, Ofcom and the FCA—have made great strides in building a centre of excellence in AI and algorithm audit and making this public. However, despite the publication of the National AI Strategy and its commitment to trustworthy AI, we still await the Government’s proposals on AI governance in the forthcoming White Paper.

It seems that the debate within government about whether to have a horizontal or vertical sectoral framework for regulation still continues. However, it seems clear to me, particularly for accountability and transparency, that some horizontality across government, business and society is needed to embed the OECD principles. At the very least, we need to be mindful that the extraterritoriality of the EU AI Act means a level of regulatory conformity will be required and that there is a strong need for standards of impact, as well as risk assessment, audit and monitoring, to be enshrined in regulation to ensure, as techUK urges, that we consider the entire AI lifecycle.

We need to consider particularly what regulation is appropriate for those applications which are genuinely high risk and high impact. I hope that, through the recently created AI standards hub, the Alan Turing Institute will take this forward at pace. All this has been emphasised by the debate on the deployment of live facial recognition technology, the use of biometrics in policing and schools, and the use of AI in criminal justice, recently examined by our own Justice and Home Affairs Committee.

My third heading is government co-ordination and strategy. Throughout our reports we have stressed the need for co-ordination between a very wide range of bodies, including the Office for Artificial Intelligence, the AI Council, the CDEI and the Alan Turing Institute. On our follow-up inquiry, we still believed that more should be done to ensure that this was effective, so we recommended a Cabinet committee which would commission and approve a five-year national AI strategy, as did the AI road map.

In response, the Government did not agree to create a committee but they did commit to the publication of a cross-government national AI strategy. I pay tribute to the Office for AI, in particular its outgoing director Sana Khareghani, for its work on this. The objectives of the strategy are absolutely spot on, and I look forward to seeing the national AI strategy action plan, which it seems will show how cross-government engagement is fostered. However, the report on AI and public standards by the Committee on Standards in Public Life—I am delighted that the noble Lord, Lord Evans, will speak today—made the deficiencies in common standards in the public sector clear.

Subsequently, we now have an ethics, transparency and accountability framework for automated decision-making in the public sector and, more recently, the CDDO-CDEI public sector algorithmic transparency standard, but there appears to be no central and local government compliance mechanism and little transparency in the form of a public register, and the Home Office still appears to be a law unto itself. We have AI procurement guidelines based on the World Economic Forum model but nothing relevant to them in the Procurement Bill, which is being debated as we speak. I believe we still need a government mechanism for co-ordination and compliance at the highest level.

The fourth heading is impact on jobs and skills. Opinions differ over the potential impact of AI but, whatever the chosen prognosis, we said there was little evidence that the Government had taken a really strategic view about this issue and the pressing need for digital upskilling and reskilling. Although the Government agreed that this was critical and cited a number of initiatives, I am not convinced that the pace, scale and ambition of government action really matches the challenge facing many people working in the UK.

The Skills and Post-16 Education Act, with its introduction of a lifelong loan entitlement, is a step in the right direction, and I welcome the renewed emphasis on further education and the new institutes of technology. The Government refer to AI apprenticeships, but apprenticeship levy reform is long overdue. The work of local digital skills partnerships and digital boot camps is welcome, but they are greatly under-resourced and only a patchwork. The recent Youth Unemployment Select Committee report Skills for Every Young Person noted the severe lack of digital skills and the need to embed digital education in the curriculum, as did the AI road map. Alongside this, we shared the priority of the AI Council road map for more diversity and inclusion in the AI workforce and wanted to see more progress.

At the less rarefied end, although there are many useful initiatives on foot, not least from techUK and Global Tech Advocates, it is imperative that the Government move much more swiftly and strategically. The All-Party Parliamentary Group on Diversity and Inclusion in STEM recommended in a recent report a STEM diversity decade of action. As mentioned earlier, broader digital literacy is crucial too. We need to learn how to live and work alongside AI.

The fifth heading is the UK as a world leader. It was clear to us that the UK needs to remain attractive to international research talent, and we welcomed the Global Partnership on AI initiative. The Government in response cited the new fast-track visa, but there are still strong concerns about the availability of research visas for entrance to university research programmes. The failure to agree and lack of access to EU Horizon research funding could have a huge impact on our ability to punch our weight internationally.

How the national AI strategy is delivered in terms of increased R&D and innovation funding will be highly significant. Of course, who knows what ARIA may deliver? In my view, key weaknesses remain in the commercialisation and translation of AI R&D. The recent debate on the Science and Technology Committee’s report on catapults reminded us that this aspect is still a work in progress.

Recent Cambridge round tables have confirmed to me that we have a strong R&D base and a growing number of potentially successful spin-outs from universities, with the help of their dedicated investment funds, but when it comes to broader venture capital culture and investment in the later rounds of funding, we are not yet on a par with Silicon Valley in terms of risk appetite. For AI investment, we should now consider something akin to the dedicated film tax credit which has been so successful to date.

Finally, we had, and have, the vexed question of lethal autonomous weapons, which we raised in the original Select Committee report and in the follow-up, particularly in the light of the announcement at the time of the creation of the autonomy development centre in the MoD. Professor Stuart Russell, who has long campaigned on this subject, cogently raised the limitations of these weapons in his second Reith Lecture. In both our reports we said that one of the big disappointments was the lack of definition of “autonomous weapons”. That position subsequently changed, and we were told in the Government’s response to the follow-up report that NATO had agreed a definition of “autonomous” and “automated”, but there is still no comprehensive definition of lethal autonomous weapons, despite evidence that they have clearly already been deployed in theatres such as Libya, and the UK has firmly set its face against LAWS limitation in international fora such as the CCW.

For a short report, our follow-up report covered a great deal of ground, which I have tried to cover at some speed today. AI lies at the intersection of computer science, moral philosophy, industrial education and regulatory policy, which makes how we approach the risks and opportunities inherent in this technology vital and difficult. The Government are engaged in a great deal of activity. The question, as ever, is whether it is focused enough and whether the objectives, such as achieving trustworthy AI and digital upskilling, are going to be achieved through the actions taken so far. The evidence of success is clearly mixed. Certainly there is still no room for complacency. I very much look forward to hearing the debate today and to what the Minister has to say in response. I beg to move.

18:17
Lord Holmes of Richmond (Con)

My Lords, what a pleasure it is to follow the noble Lord, Lord Clement-Jones. It was a pleasure to serve under his chairmanship on the original committee. I echo all his thanks to all the committee staff who did such great work getting us to produce our original report. I shall pick up a number of the themes he touched on, but I fear I cannot match his eloquence and nobody around the table can in any sense match his speed. In many ways, he has potentially passed the Turing test in his opening remarks.

I declare my technology interests as set out in the register. In many ways, the narrative can fall into quite a negative and fearful approach, which goes something like this: the bots are coming, our jobs are going, we are all off to hell and we are not even sure if there is a handcart. I do not think that was ever the case, and it is positive that the debate has moved on from the imminent unemployment of huge swathes of society to this—and I think it is just this in terms of jobs. The real clear and present danger for the UK is not that there will not be jobs for us all to do but that we will be unprepared or underprepared for those new jobs as and when they come, and they are already coming at speed this very day. Does the Minister agree that all the focus needs to be on how we drive at speed in real time the skills to enable all the talent coming through to be able to get all those jobs and have fulfilling careers in AI?

In many ways this debate begins and ends with everything around data. AI is nothing without data. Data is the beginning and the end of the discussion. It is probably right, and it shows the foresight of the noble Lord, Lord Clement-Jones, in having a debate today because it is time to wish many happy returns—not to the noble Lord but to the GDPR. Who would have thought that it is already four years since 25 May 2018?

In many ways, it has not been unalloyed joy and success. It is probably over-prescriptive, has not necessarily given more protection to citizens across the European community, and certainly has not been adopted in other jurisdictions around the world. I therefore ask my noble friend the Minister: what plans are there in the upcoming data reform Bill not to have such a prescriptive approach? What is the Government’s philosophy in terms of balancing all the competing needs and philosophical underpinnings of data when that Bill comes before your Lordships’ House?

Privacy is incredibly important. We see just this week that an NHS England AI project has been shelved because of privacy concerns. It takes us back to a similar situation at the Royal Free—another AI programme shelved. Could these programmes have been more effectively delivered if there had been more consideration and understanding of the use of data and the crucial point that it is our data, not big tech’s? It is our data, and we need the ability to understand that and operate with it as a central tenet. Could these projects have been more successful? How do we understand real anonymisation? Is it possible in reality, or should we look hard at the issue around the curse of dimensionality? What is the Government’s view as to how true anonymisation occurs when you have more than one credential? When you get to multiple dimensions, anonymisation of the data is extraordinarily difficult to achieve.
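
The point about dimensions can be made concrete with a minimal sketch, using entirely synthetic data and hypothetical attributes (not drawn from any real dataset): each extra attribute multiplies the number of possible combinations, so the share of records that are unique, and hence potentially re-identifiable, climbs sharply.

```python
# A minimal, illustrative sketch of why anonymisation degrades as more
# attributes ("credentials") are combined. All records are synthetic and
# the attributes are hypothetical; no real dataset is involved.
import random
from collections import Counter

random.seed(0)

def synthetic_record():
    return (
        random.choice(["F", "M"]),     # recorded gender (2 values)
        random.randint(1940, 2004),    # birth year (65 values)
        f"OX{random.randint(1, 28)}",  # postcode district (28 values)
        random.randint(1, 365),        # day of birth within the year
    )

records = [synthetic_record() for _ in range(10_000)]

# Count how many records are unique when 1, 2, 3 or 4 attributes are known.
for dims in range(1, 5):
    cells = Counter(record[:dims] for record in records)
    unique = sum(1 for count in cells.values() if count == 1)
    print(f"{dims} attribute(s) known: {unique:5d} of {len(records)} records are unique")
```

On a typical run, knowing one or two attributes identifies almost nobody, while knowing all four makes the overwhelming majority of records unique, which is precisely the point about multiple dimensions.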

That leads us into the whole area of bias. Probably one of the crassest examples of AI deployment was the soap dispenser in the United States—why indeed we needed AI to be put into a soap dispenser we can discuss another time—which would dispense soap only to a white hand. How absolutely appalling, how atrocious, but how facile that that can occur with something called artificial intelligence. You can train a system, but it can do only what the datasets it has been trained on allow: train it on white hands and you get white-hand soap dispensing. It is absolutely appalling. I therefore ask my noble friend the Minister: have the Government got a grip across all the areas and ways in which bias kicks in? There are so many elements of bias in what we could call “non-AI” society; are the Government where they need to be in considering bias in this AI environment?
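
That failure mode is straightforward to reproduce in miniature. The sketch below uses invented numbers and a deliberately naive calibration rule; it is not how any real dispenser works, but it shows how a detector tuned only on one group's data fails silently on another:

```python
# Illustrative sketch of dataset bias: a sensor threshold "trained"
# (calibrated) only on high-reflectance hands fails on darker hands.
# All numbers are invented for illustration.
import random
import statistics

random.seed(1)

def reflectance_reading(mean):
    """Simulated sensor reading for a hand placed under the dispenser."""
    return random.gauss(mean, 5.0)

# Biased training set: readings collected only from lighter-skinned hands.
training = [reflectance_reading(mean=80) for _ in range(500)]

# Naive calibration: trigger whenever a reading resembles the training data.
threshold = statistics.mean(training) - 3 * statistics.stdev(training)

def dispenses(reading):
    return reading > threshold

lighter_hands = [reflectance_reading(mean=80) for _ in range(1_000)]
darker_hands = [reflectance_reading(mean=40) for _ in range(1_000)]

print("dispense rate, lighter hands:", sum(map(dispenses, lighter_hands)) / 1_000)
print("dispense rate, darker hands: ", sum(map(dispenses, darker_hands)) / 1_000)

# The failure is entirely a property of the unrepresentative training data:
# recalibrating on readings from both groups moves the threshold and fixes it.
```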

Moving on, and building on how we can all best operate with our data, I believe that we urgently need to move to a system of digital ID in the UK. The best model is to build on the principles of self-sovereign distributed ID. Does my noble friend agree, and can he update the Grand Committee on his department’s work on digital ID? So much of the opportunity, and indeed the protection to enable opportunity, in this space around AI comes down to whether we can have an effective, interoperable system of digital ID.
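
For readers unfamiliar with the term, a minimal sketch of the self-sovereign idea follows. The roles and the credential payload here are hypothetical, and real schemes (for example, W3C verifiable credentials) add revocation, selective disclosure and identifier resolution on top; the essential property shown is that the holder controls their own identifier and a verifier needs no central register:

```python
# Minimal sketch of a self-sovereign credential flow using Ed25519
# signatures (requires the "cryptography" package). The roles and the
# claim are hypothetical; this is an illustration, not a real scheme.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# An issuer (say, a bank that has performed an identity check) holds a
# well-known signing key.
issuer_key = Ed25519PrivateKey.generate()
issuer_pub = issuer_key.public_key()

# The holder generates and controls their own identifier; there is no
# central register of identities.
holder_key = Ed25519PrivateKey.generate()
holder_id = holder_key.public_key().public_bytes(
    Encoding.Raw, PublicFormat.Raw
).hex()

# The issuer signs a claim about the holder and hands it back to them.
credential = json.dumps({"subject": holder_id, "claim": "over_18"}).encode()
signature = issuer_key.sign(credential)

# Any verifier with the issuer's public key can check the credential
# offline, without calling back to the issuer or any central database.
try:
    issuer_pub.verify(signature, credential)
    print("credential accepted:", json.loads(credential))
except InvalidSignature:
    print("credential rejected")
```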

Building on that, I believe that we need far greater public debate and public engagement around AI. It is not something that is “other” to people’s experience; it is already in every community and impacting people’s lives, whether they know it or want that to be the case. We saw how public engagement can work effectively and well with Baroness Warnock’s stunning commission into IVF decades ago. What could be more terrifying than human life made in a test tube? Why, both at the time and decades later, is it seen as such a positive force in our society? It is because of the Warnock commission and that public engagement. We can compare that with GM foods. I wave no flag for or against GM foods; I simply say that the public was not engaged in that debate. What are the Government’s plans to do more to engage the public at every level with this?

Allied to that, what are the Government’s plans around data and digital literacy, right from the earliest year at school, to ensure that we have citizens coming through who can operate safely, effectively and productively in this space? If we can get to that point, potentially we could enable every citizen to take advantage of AI rather than have AI take advantage of us. It does not need to be an extractive exercise or to feel alienating. It does not need to be put just to SEO and marketing and cardboard boxes turning up on our doorstep—we have forgotten what was even in the box, and the size of the box will not give us a clue because the smallest thing we order is always likely to come in the largest cardboard box. If we can take advantage of all the opportunities of AI, what social, economic or psychological potential lies at our fingertips.

What is AI? To come to that at the end rather than beginning of my speech seems odd. Is it statistics on steroids? Perhaps it is a bit more than that. AI, in essence, is just the latest tools—yes, incredibly powerful tools, but the latest tools in our human hands. It is down to us to connect, collaborate and co-create for the public good and common good, and for the economic, social and psychological good, for our communities, cities and our country. If we all get behind that—and it is in our hands, our heads and our hearts—perhaps, just perhaps, we can build a society fit for the title “civilised”.

18:26
Lord Browne of Ladyton (Lab)

My Lords, it is a significant pleasure to follow the noble Lord, Lord Holmes. I admire and envy his knowledge of the issue, but mostly I admire and envy his ability to communicate about these complex issues in a way that is accessible and, on occasions, entertaining. A couple of times during the course of what he said, I thought, “I wish I’d said that”, knowing full well that at some time in future I will, which is the highest compliment I can pay him.

As was specifically spelled out in the remit of the Select Committee on Artificial Intelligence, the issues that we are debating today have significant economic, security, ethical and social implications. Thanks to the work of that committee and, to a large degree, the expertise and the leadership of the noble Lord, Lord Clement-Jones, the committee’s report is evidence that it fully met the challenge of the remit. Since its publication in April 2018—and I know this from lots of volunteered opinions that I have received since then—the report has gained a worldwide reputation for excellence. It is proper, therefore, that this report should be the first to which the new procedure put in place by the Liaison Committee to follow up on committee recommendations has been applied.

I wish to address the issue of policy on autonomous weapons systems in my remarks. I think that it is known throughout your Lordships’ House that I have prejudices about this issue—but I think that they are informed prejudices, so I share them at any opportunity that I get. The original report, as the noble Lord, Lord Clement-Jones, said, referred to lethal autonomous weapons and particularly to the challenge of the definition, which continues. But that was about as far as the committee went. As I recollect, this weaponry was not the issue that gave the committee the most concern, but in any event it did not have the capacity to address it, saying that it deserved an inquiry of its own. Unfortunately, that inquiry has not yet taken place, but it may do soon.

The report that we are debating—which, in paragraph 83, comments on the welcome establishment of the Autonomy Development Centre, announced by the Prime Minister on 19 November 2020 and described as a new centre dedicated to AI, to accelerate the research, development, testing, integration and deployment of world-leading artificial intelligence and autonomous systems—highlighted that the work of that centre will be “inhibited” owing to the lack of alignment of the UK’s definition of autonomous weapons with the definitions used by international partners. The government response, while agreeing on the importance of ensuring that official definitions do not undermine our arguments or diverge from those of our allies, went further by acknowledging that the various definitions relating to autonomous systems are challenging, and set out at length a comparison of them.

Further, we are told that the Ministry of Defence is preparing to publish a new defence AI strategy that will allow the UK to participate in international debates and act as a leader in the space, and we are told that the definitions will be continually reviewed as part of that. It is hard not to conclude that this response alone justifies the warning of the danger of “complacency” deployed in the title of the report.

On the AI strategy, on 18 May the ministerial response to my contribution to the Queen’s Speech debate was, in its entirety, an assurance that the AI strategy would be published before the Summer Recess. We will wait and see. I look forward to that, but there is today an urgent need for strategic leadership by the Government and for scrutiny by Parliament as AI plays an increasing role in the changing landscape of war. Rapid advancements in technology have put us on the brink of a new generation of warfare where AI plays an instrumental role in the critical functions of weapons systems.

In the Ukraine war, in April, a senior Defense Department official said that the Pentagon is quietly using AI and machine-learning tools to analyse vast amounts of data, generate useful battlefield intelligence and learn about Russian tactics and strategy. Just how much the US is passing to Ukraine is a matter for conjecture, which I will not engage in; I am not qualified to do so anyway. A powerful Russian drone with AI capabilities has been spotted in Ukraine. Meanwhile, Ukraine has itself employed the use of controversial facial recognition technology. Vice Prime Minister Fedorov told Reuters that it had been using Clearview AI—software that uses facial recognition—to discover the social media profiles of deceased Russian soldiers, which authorities then use to notify their relatives and offer arrangements for their bodies to be recovered. If the technology can be used to identify live as well as dead enemy soldiers, it could also be incorporated into systems that use automated decision-making to direct lethal force. That is not a remote possibility; last year the UN reported that an autonomous drone had killed people in Libya in 2020. There are unconfirmed reports of autonomous weapons already being used in Ukraine, although I do not think it is helpful to repeat some of that because most of it is speculation.

We are seeing a rapid trend towards increasing autonomy in weapons systems. AI and computational methods are allowing machines to make more and more decisions themselves. We urgently need UK leadership to establish, domestically and internationally, when it is ethically and legally appropriate to delegate to a machine autonomous decision-making about when to take an individual’s life.

The UK Government, like the US, see AI as playing an important role in the future of warfighting. The UK’s 2021 Integrated Review of Security, Defence, Development and Foreign Policy sets out the Government’s priority of

“identifying, funding, developing and deploying new technologies and capabilities faster than our potential adversaries”,

presenting AI and other scientific advances as “battle-winning technologies”—in what in my view is the unhelpful context of a race. My fear of this race is that at some point the humans will think they have gone through the line but the machines will carry on.

In the absence of an international ban, it is inevitable that eventually these weapons will be used against UK citizens or soldiers. Advocating international regulation would not be abandoning the military potential of new technology, as is often argued. International regulation on AWS is needed to give our industry guidance to be a sci-tech superpower without undermining our security and values. Only this week, the leaders of the German engineering industry called for the EU to create specific law and tighter regulation on autonomous and dual-use weapons, as they need to know where the line is and cannot be expected to draw it themselves. They have stated:

“Imprecise regulations would do damage to the export control environment as a whole.”


Further, systems that operate outside human control do not offer genuine or sustainable advantage in the achievement of our national security and foreign policy goals. Weapons that are not aligned with our values cannot be effectively used to defend our values. We should not be asking our honourable service personnel to utilise immoral weapons—no bad weapons for good soldiers.

The problematic nature of non-human-centred decision-making was demonstrated dramatically when the faulty Horizon software was used to prosecute 900-plus sub-postmasters. Let me explain. In 1999, totally coincidentally at the same time as the Horizon software began to be rolled out in sub-post offices, a presumption was introduced into the law on how courts should consider electronic evidence. The new rule followed a Law Commission recommendation for courts to presume that a computer system has operated correctly unless there is explicit evidence to the contrary. This legal presumption replaced a section of the Police and Criminal Evidence Act 1984, PACE, which stated that computer evidence should be subject to proof that it was in fact operating properly.

The new rule meant that data from the Horizon system was presumed accurate. It made it easier for the Post Office, through its private prosecution powers, to convict sub-postmasters of financial crimes when there were accounting shortfalls based on data from the Horizon system. Rightly, the nation has felt moral outrage: this is, in scale, the largest miscarriage of justice in this country’s history, and we have a judiciary which does not understand this technology, so there was nothing in the system that could counteract this rule. Some sub-postmasters served prison sentences, hundreds lost their livelihoods and there was at least one suicide linked to the scandal. With lethal autonomous weapons systems, we are talking about a machine deciding to take people’s lives away. We cannot have a presumption of infallibility for the decisions of lethal machines: in fact, we must have the opposite presumption, or meaningful human control.

The ongoing war in Ukraine is a daily reminder of the tragic human consequences of ongoing conflict. With the use of lethal autonomous weapons systems in future conflicts, a lack of clear accountability for decisions made poses serious complications and challenges for post-conflict resolution and peacebuilding. The way in which these weapons might be used and the human rights challenges they present are novel and unknown. The existing laws of war were not designed to cope with such situations, any more than our laws of evidence were designed to cope with the development of computers and, on their own, are not enough to control the use of future autonomous weapons systems. Even more worrying, once we make the development from AI to AGI, these systems can potentially develop at a speed that we humans cannot physically keep up with.

Previously in your Lordships’ House, I have referred to a “Stories of Our Times” podcast entitled “The Rise of Killer Robots: The Future of Modern Warfare?”. Both General Sir Richard Barrons, former Commander of the UK Joint Forces Command, and General Sir Nick Carter, former Chief of the Defence Staff, contributed to what, in my view, should be compulsory listening for Members of Parliament, particularly those who hold or aspire to hold ministerial office. General Sir Richard Barrons says:

“Artificial intelligence is potentially more dangerous than nuclear weapons.”


If that is a proper assessment of the potential of these weapon systems, there can be no more compelling reason for their strict regulation and for them to be banned in lethal autonomous mode. It is essential that all of us, whether Ministers or not, who share responsibility for the weapons systems procured and deployed for use by our Armed Forces, fully understand the implications and risks that come with the weapons systems and understand exactly what their capabilities are and, more importantly, what they may become.

In my view, and I cannot overstate this, this is the most important issue for the future defence of our country, future strategic stability and potentially peace: that those who take responsibility for these weapons systems are civilians, that they are elected, and that they know and understand them. Anyone who listens to the podcast will dramatically realise why, because already there are conversations going on among military personnel that demand the informed oversight of politicians. The development of LAWS is not inevitable, and an international legal instrument would play a major role in controlling their use. Parliament, especially the House of Commons Defence Committee, needs to show more leadership in this area. That committee could inquire into what military AI capabilities the Government wish to acquire and how these will be used, especially in the long term. An important part of such an investigation would be consideration of whether AI capabilities could be developed and regulated so that they are used by armed forces in an ethically acceptable way.

As I have already referred to, the integrated review pledged to

“publish a defence AI strategy and invest in a new centre to accelerate adoption of this technology”.

Unfortunately, the Government’s delay in publishing the defence AI strategy has cast doubt on the goal, stated in the Integrated Review of Security, Defence, Development and Foreign Policy, that the UK will become a “science and technology superpower”. The technology is already outpacing us, and presently the UK is unprepared to deal with the ethical, legal and practical challenges presented by autonomous weapons systems. Will that change with the publication of the strategy and the establishment of the autonomy development centre? Perhaps the Minister can tell us.

18:40
Lord Evans of Weardale (CB)

My Lords, I draw attention to my entry in the register of interests as an adviser to Luminance Technologies Ltd and to Darktrace plc, both of which use AI to solve business problems.

I welcome the opportunity to follow up the excellent 2018 report from the Select Committee on Artificial Intelligence. In 2020 the Committee on Standards in Public Life, which I chair, published a report, Artificial Intelligence and Public Standards. We benefited considerably from the work that had gone into the earlier report and from the advice and encouragement of the noble Lord, Lord Clement-Jones, for which I am very grateful.

It is most important that there should be a wide-ranging and well-informed public debate on the development and deployment of AI. It has the potential to bring enormous public benefits but it comes with potential risks. Media commentary on this subject demonstrates that by swinging wildly between boosterism on the one hand and tales of the apocalypse on the other. Balanced and well-informed debate is essential if we are to navigate the future successfully.

The UK remains well-positioned to contribute to and benefit from the development of AI. I have been impressed by the quality of the work done in government in some areas on these underlying ethical challenges. A good example was the publication last year of GCHQ’s AI and data ethics framework—a sign of a forward-looking and reflective approach to ethical challenges, in a part of government that a generation ago would have remained hidden from public view.

The view of my committee was that there was no reason in principle why AI should not both increase the efficiency of the public service and help to maintain high public standards, but that, in order to do so, the risks had to be managed effectively and proper regulation put in place; otherwise, public trust could be undermined and, consequently, the potential benefits of AI to public service would not be realised. The Liaison Committee report gives me some encouragement about the Government’s direction of travel on this, but the pace of change will not slow and continuing attention will be required to keep the policy up to date.

Specifically, I welcome The Roadmap to an Effective AI Assurance Ecosystem by the CDEI, which seems to me, admittedly as an interested layman rather than a technologist, to provide realistic and nuanced guidance on assurance in this area—and it is one where effective independent assurance will be essential. I therefore ask the Minister how confident he is that this guidance will reach and influence those offering assurance services to the users of AI. I welcome the consultation by DCMS on potential reforms to the data protection framework, which may need to be adjusted as advances in technology create novel challenges. I look forward to seeing the outcome of the consultation before too long.

The Government’s AI strategy suggests that further consideration will be given to the shape of regulation of AI, with proposals to be published later this year, specifically considering whether we would do better to have a more centralised regulatory model or one that continues to place the responsibility for AI regulation on the sectoral regulators. Our report concluded that a dispersed, vertical model was likely in most areas to be preferable, since AI was likely to become embedded in all areas of the economy in due course and needed to be considered as part of the normal operating model of specific industries and sectors. I remain of that view but look forward to seeing the Government’s proposals on the issue in due course.

One area where we felt that improvement was needed was in using public procurement as a policy lever in respect of AI. The public sector is an increasingly important buyer of AI-related services and products. There is the potential to use that spending power to encourage the industry to develop capabilities that make AI-assisted decision-making more explicable, which is sometimes a problem at present. The evidence that we received suggested that that was not being used by government, at least as recently as 2020. I am not sure that we are doing this as well as we should and would therefore welcome the Minister’s observations on this point.

18:44
The Lord Bishop of Oxford

My Lords, it is a pleasure to follow the noble Lord, Lord Evans, and thank him in this context for his report, which I found extremely helpful when it was published and subsequently. It has been a privilege to engage with the questions around AI over the last five years through the original AI Select Committee so ably chaired by the noble Lord, Lord Clement-Jones, in the Liaison Committee and as a founding board member for three years of the Centre for Data Ethics and Innovation. I thank the noble Lord for his masterly introduction today and other noble Lords for their contributions.

There has been a great deal of investment, thought and reflection regarding the ethics of artificial intelligence over the last five years in government, the National Health Service, the CDEI and elsewhere—in universities, with several new centres emerging, including in the universities of Oxford and Oxford Brookes, and by the Church and faith communities. Special mention should be made of the Rome Call for AI Ethics, signed by Pope Francis, Microsoft, IBM and others at the Vatican in February 2020, and its six principles of transparency, inclusion, accountability, impartiality, reliability and security. The most reverend Primate the Archbishop of Canterbury has led the formation of a new Anglican Communion Science Commission, drawing together senior scientists and Church leaders across the globe to explore, among other things, the impact of new technologies.

Despite all this endeavour, there is in this part of the AI landscape no room for complacency. The technology is developing rapidly and its use for the most part is ahead of public understanding. AI creates enormous imbalances of power with inherent risks, and the moral and ethical dilemmas are complex. We do not need to invent new ethics, but we need to develop and apply our common ethical frameworks to rapidly developing technologies and new contexts. The original AI report suggested five overarching principles for an AI code. It seems appropriate in the Moses Room to say that there were originally 10 commandments, but they were wisely whittled down by the committee. They are not perfect, in hindsight, but they are worth revisiting five years on as a frame for our debate.

The first is that artificial intelligence should be developed for the common good and benefit of humanity; as the noble Lord, Lord Holmes, eloquently said, the debate often slips straight into the harms and ignores the good. This principle is not self-evident and needs to be restated. AI brings enormous benefits in medicine, research, productivity and many other areas. The role of government must be to ensure that these benefits are to the common good—for the many, not the few. Government, not big tech, must lead. There must be a fair distribution of the wealth that is generated, a fair sharing of power through good governance and fair access to information. This simply will not happen without national and international regulation and investment.

The second principle is that artificial intelligence should operate on principles of intelligibility and fairness. This is much easier to say than to put into practice. AI is now being deployed, or could be, in deeply sensitive areas of our lives: decisions about probation, sentencing, employment, personal loans, social care—including of children—predictive policing, the outcomes of examinations and the distribution of resources. The algorithms deployed in the private and public sphere need to be tested against the criteria of bias and transparency. The governance needs to be robust. I am sure that an individualised, contextualised approach in each field is the right way forward, but government has a key co-ordinating role. As the noble Lord, Lord Clement-Jones, said, we do not yet have that robust co-ordinating body.

Thirdly, artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities. As a society, we remain careless of our data. Professor Shoshana Zuboff has exposed the risks of surveillance capitalism, and Frances Haugen, formerly of Meta, has exposed the way personal data is open to exploitation by big tech. Evidence was presented to the online safety scrutiny committee of the effects on children and adolescents of 24/7 exposure to social media. The Online Safety Bill is a very welcome and major step forward, but new regulation and continual vigilance will remain essential.

Fourthly, all citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence. It seems to me that of these five areas, the Government have been weakest here. A much greater investment is needed by the Department for Education and across government to educate society on the nature and deployment of AI, and on its benefits and risks. Parents need help to support children growing up in a digital world. Workers need to know their rights in terms of the digital economy, while fresh legislation will be needed to promote good work. There needs to be even better access to new skills and training. We need to strive as a society for even greater inclusion. How do the Government propose to offer fresh leadership in this area?

Finally, the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence, as others have said. This final point highlights a major piece of unfinished business in both reports: engagement with the challenging and difficult questions of lethal autonomous weapons systems. The technology and capability to deploy AI in warfare is developing all the time. The time has come for a United Nations treaty to limit the deployment of killer robots of all kinds. This Government and Parliament, as the noble Lord, Lord Browne, eloquently said, urgently need to engage with this area and, I hope, take a leading role in the governance of research and development.

AI can and has brought many benefits, as well as many risks. There is great openness and willingness on the part of many working in the field to engage with the humanities, philosophers and the faith communities. There is a common understanding that the knowledge brought to us by science needs to be deployed with wisdom and humility for the common good. AI will continue to raise sharp questions of what it means to be human, and to build a society and a world where all can flourish. As many have pointed out, even the very best examples of AI as yet come nowhere near the complexity and wonder of the human mind and person. We have been given immense power to create but we are ourselves, in the words of the psalmist, fearfully and wonderfully created.

18:53
Lord Bilimoria (CB)

My Lords, the report Growing the Artificial Intelligence Industry in the UK was published in October 2017. It started off by saying:

“We have a choice. The UK could stay among the world leaders in AI in the future, or allow other countries to dominate.”


It went on to say that the increased use of AI could

“bring major social and economic benefits to the UK. With AI, computers can analyse and learn from information at higher accuracy and speed than humans can. AI offers massive gains in efficiency and performance to most or all industry sectors, from drug discovery to logistics. AI is software that can be integrated into existing processes, improving them, scaling them, and reducing their costs, by making or suggesting more accurate decisions through better use of information.”

It estimated at that time that AI could add £630 billion to the UK economy by 2035.

Even at that stage, the UK had an exceptional record in key AI research. We should be proud of that, but the report also highlighted the importance of inward investment. We as a country need to remain continually attractive to inward investment and be a magnet for it. We have traditionally been the second or third-largest recipient of inward investment. But will that continue to be the case when we have, for example, the highest tax burden in 71 years?

AI of course has great potential for increasing productivity; it helps our firms and people use resources more efficiently and it can help familiar tasks to be done in a more efficient manner. It enables entirely new business models and new approaches to old problems. It can help companies and individual employees be more productive. We all know its benefits. It can reduce the burden of searching large datasets. I could give the Committee example after example of how artificial intelligence can complement or exceed our abilities, of course taking into account what the right reverend Prelate the Bishop of Oxford so sensibly just said. It can work alongside us and even teach us. It creates new opportunities for creativity and innovation and shows us new ways to think.

In the Liaison Committee report on artificial intelligence policy in the UK, which is terrific, the Government state that artificial intelligence has

“huge potential to rewrite the rules of entire industries, drive substantial economic growth and transform all areas of life”

and that their ambition is for the UK to be an “AI superpower” that leads the world in innovation and development. The committee was first appointed in 2017. At that stage, it mentioned that the number of visas for people with valuable skills in AI-related areas should be increased. Now that we have the points-based system, will the Minister say whether it is delivering what the committee sought five years ago?

That question was asked in February 2020 by the noble Lord, Lord Clement-Jones, whom I congratulate on leading this debate and on his excellent opening speech. What policies have the Government recently announced? There is the National AI Strategy. One of the points I noticed is that the Office for Artificial Intelligence is a joint department of the Department for Business, Energy and Industrial Strategy and the Department for Digital, Culture, Media and Sport, responsible for overseeing the implementation of the national AI strategy. This is a question I am asked quite regularly: why in today’s world does digital sit within DCMS and not BEIS? They are doing this together, so maybe this is a solution for digital overall moving forward. I do not know what the Minister’s or the Government’s view on that is.

The CBI, of which I am president, responded to the UK Government’s AI strategy. I shall quote Susannah Odell, the CBI’s head of digital policy:

“This AI strategy is a crucial step in keeping the UK a leader in emerging technologies and driving business investment across the economy. From trade to climate, AI brings unprecedented opportunities for increased growth and productivity. It’s also positive to see the government joining up the innovation landscape to make it more than the sum of its parts … With AI increasingly being incorporated into our workplaces and daily lives, it’s essential to build public trust in the technology. Proportionate and joined-up regulation will be a core element to this and firms look forward to engaging with the government’s continued work in this area. Businesses hope to see the AI strategy provide the long-term direction and fuel to reach the government’s AI ambitions.”


An important point to note is that linked to this is our investment in research and development and innovation. This is a point that I make like a stuck record. We spend 1.7% of GDP on R&D and innovation, compared with countries such as Germany and the United States of America, which spend 3.1% and 3.2% respectively. If we spent just one extra per cent of GDP on research and development and innovation, an extra £20 billion a year, just imagine how much that would power our productivity and AI capability. Do the Government agree?
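
As a back-of-envelope check of those figures (assuming UK GDP of roughly £2 trillion, its approximate 2021 level, which is not stated in the speech):

```latex
% Back-of-envelope check; the GDP figure is an assumption.
\[
  1\% \times \pounds 2{,}000\,\text{bn} = \pounds 20\,\text{bn per year},
  \qquad
  1.7\% + 1\% = 2.7\%\ \text{of GDP},
\]
% which would bring the UK close to the quoted German (3.1%) and
% US (3.2%) shares.
```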

We have heard that the White Paper on AI governance has been delayed. Can the Minister give us any indication of when it will be published? Business has recognised the importance of AI governance and standards in driving the safe and trustworthy adoption of AI, which is complicated by the variety of AI technologies that we have heard about in this debate. Use cases and governance mechanisms, such as standards, can help simplify and guide widespread adoption. What businesses need from AI standards differs by sector. To be effective, AI standards must be accessible, sector-specific and focused on use cases, and the AI standards hub has a critical role in developing and delivering AI standards across the economy.

The report AI Activity in UK Businesses was published on 12 January this year and had some excellent insights. It defined AI based on five technology categories: machine learning, natural language processing and generation, computer vision and image processing/generation, data management and analysis, and hardware. The report says:

“Current usage of AI technologies is limited to a minority of businesses, however it is more prevalent in certain sectors and larger businesses”.


For example,

“Around 15% of all businesses have adopted at least one AI technology … Around 2% of businesses are currently piloting AI and 10% plan to adopt at least one AI technology in the future … As businesses grow, they are more likely to adopt AI”.


Linked to this is the crucial importance of start-ups and scale-ups, growing companies and our economy:

“68% of large companies, 34% of medium sized companies and 15% of small companies have adopted at least one AI technology”.

It is used in the IT and telecommunications sector, the legal sector—it is used across all sectors. Large companies are more likely to adopt multiple AI technologies and there are innovative companies using multiple AI technologies as well.

Tech Nation held an event, “The UK and Artificial Intelligence: What’s Next?”, at which there were some useful insights. For example, Zara Nanu, the CEO of Applied AI 1.0, talked about gender diversity in AI and how important it is to have more women. Just 10% of those working in the AI talent pool are women; for STEM as a whole it is 24%. As president of the CBI, I have launched Change the Race Ratio to promote ethnic minority participation across all business, including in AI. Sarah Drinkwater made the point that the UK is well positioned to continue attracting talent on the strength of its investment landscape, world-class universities and culture. We are so lucky to have the best universities in the world, along with the United States of America. I am biased, but the fact is that a British university has won more Nobel prizes than any other, including any American university, and that is the University of Cambridge. It was of course excellent that the Government announced £23 million to boost skills and diversity in AI jobs by creating 2,000 scholarships in AI and data science in England. This is fantastic, music to my ears.

To conclude, I go back to the 2017 report Growing the Artificial Intelligence Industry in the UK. It asked, “Why does AI matter?” and said that:

“In one estimate, the worldwide market for AI solutions could be worth more than £30bn by 2024, boosting productivity by up to 30% in some industries, and generating savings of up to 25%. In another estimate, ‘AI could contribute up to $15.7 trillion to the global economy in 2030, more than the current output of China and India combined. Of this, $6.6 trillion is likely to come from increased productivity and $9.1 trillion is likely to come from consumption-side effects.’”


This is phenomenal, huge, powerful and world-changing. However, it will happen only if we have sustained collaboration between government, universities and business; then we will continue to deliver the amazing potential of AI in the future.

19:03
Lord St John of Bletso (CB)

My Lords, I join in congratulating the noble Lord, Lord Clement-Jones, on his able chairmanship of the Liaison Committee inquiry, as well as of the committee he chaired so ably in 2017. I was fortunate to be a member of that committee, and it was a steep learning curve. The noble Lord has comprehensively covered the key areas of the development of data trusts, the legal and ethical framework and the challenges of ensuring public trust. I had planned on speaking to the threat of bias in machine learning and the harm it has caused in some rather unfortunate circumstances, but that has been ably covered by the noble Lord, Lord Holmes of Richmond, so I can delete that from my speech and speak for two minutes less.

In welcoming the national AI strategy published in September last year, I shall focus my remarks on what needs to be achieved to retain—and I stress the word “retain”—the UK’s position as a world leader in AI and, in the words of Dame Wendy Hall, to remain an AI and science superpower fit for the next decade. I am cognisant of the three pillars of the national AI strategy: investing in the long-term needs of the AI ecosystem; ensuring that AI benefits all regions and sectors; and, of course, the governance issues, which I shall not address in my short speech today.

AI has already played, and continues to play, a major role in transforming many sectors, from healthcare to financial services, autonomous vehicles, defence and security—I could not possibly speak with the able knowledge of the noble Lord, Lord Browne—as well as climate change forecasting, to name but a few. Fintech has played, and continues to play, a major role in embracing AI to tackle some of the challenges of financial exclusion and inclusion, a subject ably covered in the previous debate. The healthcare sector also provides some of the most compelling and demonstrable proof of what data science and AI can deliver, with advances in robotic surgery, automated medical advice and medical imaging diagnostics. Autonomous vehicles will soon be deployed on our roads, and we will need to ensure that they are safe and trusted by members of the public. Moreover, the Royal Mail is planning to deploy 500 drones to carry parcels to remote locations.

Are we building AI for the right applications? It is difficult to apply standards to AI when it is constantly evolving. AI can be equipped to learn from data that is generated by humans, systems and the environment. Can we ensure that AI remains safe and trusted as its functionality evolves? To build AI that we can export as part of our products and services, it will need to be useful to and trusted by those countries where we seek to sell those products and services. Such trustworthiness can be achieved only through collaboration on standards, research and regulation. It is crucial to engage with industry, universities and public sectors not just within the UK but across the globe. Can the Minister elaborate on what the UK Government are doing to boost strategic co-operation through international partnerships?

I join in applauding the work of UKRI as well as the Alan Turing Institute, which has attracted and retained exceptional researchers, but a lot more investment is needed to retain and expand human expertise and further implement the AI strategy. The strategy was conceived during the pandemic, but new threats and opportunities will invariably arise unexpectedly: wars, financial crises, climate disasters and pandemics can rapidly change Governments’ priorities. Can the Minister clarify how the Government will ensure that the AI strategy remains relevant, and a high priority, in times of change?

The noble Lord, Lord Bilimoria, spoke about how the UK and various businesses are embracing AI, and I shall talk briefly about the AI SME ecosystem. Our report in 2017 recommended that the Government create an AI growth fund for UK SMEs to help them to scale up. Can the Minister elaborate on what measures are being taken to accelerate and support AI SMEs, particularly on the global stage?

I share the sentiments of the noble Lord, Lord Clement-Jones, that the pace, scale and ambition of the Government do not match the challenge faced by many people working in AI in the UK. I hope there will be more funding for, and focus on, promoting AI apprenticeships, digital upskilling and digital skills partnerships. For the AI strategy to succeed, we need a combination of competent people and technology. We are all aware of the concerns about a massive skills shortage, particularly of data scientists. We have been hearing about the forthcoming government White Paper on common standards and governance, although, as I have said, it is difficult to apply standards to AI when it is constantly evolving.

In conclusion, while we have seen huge strides and advances in AI in the UK, we need to ensure that we do not take our foot off the pedal. How do we differentiate UK AI from international AI in terms of efficiency, resilience and relevance? How can we improve public sector efficiency by embracing AI? China and the United States will invariably lead the way with their huge budgets and established ecosystems. There is no room for complacency.

19:11
Lord McNally (LD)

My Lords, I welcome the quality of this debate. In their speeches the noble Lords, Lord St John and Lord Bilimoria, have given us some of the more optimistic sides of what AI can deliver, but every one of the speeches has been extremely thoughtful.

I look forward to the speeches of the noble Baroness, Lady Merron, and the noble Lord, Lord Parkinson of Whitley Bay, two Front-Benchers who, I may say, I always admire as they speak common sense with clarity. Thus having blighted two careers, I will move on.

I also thank noble Lords—because he will be too modest to do so—for their comments about my colleague, my noble friend Lord Clement-Jones. He told us that a new AI development could do 604 functions simultaneously. I think that is a perfect description of my noble friend.

I come to this subject not with any of the recent experience that has been on show. This might send a shiver down the Committee’s spine, but in 2010 I was appointed Minister for Data Protection in the coalition Government, and it was one of the first times I had come across some of these challenges. We had an advisory board on which the noble Baroness, Lady Lane-Fox, although she was not then in the Lords, made a great impression on me with her knowledge of these problems.

I remember the discussion when one of our advisers urged us to release NHS data as a valuable creator of new industries, possible new cures and so on. Even before we had had time to consider it, there was a campaign by the Daily Mail striking fear into everyone that we were about to release everyone’s private medical records, so that hit the buffers.

At that time, I was taken around one of HM Government’s facilities to look at what we were doing with data. I remember seeing various things that had been done and having them explained to me. I said to the gentleman showing me around, “This is all very interesting, but aren’t there some civil liberties aspects to what you are doing?” “Oh no, sir,” he said, “Tesco knows a lot more about you than we do.” However, that was 10 years ago.

I should probably also confess that another of my responsibilities related to the earlier discussion on GDPR. Before that, in 2003, I served on the Puttnam Committee on the Communications Act, which is interesting in two respects. We did not try to advise on the internet, because at that time we had no idea what kind of impact it would have. I think the Online Safety Bill, nearly 20 years later, shows how there is sometimes a time lag—I am sure the same will apply with AI. One thing we did recommend was to give Ofcom special responsibility for digital education. I have to say that, although Ofcom has been a tremendous success as a regulator, it has lagged behind in picking up that particular ball. We still have a lot to do, and I am glad that the right reverend Prelate the Bishop of Oxford and others placed such emphasis on this.

I note that the noble Baroness, Lady Merron, has put down a Question for 20 June asking, further to the decision not to include media literacy provisions in the Online Safety Bill, whether the Government intend to impose updated statutory duties relating to media literacy and, if so, when. That is a very good question. Perhaps we could have an early glimpse of the reply.

A number of colleagues mentioned education. Many of us are familiar with the remark attributed to Robert Lowe at the passing of the 1867 Reform Act—although, as so often with quotations, he never actually said it, and he was not much in favour of the Act: “I suppose we must educate our masters”. The position is now somewhat reversed: the challenge is to ensure that both parliamentarians and the public have enough knowledge and skill that AI and other new technologies do not become our masters. In many ways, Parliament is still an 18th-century institution, and I worry whether we have the structures to take account of these matters. What I have always rejected, though, is the claim that AI and the related technologies are too complex or too international to come within the rule of law. It is important that we do not allow that argument to stand.

I also think that we should take a couple of lessons from science fiction. Orwell’s Nineteen Eighty-Four warned of the capacity, particularly of totalitarian states, to suppress civil liberties using technologies which may have positive value in themselves but carry sinister implications. The noble Lord, Lord Browne, made a very powerful speech about some of the questions concerning defence—and one could say the same about our police and security services—and how those are kept within the rule of law and proper political accountability. I have always been guided by two dictums. One was Eisenhower’s warning against the power of the military-industrial complex, a very powerful lobby now reinvigorated by Ukraine to urge on all of us a new arms race. Of course we must respond to the threats posed by the Russians, but we must also watch which roads we are being taken down. A number of points have been made on this.

The other dictum came from my old boss, Jim Callaghan, when it was just the two of us together. He had been briefed by one of our security services, and he said to me, “Always listen to what they say but never, never suspend your own political judgment.” I think it is important, in this fast-moving, complex world, for politicians not to be frightened of taking on these responsibilities. One of my favourite films is “Dr. Strangelove”, which showed how preordained plans, once set in motion, could not be stopped short of disaster. These are very high-risk areas.

I welcome the efforts to promote ethical AI nationally and internationally but note that paragraph 28 of the document we are considering today says:

“This guidance … is not a foundation for a countrywide ethical framework which developers could apply, the public could understand and the country could offer as a template for global use.”


This is all work in progress, but this debate is important because, as Parliament develops its skills and expertise, it must take on the responsibility to make informed decisions on these matters.

19:20
Baroness Merron (Lab)

My Lords, I am glad to follow the noble Lord, Lord McNally, not least because of the generous observations he made about the similarity between me and the Minister, in a way that I am sure we both welcome.

I start my comments by expressing my congratulations to the noble Lord, Lord Clement-Jones, and all members of the committee. It is quite clear from this debate and the worldwide acclaim the committee has received just how insightful and incisive its work was. We also understand from the debate what a great catalyst the report has been for the Government to take action, and I am sure we will hear more about that from the Minister.

The development of artificial intelligence brings endless possibilities for improving our day-to-day lives. From its behind-the-scenes use in warehouse management and supply chain co-ordination to medical diagnosis and the piloting of driverless cars, artificial intelligence is being increasingly used across the country. The Government’s own statistics show that 15% of businesses already utilise it in at least one form.

I thank your Lordships for what they have brought to this extremely enlightening debate. I am struck not just by the many potential benefits and advances that AI brings but by how those are matched by questions: ethical and practical challenges with which we are all wrestling. This debate is a fantastic contribution to airing and addressing those points, which will not be going away.

As a nation, the UK is in a fortunate position to harness this potential. We have world-class universities, a culture of technological development and a strategic global position, but the industry will need the support of the Government if it is to prosper. As the noble Lord, Lord Evans, rightly said, this includes the deployment of public procurement as a lever for impact. I hope the Minister will reflect on how that might be the case.

However, as we have heard throughout this debate, there are risks associated with the development of new technologies, and AI is no exception. As my noble friend Lord Browne so expertly set out, we have before us a changing landscape of conflict, within which AI can play a key role in weapons systems. On my point about the number of questions it raises, to which the right reverend Prelate also referred: is it right to delegate to a machine the decision of whether and when to take a life? If the answer is yes, that raises another set of questions which there will be no dodging.

In the last few weeks alone, we have seen more evidence of privacy breaches in the AI industry, and there have been numerous incidents globally of facial recognition technology, in particular, inheriting the racial biases of its engineers. For that reason, ethics have to be central to our support for artificial intelligence and a condition of any project that receives the support of government. If AI is developed in a regulatory vacuum, it will reflect biases and prejudices, and could reverse human progress rather than facilitate it.

The right reverend Prelate reminded us that, as with the Online Safety Bill and in fact so much of the legislation that we concern ourselves with, this is very much a moveable feast and we have to keep pace with it, not hold it back. That is a huge challenge in legislation but also in strategy.

As with any development of technology that brings prosperity, jobs and economic benefits, steps must also be taken to ensure that the benefits are experienced by towns and cities across the UK. That means driving private investment but also directing public support to new and emerging markets outside London and the south-east.

It is also important that new developments are sustainable and considerate of their implications for the natural environment, with AI seen as a tool for confronting the climate crisis rather than an obstacle to doing so. Around the world, it is already being applied to climate change mitigation and adaptation, and there are clear opportunities for this Government to support similar innovations to help the UK meet our own climate obligations. I would be grateful if the Minister could comment on how that may be the case in respect of the environment.

We have to be alert to the consequences of AI for the world of work. For example, Frances O’Grady, the general secretary of the Trades Union Congress, pointed out earlier this year that employment rights have to keep pace. Again, we have to keep up with that moveable feast.

The question for us now to consider is what role the Government should take to ensure that the development of AI meets ethical, economic and environmental objectives. The committee was right to point to the need for co-ordination. There is no doubt that cross-departmental bodies, such as the Office for Artificial Intelligence, can help in that regard. Above all, we need the cross-government strategy to be effective and deliver on what it promises. I am sure the Minister will give us some indication in his remarks of what assessment has been made of how effective the strategy has been to date in bringing various aspects of government together. We have heard from noble Lords, including the noble Lord, Lord Clement-Jones, that some areas certainly need far greater attention in order to bring the strategy together.

Given the opportunities that this technology presents, the plan has to come from the heart of government and must seek to combine public and private investment in order to fuel innovation. As the committee said in the title of the report, there is no room for complacency. I feel that today’s debate has enhanced that point still further, and I look forward to hearing what the Minister has to say about the strategic plans for supporting the development of artificial intelligence across the UK, not just now but for many years ahead.

19:29
Lord Parkinson of Whitley Bay (Con)

My Lords, I am grateful to the noble Lord, Lord Clement-Jones, and all noble Lords who have spoken in today’s debate. I agree with the noble Lord, Lord McNally, that all the considerations we have heard have been hugely insightful and of very high quality.

The Government want to make sure that artificial intelligence delivers for people and businesses across the UK. We have taken important early steps to ensure we harness its enormous benefits, but agree that there is still a huge amount more to do to keep up with the pace of development. As the noble Lord, Lord Clement-Jones, said in his opening remarks, this is in many ways a moving target. The Government provided a formal response to the report of your Lordships’ committee in February 2021, but today’s debate has been a valuable opportunity to take stock of its conclusions and reflect on the progress made since then.

Since the Government responded to the committee’s 2020 report, we have published the National AI Strategy. The strategy, which I think it is fair to say has been well received, has three key objectives that will drive the Government’s activity over the next 10 years. First, we will invest in and plan for the long-term needs of the AI ecosystem to continue our leadership as a science and AI superpower; secondly, we will support the transition to an AI-enabled economy, capturing the benefits of innovation in the UK and ensuring that AI benefits all sectors and parts of the country; and, thirdly, we will ensure that the UK gets the national and international governance of AI technologies right, to encourage innovation and investment and to protect the public and the values that we hold dear.

We will provide an update on our work to implement our cross-government strategy through the forthcoming AI action plan but, for now, I turn to some of the other key themes covered in today’s debate. As noble Lords have noted, we need to ensure the public have trust and confidence in AI systems. Indeed, improving trust in AI was a key theme in the National AI Strategy. Trust in AI requires trust in the data which underpin these technologies. The Centre for Data Ethics and Innovation has engaged widely to understand public attitudes to data and the drivers of trust in data use, publishing an attitudes tracker earlier this year. The centre’s early work on public attitudes showed how people tend to focus on negative experiences relating to data use rather than positive ones. I am glad to say that we have had a much more optimistic outlook in this evening’s debate.

The National Data Strategy sets out the steps we will take to rebalance this public perception, from one where only the risks of data use are seen to one where the opportunities are seen as well. It sets out our vision to harness the power of responsible data use to drive growth and improve services, including through AI-driven services. It describes how we will make data usable, accessible and available across the economy, while protecting people’s data rights and businesses’ intellectual property.

My noble friend Lord Holmes of Richmond talked about anonymisation. Privacy-enhancing technologies such as this were noted in the National Data Strategy, and the Centre for Data Ethics and Innovation, which leads the Government’s work to enable trustworthy innovation, is helping to take them forward in a number of ways. This year, the centre will continue to support trustworthy innovation through a world-first AI assurance road map and will collaborate with the Government of the United States of America on a prize challenge to accelerate the development of a new breed of privacy-enhancing technologies which enable data use in ways that preserve privacy.

Our approach includes supporting a thriving ecosystem of data intermediaries, including data trusts, which have been mentioned, to enable responsible data-sharing. We are already seeing data trusts being set up; for example, pilots on health data and data for communities are being established by the Data Trusts Initiative, hosted by the University of Cambridge, and further pilots are being led by the Open Data Institute. Just as we must shift the debate on data, we must also improve the public understanding and awareness of AI; this will be critical to driving its adoption throughout the economy. The Office for Artificial Intelligence and the Centre for Data Ethics and Innovation are taking the lead here, undertaking work across government to share best practice on how to communicate issues regarding AI clearly.

Key to promoting public trust in AI is having in place a clear, proportionate governance framework that addresses the unique challenges and opportunities of AI, which brings me to another of the key themes of this evening’s debate: ethics and regulation. The UK has a world-leading regulatory regime and a history of innovation-friendly approaches to regulation. We are committed to making sure that new and emerging technologies are regulated in a way that instils public confidence in them while supporting further innovation. We need to make sure that our regulatory approach keeps pace with new developments in this fast-moving field. That is why, later this year, the Government will publish a White Paper on AI governance, exploring how to govern AI technologies in an innovation-friendly way to deliver the opportunities that AI promises while taking a proportionate approach to risk so that we can protect the public.

We want to make sure that our approach is tailored to context and proportionate to the actual impact on individuals and groups in particular contexts. As noble Lords, including the right reverend Prelate the Bishop of Oxford, have rightly set out, those contexts can be many and varied. But we also want to make sure our approach is coherent so that we can reduce unnecessary complexity or confusion for businesses and the public. We are considering whether there is a need for a set of cross-cutting principles which guide how we approach common issues relating to AI, such as safety, and looking at how to make sure that there are effective mechanisms in place to ensure co-ordination across the regulatory landscape.

The UK has already taken important steps forward with the formation of the Digital Regulation Cooperation Forum, as the noble Lord, Lord Clement-Jones, and others have noted, but we need to consider whether further measures are needed. Finally, the cross-border nature of the international market means that we will continue to collaborate with key partners on the global stage to shape approaches to AI governance and facilitate co-operation on key issues.

My noble friend Lord Holmes of Richmond and the noble Lord, Lord Evans of Weardale, both referred to the data reform Bill and the issues it covers. DCMS has consulted on and put together an ambitious package of reforms to create a new pro-growth regime for data which is trusted by people and businesses. This is a pragmatic approach which allows data-driven businesses to use data responsibly while keeping personal information safe and secure. We will publish our response to that later this spring.

My noble friend also mentioned the impact of AI on jobs and skills. He is right that the debate has moved on in an encouraging and more optimistic way and that we need to address the growing skills gap in AI and data science and keep developing, attracting and training the best and brightest talent in this area. Since the AI sector deal in 2018, the Government have been making concerted efforts to improve the skills pipeline. There has been an increased focus on reskilling and upskilling, so that we can ensure that, where there is a level of displacement, there is redeployment rather than unemployment.

As the noble Lord, Lord Bilimoria, noted with pleasure, the Government worked through the Office for AI and the Office for Students to fund 2,500 postgraduate conversion courses in AI for students from near- and non-STEM backgrounds. That includes 1,000 scholarships for people from underrepresented backgrounds, and these courses are available at universities across the country. Last autumn, the Chancellor of the Exchequer announced that this programme would be bolstered by 2,000 more scholarships, so that many more people across the country can benefit from them. In the Spring Statement, 1,000 more PhD places were announced to complement those already available at 16 centres for doctoral training across the country. We want to build a world-leading digital economy that works for everyone. That means ensuring that as many people as possible can reap the benefits of new technologies. That is why the Government have taken steps to increase the skills pipeline, including introducing more flexible training routes into digital roles.

The noble Lord, Lord St John of Bletso, was right to focus on how the UK contributes to international dialogue on AI. The UK is playing a leading role in international discussions on ethics and regulation, including our work at the Council of Europe, UNESCO and the OECD. We should not forget that the UK was one of the founding members of the Global Partnership on Artificial Intelligence, the first multilateral forum looking specifically at this important area.

We will continue to work with international partners to support the development of rules on the use of AI. We have also taken practical steps to implement some of these high-level principles in the delivery of public services. In 2020, we worked with the World Economic Forum to develop guidelines for the responsible procurement of AI based on these values, which have since been put into operation through the Crown Commercial Service’s AI marketplace. This service has been renewed, and the Crown Commercial Service is exploring expanding the options available to government buyers. At an international level, this work resulted in a policy tool called “AI procurement in a box”, a framework for like-minded countries to adapt for their own purposes.

I am mindful that Second Reading of the Procurement Bill is taking place in the Chamber as we speak, competing with this debate. That Bill will replace the current process-driven EU regime for public procurement by creating a simpler and more flexible commercial system, but international collaboration and dialogue will continue to be a key part of our work in this area in the years to come.

The noble Lord, Lord Browne of Ladyton, spoke very powerfully about the use of AI in defence. The Government will publish a defence AI strategy this summer, alongside a policy ensuring the ambitious, safe and responsible use of AI in defence, which will include ethical principles based on extensive policy work together with the Centre for Data Ethics and Innovation. The policy will include an updated statement of our position on lethal autonomous weapons systems.

As the noble Lord, Lord Clement-Jones, said, there is no international agreement on the definition of such weapons systems, but the UK continues to contribute actively at the UN Convention on Certain Conventional Weapons, working closely with our international partners, seeking to build norms around their use and positive obligations to demonstrate how degrees of autonomy in weapons systems can be used in accordance with international humanitarian law. The defence AI centre will have a key role in delivering technical standards, including where these can support our implementation of ethical principles. The centre achieved initial operating capability last month and will continue to expand throughout this year, having already established joint military, government and industry multidisciplinary teams. The Centre for Data Ethics and Innovation has, over the past year, been working with the Ministry of Defence to develop ethical principles for the use of AI in defence—as, I should say, it has with the Centre for Connected and Autonomous Vehicles in the important context of self-driving vehicles.

The noble Baroness, Lady Merron, asked about the application of AI in the important sphere of the environment. Over the past two years, the Global Partnership on Artificial Intelligence’s data governance working group has brought together experts from across the world to advance international co-operation and collaboration in areas such as this. The UK’s Office for Artificial Intelligence provided more than £1 million to support two research projects on data trusts and data justice in collaboration with partner institutions including the Alan Turing Institute, the Open Data Institute and the Data Trusts Initiative at Cambridge University. These projects explored using data trusts to support action to protect our climate, as well as expanding understanding of data governance to include considerations of equity and justice.

The insights raised in today’s debate, and in the reports which it has concerned, will continue to shape the Government’s thinking as we take forward our strategy on AI. As noble Lords have noted, by most measures the UK is a leader in AI, behind only the United States and China. We are home to one-third of Europe’s AI companies, twice as many as any other European nation. We are also third in the world for AI investment—again, behind the US and China—attracting twice as much venture capital as France and Germany combined. But we are not complacent. We are determined to keep building on our strengths and to maintain our global position. This evening’s debate has provided many rich insights into the further steps we must take to make sure that the UK remains an AI and science superpower. I am very grateful to noble Lords, particularly the noble Lord, Lord Clement-Jones, for instigating it.

Lord Clement-Jones (LD)

My Lords, first I thank noble Lords for having taken part in this debate. We certainly do not lack ambition around the table, so to speak. I think everybody saw the opportunities and the positives but also the risks and challenges. I liked the use by the noble Baroness, Lady Merron, of the word “grappling”. I think we have grappled quite well today with some of the issues, and the Minister, given the tricky cross-departmental need to pull everything together, made a very elegant fist of responding to the debate. Of course, inevitably, we want stronger meat in response on almost every occasion.

I am not going to do another wind-up speech, so to speak, but I think this has been a very useful opportunity, prompted by the right reverend Prelate, to reflect on humanity. We cannot talk about artificial intelligence without talking about human intelligence. That is the extraordinary thing: the more you talk about what artificial intelligence can do, the more you have to talk about human endeavour and what humans can do. In that context, I congratulate the noble Lords, Lord Holmes and Lord Bilimoria, on their versatility. They both took part in the earlier debate, and it is very interesting to see the commonality between the issues raised in the previous debate on digital exclusion, where human beings are excluded from opportunity, and those which arise in the case of AI. I was very interested to see how, back to back, they managed to deal with all that.

The Minister said a number of things, but I think the trust and confidence aspect is vital. The proof of the pudding will be in the data reform Bill. I may differ slightly on that from the noble Lord, Lord Holmes, who, by the sound of it, thinks it is a pretty good thing, but we do not know what it is going to contain. All I will say is that, when Professor Goldacre appeared before the Science and Technology Committee, it was a lesson for us all. He is the chap who has just written the definitive report on data use in the health area for the Department of Health, yet he deliberately opted out last year when GPs sought consent to share patient data, even though he is the leading data scientist in health. He was not convinced that his data would be safe. We can talk about trusted research environments and all that, but public trust in data use, whether in health or anything else, needs engagement by government and far more work.

The thing that frightens a lot of us is that we can see all the opportunities, but if we do not get this right and do not earn permission to use the technology, we will not be able to deploy it in the way we conceived, whether for the sustainable development goals or for other forms of public benefit in the public service. Provided we get the compliance mechanisms right, we can seize the opportunities, but we have to get that public trust on board, not least in the area of lethal autonomous weapons. I think the public perception of what the Government are doing in that area is very different from what the Ministry of Defence may think it is doing, particularly if it is developing some splendid principles of which we will all approve when what matters is what is actually happening on the ground.

I will say no more. I am sure we will have further debates on this, and I hope that the Minister has enjoyed having to brief himself for this debate, because it is very much part of his department’s responsibilities.

Motion agreed.
Committee adjourned at 7.48 pm.