Artificial Intelligence (Select Committee Report)

Monday 19th November 2018


Lords Chamber
Lord Browne of Ladyton (Lab)

My Lords, as I intend to restrict my remarks to the part of the report that deals with autonomous weapons, I draw attention to my entry in the register of interests, particularly my consultancy with the Nuclear Threat Initiative and its nascent collaboration with the Centre for the Study of Existential Risk at Cambridge University, which of course is the brainchild of the noble Lord, Lord Rees of Ludlow, whom I see in his place. I look forward to his contribution.

I add my congratulations and appreciation to the noble Lord, Lord Clement-Jones, and his committee on this wide-ranging report into the use and development of artificial intelligence. I agreed with many of its recommendations—at least those that I fully understood—particularly the call for the Government to get a grip on algorithmic bias. The committee’s identification of the probable dangers of a small number of companies owning the lion’s share of the world’s data more than supports its recommendation that the Government prevent those companies from monopolising control of the data of ours already in their possession.

I agree also with the report’s strong emphasis on the UK’s role as an ethical leader in the AI world. This is especially important in the military use of AI. Paragraphs 334 to 346 of the report deal with autonomous weapons, and open with the following sentence:

“Perhaps the most emotive and high-stakes area of AI development today is its use for military purposes”.

I wholeheartedly agree. The report concedes, unfortunately, that the committee had insufficient time or capacity to deal with this subject, and it goes on to record the view that the area merits a “full inquiry” of its own. I fully agree, and argue that the sooner your Lordships’ House has that inquiry, the better. We should move directly from this debate to commissioning that important inquiry.

I strongly agree with the report’s recommendation that,

“the UK’s definition of autonomous weapons should be realigned to be the same, or similar, as that used by the rest of the world”.

In particular, I agree that the present UK definition, which was explained so simply by the noble Lord, Lord Clement-Jones, is problematic. It depends on there being no difference between “automatic” and “autonomous”; it limits the UK’s participation in the international debate, because it speaks a different language; it restricts our ability to show moral and ethical leadership; and it stands in the way of the current international process on controlling these weapons systems reaching an agreed definition, which is, after all, its primary purpose.

Since 2016, in an attempt to find a suitable multilateral vehicle to regulate lethal autonomous weapons, the signatory states to the Convention on Certain Conventional Weapons—a treaty concluded in 1980 to prohibit or restrict weapons deemed excessively cruel or injurious—have sought to assess the potential dangers posed and to consider whether new measures are needed to control LAWs, as they are often referred to. Early in their deliberations, the high-contracting parties subcontracted this task to a Group of Governmental Experts, known as the GGE. The group most recently met in Geneva in August, and the draft report of its 2018 deliberations reveals that it was defeated by the challenge of finding an agreed definition of autonomous weapons, meaning that its concluding recommendation is—as it was the year before, and the year before that—that it should meet again next year. This, despite the fact that most experts believe that the unregulated deployment of LAWs could lead to violations of the law of war and international humanitarian law, while increasing the risk of uncontrolled escalation should there be a major-power crisis.

Almost every delegate to the GGE meetings argued that humans should always retain ultimate control over weapons systems, but the group still failed to agree anything other than that it should continue further expert-level discussion next year, which will be the fourth year of discussion. In my view it has had ample time to investigate the dangers posed by autonomous weapons and, although important technical issues about definition remain, the time for discussion is over. It is beyond disappointing that, in response to the Select Committee’s recommendation, the Government yet again explained that their policy is to await the outcome of this expert discussion, in the meantime sticking with their “problematic” definition. I suggest to the Government that this expert discussion will never end; there is certainly no sign of it ending at the moment.

We have in this debate an opportunity to ask the Government to think again. It is timeous, because the high-contracting parties to the CCW are themselves meeting later this week in Geneva to discuss the recommendations of the GGE. It is now clear that the only way this process will progress is if the high-contracting parties take back control and initiate proper negotiations on autonomous weapons, with the aim of crafting an outcome ensuring continued human control over weapons of war and the decision to employ lethal force.

Last week at the Centre for the Study of Existential Risk, I met experts who are working together on this challenge. They agree that the development of LAWs poses serious risks to international security and could spark arms races or lower the threshold for the use of force. They are concerned about how to prevent such weapons, once developed, from being deployed in urban or other settings where mistakes and their consequences are likely to be very costly. In particular, I am impressed by the views of Dr Shahar Avin, one of the three researchers from CSER who will attend the meeting in Geneva this week. He agrees with the growing consensus that the UN’s negotiations have made little progress, that the discussions are slowed by disagreements about definitions, and that there has been little constructive engagement, let alone vision and leadership, from the major military powers. For a variety of reasons, the United States—and consequently Russia and China—is unlikely to provide that leadership. As Dr Avin says:

“In January, the Prime Minister said she wanted the UK to lead the world in the ‘safe, ethical and innovative deployment of artificial intelligence’. Some of the world’s leading AI researchers and organisations, who are based in the UK, have joined international calls to prevent the development of LAWs.

This makes the United Kingdom the leading candidate to provide leadership in the LAWs negotiations, as part of a wider vision for a responsible, safe and ethical future for artificial intelligence. Instead of taking a backseat while holding on to a problematic definition, the UK could furnish its CCW delegates with a national vision generated through a multi-stakeholder conversation, and be seen globally as a leader”—
or in partnership with France and Germany, which are already taking the lead—

“in how to respond to emerging technologies”.

I am conscious that this approach, although similar, differs from the second recommendation of the Select Committee: the formation, if I remember correctly, of a panel of military and AI experts to meet within eight months to agree a revised definition. I strongly believe that these matters should not be restricted to the military and the experts. The whole of society has a stake in this, and there should be a broad and ongoing UK conversation. In particular, legislators—Members of both Houses of Parliament, who have been largely silent on these issues—should be involved in that conversation. I thank the Select Committee for opening the way to just such a multi-stakeholder conversation, and I challenge the Minister to go back to his colleagues and ask them to begin it soon.