Artificial Intelligence in Weapon Systems Committee Report Debate
Lord Lisvane (Crossbench - Life peer)
Lords Chamber

That this House takes note of the Report from the Artificial Intelligence in Weapon Systems Committee Proceed with Caution: Artificial Intelligence in Weapon Systems (HL Paper 16).
My Lords, it is a pleasure to introduce this debate on the report of the AI in Weapon Systems Committee. I am very grateful to the business managers for arranging an early debate; this is a fast-moving subject and it is right that the House has an early opportunity to consider it.
It was a real privilege to chair the committee, for two reasons. The first was its most agreeable and expert membership, who were thoroughly collegiate and enthusiastic. The second was the outstanding staff support that we received. The excellent Alasdair Love led a first-class team by example. As well as Alasdair, I thank Sarah Jennings, our gifted policy adviser; Cathy Adams, who led us authoritatively through the complexities of international humanitarian law; Stephen Reed, who provided Rolls-Royce administration; and Louise Shewey, who was the ideal communications expert. Our two specialist advisers, Professor Dame Muffy Calder from the University of Glasgow and Adrian Weller from the Alan Turing Institute at the University of Cambridge, were invaluable.
AI will have a major influence on the future of warfare. Forces around the world are investing heavily in AI capabilities but fighting is still largely a human activity. AI-enabled autonomous weapon systems—AWS—could revolutionise defence technology and are one of the most controversial uses of AI today. How, for example, can autonomous weapons comply with the rules of armed conflict, which exist for humanitarian purposes?
There is widespread interest in the use of AI in autonomous weapons but there is concern as well. Noble Lords will be aware of recent reports that Israel is using AI to identify targets in the Gaza conflict, potentially leading to a high civilian casualty rate. In a society such as ours, there must be democratic endorsement of any autonomous weapon capability. There needs to be greater public understanding; an enhanced role for Parliament in decision-making; and the building and retention of public confidence in the development and potential use of autonomous weapons.
The Government aim to be “ambitious, safe, responsible”. Of course we agree in principle, but aspiration has not entirely lived up to reality. In our report, we therefore made proposals to ensure that the Government approach the development and use of AI in AWS in a way that is ethical and legal, providing key strategic and battlefield benefits, while achieving public understanding and democratic endorsement. “Ambitious, safe and responsible” must be translated into practical implementation. We suggest four priorities.
First, the Government should lead by example in international engagement on regulation of AWS. The AI Safety Summit was a welcome initiative, but it did not cover defence. The international community has been debating the regulation of AWS for several years. Outcomes could be a legally binding treaty or non-binding measures clarifying the application of international humanitarian law—and each approach has its advocates. Despite differences about form, an effective international instrument must be a high priority.
A key element in pursuing international agreement will be prohibiting the use of AI in nuclear command, control and communications. On the one hand, advances in AI offer greater effectiveness. For example, machine learning could improve the detection capabilities of early warning systems, make it easier for human analysts to cross-analyse intelligence, surveillance and reconnaissance data, and improve the protection of nuclear command, control and communications against cyberattacks.
However, the use of AI in nuclear command, control and communications could spur arms races or increase the likelihood of states escalating to nuclear use during a crisis. AI will compress the time for decision-making. Moreover, an AI tool could be hacked, its training data poisoned or its outputs interpreted as fact when they are statistical correlations—all leading to potentially catastrophic outcomes.
Secondly, the Government should adopt an operational definition of AWS which, surprisingly, they do not have. The Ministry of Defence is cautious about adopting one because
“such terms have acquired a meaning beyond their literal interpretation”,
and an
“overly narrow definition could become quickly outdated in such a complex and fast-moving area and could inadvertently hinder progress in international discussions”.
I hear what the Government say, but I am not convinced. I believe it is possible to create a future-proofed definition. Doing so would aid the UK’s ability to make meaningful policy on autonomous weapons and engage fully in discussions in international fora. It would make us a more effective and influential player.
Thirdly, the Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS is focused on systems in which the autonomy is enabled by AI technologies, with an AI system undertaking analysis on information obtained from sensors. However, it is essential to have human control over the deployment of the system, to ensure both human moral agency and legal compliance. This must be buttressed by our absolute national commitment to the requirements of international humanitarian law.
Finally, the Government must ensure that their procurement processes can cope with the world of AI. We heard that the Ministry of Defence’s procurement suffers from a lack of accountability and is overly bureaucratic—not the first time such criticisms have been levelled. In particular, we heard that it lacks capability on software and data, both of which are central to the development of AI. This may require revolutionary change. If so, so be it—but time is short.
Your Lordships have the Government’s reply to our report. I am grateful for the work that has gone into it. There are six welcome points, which I will deal with expeditiously.
First, there is a commitment to ensuring meaningful human control and human accountability throughout the lifecycle of a system, and an acknowledgement that accountability cannot be transferred to machines.
Secondly, I welcome their willingness to explore the establishment of an
“‘AI-enabled military operator’ skill set”
and to institute processes for the licensing and recertification of operators, including training that covers technical, legal and ethical compliance.
Thirdly, I welcome the commitment to giving force to the ethical principles in “ambitious, safe and responsible”. The Government must become a leader in setting responsible standards at every stage of the lifecycle of AWS, including responsible development and governance of military AI.
Fourthly, I am glad that the Government are reviewing the role of their AI ethics advisory panel, including in relation to our recommendation to increase transparency, which is key if the Government are to retain public confidence in their policies.
Fifthly, I welcome the recognition of the importance of retaining ultimate ownership over data, and making this explicit in commercial arrangements with suppliers, as well as the importance of pursuing data-sharing agreements and partnerships with allies. This is crucial for the development of robust AI models.
Finally, I welcome the Government’s readiness to make defence AI a more attractive profession, including by making recruitment and retention allowances more flexible, by enabling more exchange between the Government and the technology sector, and by appointing a capability lead for AI talent and skills. This is essential if MoD civil servants are to deal on equal terms with the private sector.
Two cheers so far—the Government could do more. They have no intention of adopting an operational definition of AWS, and I think they must if the UK is to be a more serious player. Perhaps the Minister can update us on a trip down the Damascus road on that one, but at the moment there appears to be no movement.
They do not commit to an international instrument on lethal AWS, arguing that current international humanitarian law is sufficient. If the Government want to fulfil their ambition to promote the safe and responsible development of AI around the world, they must be a leader in pressing for early agreement on an effective international instrument. The reports of the use of AI in the Gaza conflict are clear evidence of the urgency.
Our recommendation on the importance of parliamentary accountability is accepted, but the Government seemingly make no effort to show how accountability would be achieved. Parliament must be at the centre of decision-making on the development and use of AWS, but that depends on the transparency and availability of information, on anticipating issues rather than reacting after the event and on Parliament’s ability to hold Ministers effectively to account.
The Government accept that human control should be ensured in nuclear command, control and communications, but they do not commit to removing AI entirely. However, the risk of potentially apocalyptic consequences means that the Government should at least lead international efforts to achieve its prohibition.
The Government have accepted the need to scrutinise procurement offers more effectively and our recommendation to explore bringing in external expertise through an independent body, but they provide no detail on how they would create standards relating to data quality and sufficiency, human-machine interaction and the transparency and resilience of AWS.
Overall, the Government’s response to our report was “of constructive intent”. I hope that that does not sound too grudging. They have clearly recognised the role of responsible AI in our future defence capability, but they must embed ethical and legal principles at all stages of design, development and deployment. Technology should be used when advantageous, but not at an unacceptable cost to the UK’s moral principles. I beg to move.
My Lords, 3 pm on a Friday afternoon is not a particularly auspicious time for a long final spot, but I am extremely grateful to noble Lords on all sides of the House who have taken part in the debate. Their interest, views and expertise have made this a very valuable proceeding. I am extremely grateful for the kind remarks from many about the committee’s work. I especially thank the noble Lord, Lord Clement-Jones, whose idea it originally was that the committee should be set up. I hope that he is pleased with the result.
I am also grateful to the Minister for some positive announcements made during his speech, although he will accept there are issues on which he and I will need to agree to disagree, at least for the time being. Finally, the importance of the subject and the speed of developments make it certain that your Lordships’ House will need to consider these matters again before long, and I look forward to the occasion.