Artificial Intelligence in Weapon Systems Committee Report Debate

Department: Ministry of Defence

Artificial Intelligence in Weapon Systems Committee Report

Lord Stevens of Birmingham Excerpts
Friday 19th April 2024


Lords Chamber
Lord Stevens of Birmingham (CB)

My Lords, I too welcome the excellent report from the committee and thank it for its work. My brief contribution will focus on AI in the maritime domain. My starting point is that if, like me, you believe we need a bigger Navy, then it is obvious that we will need to use AI-enabled systems as an effective force multiplier.

We should therefore enthusiastically welcome the Royal Navy’s leadership in a wide range of maritime use cases. For example, in the surface fleet there is the so-called intelligent ship human-autonomy teaming; in the subsurface environment, autonomous uncrewed mine hunting, partly supported by the new RFA “Stirling Castle”, as well as new sensor technologies and acoustic signature machine learning for anti-submarine warfare; and in maritime air defence, AI-enhanced threat prioritisation and kinetic response using tools such as Startle and Sycoiea, which are obviously vital in an era of drone swarms and ballistic and hypersonic missiles. These and other AI systems are undoubtedly strengthening our nation’s ability to deter and defend at sea. They also enhance the Royal Navy’s centuries-old global contribution to rules-based freedom of navigation, which underpins our shared prosperity.

Looking forward, my second point is that Parliament itself can help. When it comes to experimentation and trialling, there is a sense in some parts of defence that peacetime risk-minimisation mindsets are not currently well calibrated to the evolving and growing threats that we now face. Parliament could therefore accept and encourage a greater risk appetite, within carefully set parameters. Many innovations will come from within the public sector, and we should support investment, including in the excellent Dstl and DASA. But, in parentheses, I am not convinced by the report’s recommendation at paragraph 17 that the MoD should be asked to publish its annual spending on AI, given that AI will increasingly become ubiquitous, embedded and financially impossible to demarcate.

Where Parliament can help is by recognising that most innovation in this space will probably involve partnerships with the commercial sector, often with dual-use civil and military elements, as the noble Lord, Lord Hamilton, argued. In fact, figures from Stanford published in Nature on Monday this week show that the vast majority of AI research is happening in the private sector, rather than in universities or the public sector. The MoD’s and the Navy’s accounting officers and top-level budget holders should be given considerable latitude to use innovative procurement models and development partnerships, without post-hoc “Gotcha” censure from us.

This brings me to my third and final point, which is that we need to be careful about how we regulate. The Royal Navy is, rightly, not waiting for new international public law but is pragmatically applying core UNCLOS requirements to the IMO’s four-part typology of autonomous maritime vehicles and vessels. As for the Navy’s most profound responsibility, the UK’s continuous at-sea deterrent, the Lords committee’s report rightly reasserts that nuclear weapons must remain under human control. Anyone who doubts that should Google “Stanislav Petrov” or “Cold War nuclear close calls”. But the report is also right to argue, at paragraph 51, that this paradigmatic case for restraint is not wholly generalisable. Parliament would be making a category mistake if we attempted to regulate AI as a discrete category of weapon system, when in fact it is a spectrum of rapidly evolving general-purpose technologies.

An alternative approach is set out in the 2023 Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, which includes key ethical and international humanitarian law guard-rails. That framework is now endorsed by more than 50 countries, including the US, France and the UK, but, regrettably, not by the other two permanent members of the UN Security Council, Russia and China, nor of course by Iran or North Korea. Work should continue, however, to expand its reach internationally.

To conclude, for the reasons I have set out, AI systems clearly offer enormous potential benefits in the maritime environment. Parliament can and should help our nation capitalise on them. Although the committee’s report is titled Proceed with Caution, for the reasons I have given today, the signal we send to the Royal Navy should be: continue to proceed with speed.