Autonomous Weapons

(asked on 21st July 2022)

Question to the Ministry of Defence:

To ask Her Majesty's Government, further to their policy paper Ambitious, Safe, Responsible: Our approach to the delivery of AI enabled capability in Defence, published on 15 June, which says that "We do not rule out incorporating AI within weapon systems" and that real-time human supervision of such systems "may act as an unnecessary and inappropriate constraint on operational performance", when this would be seen as a constraint; and whether they can provide assurance that the UK's weapon systems will remain under human supervision at the point when any decision to take a human life is made.


Answered by Baroness Goldie
This question was answered on 4th August 2022

The 'Ambitious, Safe, Responsible' policy sets out that the Ministry of Defence opposes the creation and use of AI-enabled weapon systems which operate without meaningful and context-appropriate human involvement throughout their lifecycle. This involvement could take the form of real-time human supervision, or of control exercised through the setting of a system's operational parameters.

We believe that human-machine teaming delivers the best outcomes in terms of overall effectiveness. However, in certain cases it may be appropriate to exert rigorous human control over AI-enabled systems through a range of safeguards, process controls and technical controls, without always requiring some form of real-time human supervision. For example, in the context of defending a maritime platform against hypersonic weapons, defensive systems may need to be able to detect incoming threats and open fire faster than a human could react.

In all cases, human responsibility for the use of AI must be clearly established, and that responsibility must be underpinned by a clear and consistent articulation of the means by which human control is exercised across the system lifecycle, including the nature and limitations of that control.
