Artificial Intelligence in Weapon Systems Committee Report Debate
Earl of Minto (Conservative - Excepted Hereditary)
Lords Chamber

My Lords, I am grateful to those present for their considered and, at times, heartfelt contributions to this debate. I am equally grateful to the noble Lord, Lord Lisvane, for bringing this debate to the House and for his excellent opening remarks; and to the entire committee for its informative report on artificial intelligence in weapon systems, which was published at the end of last year and which the Government have considered and contributed to most seriously.
As many noble Lords will be aware, the Government published their formal response to the committee’s recommendations in February. As a number of noble Lords pointed out, they concurred with the committee’s advice to proceed with caution in this domain. As we have heard today, all sides of this House appreciate that artificial intelligence will be transformative in many ways: a balance of risk and opportunity.
For defence, we can already start to see the influence of artificial intelligence in planning operations, in the analysis of vast quantities of data and imagery, in protecting our people, in the lethality of our weaponry and, crucially, in keeping both our Armed Forces and innocent civilians out of harm’s way.
Take, for example, revolutionising the efficacy of CCTV and surveillance more broadly, removing the serious levels of risk in bomb or mine disposal, or refining the pinpoint accuracy of a military strike specifically to avoid collateral damage, as the noble and gallant Lord, Lord Houghton, identified. In this fast-evolving sector, as the noble Lord, Lord Hamilton, and the noble Baroness, Lady Hodgson, also rightly pointed out, it is essential that our Armed Forces are able to embrace emerging advances, drive efficiencies and maintain a technological edge over our adversaries who, noble Lords can be sure, will be pursuing the opportunity with vigour.
The MoD has established the Defence AI Centre to spearhead this critical work, bringing together experts from its strategic command centre in Northwood, its Defence Equipment and Support body in Bristol, and its science and technology laboratories near Salisbury, alongside a broad range of industry and academia: a genuine government and private sector partnership of significant potential.
The MoD also has some 250 projects either already under way or imminently starting work, and has tripled investment in artificial intelligence-specific research over the last three years, reaching more than £54 million in the last financial year. That amounts to £115 million of direct investment over the last three years, to answer the question from the noble Lord, Lord St John of Bletso.
AI is an enabling component, not a capability per se. It is contained within so many capabilities that the total investment in activities outside raw research is probably nearer £400 million.
Evidently, the potential of artificial intelligence in defence will continue to raise myriad technical, ethical and legal questions and challenges. This Government will continue to work through these judiciously, with as much transparency and consultation as possible, within the obvious national security constraints. To guide its work and its use of artificial intelligence in any form, defence is governed by clear ethical and legal principles. In June 2022, defence published its Defence AI Strategy alongside our “Ambitious, safe, responsible” policy statement, which set out those principles. We were one of the first nations to publish our approach to AI transparently in this way.
To inform our development of appropriate policies and control frameworks, we are neither complacent nor blinkered. The MoD regularly engages with a wide range of experts and civil society representatives to understand different perspectives. Equally, it takes the views expressed in this House and the other place most seriously.
To categorically reassure noble Lords, the British Ministry of Defence will always have context-appropriate human involvement and, therefore, meaningful human control, responsibility and accountability. We know, however, that other nations have not made similar commitments and may seek to use these new technologies irresponsibly. In response, we are pursuing a two-pronged approach. First, the UK is working with its allies and international partners to develop and champion a set of norms and standards for responsible military AI, grounded in the core principles of international humanitarian law. Secondly, we are working to identify and attribute any dangerous use of military AI, thereby holding those irresponsible parties to account.
I realise that the question of how and whether to define autonomous weapons systems is extremely sensitive. The noble Lords, Lord Lisvane and Lord Clement-Jones, and the noble Lord, Lord Browne of Ladyton, who is no longer in his place, have raised this matter. These systems are already governed by international humanitarian law so, unfortunately, defining them will not strengthen their lawful use. Indeed, it is foreseeable that, in international negotiations, those who wilfully disregard international law and norms could use a definition to constrain the capabilities and legitimate research of responsible nations. It is also for that reason that, after sincere and deep consideration, we do not support the committee’s call for a swift agreement of an effective international instrument on lethal autonomous weapons systems—that would be a gift to our adversaries. However, I must emphasise that this Government will work tirelessly with allies to provide global leadership on the responsible use of AI.
On the question of fully autonomous weapons, we have been clear that we do not possess fully autonomous weapons systems and have no intention of developing them. On the very serious issue of autonomous nuclear weapons, which is understandably a troubling thought, as identified by a number of noble Lords, specifically the noble Lords, Lord Lisvane and Lord Hamilton, we call on all other nuclear states to match our commitment to always maintaining human political control over nuclear capabilities.
We will continue to shape international discussions on norms and standards, remaining an active and influential participant in international dialogues to regulate autonomous weapons systems, particularly the UN group of governmental experts under the auspices of the Convention on Certain Conventional Weapons, which we believe is the most appropriate international forum to advance co-operation on these issues.
International compliance will continue to be paramount, as the noble Earl, Lord Erroll, drew attention to and the noble Lord, Lord Clement-Jones, mentioned. I will write in detail in response to the many questions he asked on this specific point; I am afraid we simply do not have the time now.
We believe the key safeguard over military AI is not a definition or document but, instead, ensuring human involvement throughout the life cycle. The noble Lords, Lord Lisvane and Lord Clement-Jones, and the noble and gallant Lord, Lord Houghton, rightly raised that issue. What that looks like in practice varies from system to system and with the environment in which the system is deployed. That means every defence activity with an AI component must be subject to rigorous planning and control by suitably trained, qualified and experienced people, to ensure that we meet our military objectives in full compliance with international humanitarian law, as well as all our other legal obligations.
This year we will publish governance, accountability and reporting mechanisms. We will build challenge into our processes, drawing on input from outside experts in the form of an independent ethics panel. The MoD accepts the committee’s recommendation to increase the transparency of that panel’s advice, and we have just published the minutes of all six previous advisory panel meetings on GOV.UK, alongside the panel’s membership and terms of reference. We are also re-examining the role of and options for the ethics advisory panel, to include the views of more external experts in sensitive cases.
The committee made a number of recommendations around expertise, training, recruitment and pay, and quite rightly so. The MoD offers some unique opportunities for people interested in national and international security, but we are far from taking this for granted. We have accepted recommendations from the Haythornthwaite review, which will be familiar to many in the House and the other place, to give new joiners the option of careers that zigzag between regulars and reserves and, importantly, between the public and private sectors.
This is, in the widest context, a highly attractive and highly competitive market, as outlined by a number of noble Lords. We are taking a range of additional steps to make defence AI an attractive and aspirational choice. We are looking at recruitment, retention and reward allowances, developing new ways to identify and incubate existing AI talent within our ranks, and developing new partnerships with private sector companies of whatever size, particularly SMEs, which are especially strong in this area, to encourage more exchanges of expertise.
I also point out that my honourable friend the Minister for Defence Procurement is alive to this issue and has been driving substantial reform through the integrated procurement model, injecting agility and challenge into a system that I think everybody accepts needs considerable work. We will also shortly appoint a capability lead for AI talent and skills to drive this work forward in partnership with the frontline commands and our enabling organisations.
The committee also made a number of eminently sensible recommendations around the testing of AI systems and operators. The MoD already has effective processes and procedures in place to ensure that new or novel military capabilities are safe and secure and operate as intended. As the noble Lord, Lord Stevens, illustrated, trials and risk appetite are important considerations. We will ensure these are reviewed and updated as necessary as we integrate AI technologies into our armoury.
The Government are committed to providing as much transparency as possible around defence AI investment to aid public and parliamentary scrutiny. However, AI is always going to be an enabling component of much broader systems and programmes. It can therefore be very difficult to isolate and quantify the cost of the AI element separately from the wider system. Nevertheless, we are exploring solutions in the medium term that may give a better picture of specific and overall AI spending and investment across defence.
In conclusion, the department welcomes the overarching conclusions of the committee and its very wise advice to “proceed with caution”. We are determined to use AI to preserve our strategic edge, but we are equally committed to doing so responsibly and in conformity with our values and obligations. Defence has a proven track record of integrating new technologies across the UK Armed Forces, and we should meet this one head-on. While we recognise that the adoption of AI will raise many new challenges, we believe that being open to challenge ourselves, including from Parliament, is an important part of the way forward.