Artificial Intelligence in Weapon Systems Committee Report Debate
Baroness Hodgson of Abinger (Conservative - Life peer)
Lords Chamber
My Lords, like others, I thank the noble Lord, Lord Lisvane, for his excellent introduction and for chairing the committee so ably. I also thank my fellow colleagues on the committee. We had some very interesting discussions, and those who were more informed were patient with people like me who were probably less informed. I also thank our advisers and the clerks, who supported us so well. This has indeed been a fascinating committee to serve on and is an example of how the House of Lords plays an outstanding role in highlighting some of the most pressing concerns of the day. My remarks are mostly personal reflections.
Whether we like AI or not, it is here to stay and is developing exponentially at a previously unimaginable rate. This complex technological revolution has the potential to reshape the nature of warfare, with advantages but also disadvantages. As the noble and gallant Lord, Lord Houghton, mentioned, today’s warfare, in a competitive, volatile and challenging world, is often conducted in the grey zone, through hyper-competition, on the internet and in so many areas of life. It raises the question: what is a weapon in today’s world? Interference with a country’s systems, be they economic, infrastructure or social, can be subtle but effective in undermining and disabling. However, with limited time to report, we confined our conversations to lethal weapon systems.
Although AI creates the ability to calculate with stupendous speed, we should be mindful that there are areas not covered by binary calculations—humanity, empathy and kindness, to name a few. Will faster analysis fuel escalation, owing to rapid response and a lack of time to consider repercussions? As others have mentioned, we can see the chilling ability to identify thousands of targets quickly, as the use of the Lavender system in Gaza reveals, with, it is reported, 20 seconds’ consideration given to each individual target.
Whatever military systems are used, we have a national commitment to the requirements of international humanitarian law, and there are huge ethical implications in relinquishing human control over lethal decision-making, with profound questions about accountability and morality. To what extent can machines be entrusted with the enormity and breadth of decision-making about life and death on the battlefield?
The MoD’s defence AI strategy, published in 2022, signalled its intention to utilise AI
“from ‘back office’ to battlespace”,
demonstrating how all-pervasive AI will be in every system. While recognising its many advantages, we also have to recognise the dangers in this strategy. Systems can be hacked, so it is equally important to develop security to ensure that they are not accessible to those who wish us harm. The strategy also sets out an autonomy spectrum framework, demonstrating the different levels of interrelationship between humans and machines.
AI is being developed mostly in companies and academic institutions. This too presents challenges: the threat of an arms race with systems that can be sold to the highest bidder, who may or may not be hostile. The majority of this development is being carried out by men but, as half the world is female and women see things in a different way, we must encourage more girls and women to play their part, to help guard against gender bias.
As the glaring example of the Post Office scandal shows, the opaque nature of AI algorithms makes it difficult to judge whether they are accurate, up to date and appropriate. However much testing is carried out, it is not easy to know for sure whether systems are reliable and accurate until they are deployed. But the reality is that there is no going back; as these systems proliferate, hostile nations and non-state actors may gain access to, interfere with and deploy their own systems, and they may not wish to conform to any international standards imposed.
I thank the Government for their response to our report and congratulate them on the AI summit held last November at Bletchley Park, which resulted in a safety-focused commitment from 28 nations and leading AI companies. However, I understand that weapon systems were not part of the conversation. It will be difficult to harness the development of this new technology as it gathers speed, so I hope that weapon systems will be part of the conversation at future summits.
Stephen Hawking once warned that the development of full AI
“could spell the end of the human race”,
so “proceed with caution” has to be the mantra with regard to AI in weapon systems.