That the Bill be now read a second time.
My Lords, I declare my technology interests as adviser to Boston Ltd. I thank all noble Lords who have signed up to speak; I eagerly anticipate all their contributions and, indeed, hearing from my noble friend the Minister. I also thank all the organisations that got in contact with me and other noble Lords for their briefings, as well as those that took time to meet me ahead of this Second Reading debate. Noble Lords and others who would like to follow this on social media can use #AIBill #AIFutures.
If we are to secure the opportunities and control the challenges of artificial intelligence, it is time to legislate and to lead. We need something that is principles-based and outcomes-focused, with inputs that are transparent, permissioned and, wherever applicable, paid for and understood.
There are at least three reasons why we should legislate on this: social, democratic and economic. On reason one, the social reason, some of the greatest benefits we could secure from AI come in this space, including truly personalised education for all, and healthcare. We saw only yesterday the exciting early results from the NHS Grampian breast-screening AI programme. Then there is mobility and net zero sustainability.
Reason two is about our democracy and jurisdiction. With 40% of the world’s democracies going to the polls this year, with deepfakes, cheap fakes, misinformation and disinformation, we are in a high-threat environment for our democracy. As our 2020 Democracy and Digital Technologies Select Committee report put it, with a proliferation of misinformation and disinformation, trust will evaporate and, without trust, democracy as we know it will simply disappear.
On our jurisdiction and system of law, the UK has a unique opportunity at this moment in time. We do not have to fear being in the first mover spotlight—the EU has taken that with its Act, in all its 892 pages. The US has had the executive order but has yet to commit fully to this phase. The UK, with our common-law tradition, respected right around the world, has such an opportunity to legislate in a way that will be adaptive, versatile and able to develop through precedent and case law.
On reason three, our economy, PwC’s AI tracker says that by 2030, there will be a 14% increase in global GDP worth $15.7 trillion. The UK must act to ensure our share of that AI boom. To take just one technology, the chatbot global market grew tenfold in just four years. The Alan Turing Institute report on AI in the public sector, which came out just this week, says that 84% of government services could benefit from AI automation in over 200 different services. Regulated markets perform better. Right-sized regulation is good for innovation and good for inward investment.
Those are the three reasons. What about three individual impacts of AI right now? What if we find ourselves on the wrong end of an AI decision in a recruitment shortlisting, the wrong end of an AI decision in being turned down for a loan, or, even worse, the wrong end of an AI decision when awaiting a liver transplant? All these are illustrations of AI impacting individuals, often when they would not even know that AI was involved. We need to put paid to the myth, the false dichotomy, that you must have heavy, rules-based regulation or a free hand—that we have to pay tribute to the cry of the frontierists in every epoch: “Don’t fence me in”. Right-sized regulation is good socially, democratically and economically. Here is the thing: AI is to human intellect what steam was to human strength. You get the picture. Steam literally changed time. It is our time to act, and that is why I bring this Bill to your Lordships’ House today.
In constructing the Bill, I have sought to consult widely, to be very cognisant of the Government’s pro-innovation White Paper, of all the great work of BCS, technology, industry, civil society and more. I wanted the Bill to be threaded through with the principles of transparency and trustworthiness; inclusion and innovation; interoperability and international focus; accountability and assurance.
Turning to the clauses, Clause 1 sets up an AI authority. Lest any noble Lord suddenly feels that I am proposing a do-it-all, huge, cumbersome regulator, I am most certainly not. In many ways, it would not be much bigger in scope than what the DSIT unit is proposing: an agile, right-sized regulator, horizontally focused to look across all existing regulators, not least the economic regulators, to assess their competency to address the opportunities and challenges presented by AI and to highlight the gaps. And there are gaps, as rightly identified by the excellent Ada Lovelace Institute report. For example, where do you go if you are on the wrong end of that AI recruitment shortlisting decision? It must have the authority, similarly, to look across all relevant legislation—consumer protection and product safety, to name but two—to assess its competency to address the challenges and opportunities presented by AI.
The AI authority must have at its heart the principles set out in Clause 2: it must be not just the custodian of those principles, but a very lighthouse for them, and it must have an educational function and a pro-innovation purpose. Many of those principles will be very recognisable; they are taken from the Government’s White Paper but put on a statutory footing. If they are good enough to be in the White Paper, we should commit to them, believe in them and know that they will be our greatest guides for the positive path forward, when put in a statutory framework. We must have everything inclusive by design, and with a proportionality thread running through all the principles, so none of them can be deployed in a burdensome way.
Clause 3 concerns sandboxes, so brilliantly developed in the UK in 2016 with the fintech regulatory sandbox. If you want a measure of its success, it is replicated in well over 50 jurisdictions around the world. It enables innovation in a safe, regulated, supported environment: real customers, real market, real innovations, but in a splendid sandbox concept.
Clause 4 sets up the AI responsible officer, to be conceived of not as a person but as a role, to ensure the safe, ethical and unbiased deployment of AI in her or his organisation. It does not have to be burdensome, or a whole person in a start-up; but that function needs to be performed, with reporting requirements under the Companies Act that are well understood by any business. Again, crucially, it must be subject to that proportionality principle.
Clause 5 concerns labelling and IP, which is such a critical part of how we will get this right with AI. Labelling: so that if anybody is subject to a service or a good where AI is in the mix, it will be clearly labelled. AI can be part of the solution to providing this labelling approach. Where IP or third-party data is used, that has to be reported to the AI authority. Again, this can be done efficiently and effectively using the very technology itself. On the critical question of IP, I met with 25 organisations representing tens of thousands of our great creatives: the people that make us laugh, make us smile, challenge us, push us to places we never even knew existed; those who make music, such sweet music, where otherwise there may be silence. It is critical to understand that they want to be part of this AI transformation, but in a consented, negotiated, paid-for manner. As Dan Guthrie, director-general of the Alliance for Intellectual Property, put it, it is extraordinary that businesses together worth trillions take creatives’ IP without consent and without payment, while fiercely defending their own intellectual property. This Bill will change that.
Clause 6 concerns public engagement. For me, this is probably the most important clause in the Bill, because without public engagement, how can we have trustworthiness? People need to be able to ask, “What is in this for me? Why should I care? How is this impacting my life? How can I get involved?” We need to look at innovative ways to consult and engage. A good example, in Taiwan, is the Alignment Assemblies, but there are hundreds of novel approaches. Government consultations should have millions of responses, because this is both desirable and now, with the technology, analysable.
Clause 7 concerns interpretation. At this stage, I have drawn the definitions of AI deliberately broadly. We should certainly debate this, but as set out in Clause 7, much would and should be included in those definitions.
Clause 8 sets out the potential for regulating for offences and fines thereunder, to give teeth to so much of what I have already set out and, rightly, to pay the correct respect to all the devolved nations. So, such regulations would have to go through the Scottish Parliament, Senedd Cymru and the Northern Ireland Assembly.
That brings us to Clause 9, the final clause, which makes this a UK-wide Bill.
So, that is the Bill. We know how to do this. Just last year, the Electronic Trade Documents Act showed that we know how to legislate for the possibilities of these new technologies; and, my word, we know how to innovate in the UK—Turing, Lovelace, Berners-Lee, Demis at DeepMind, and so many more.
If we know how to do this, why are we not legislating? What will we know in, say, 12 months’ time that we do not know now about citizens’ rights, consumer protection, IP rights, being pro-innovation, labelling and the opportunity to transform public engagement? We need to act now, because we know what we need to know—if not now, when? The Bletchley summit last year was a success. Understandably, it focused on safety, but having done that it is imperative that we stand up all the other elements of AI already impacting people’s lives in so many ways, often without their knowledge.
Perhaps the greatest and finest learning from Bletchley is not so much the safety summit but what happened there two generations before, when a diverse team of talent gathered and deployed the technology of their day to defeat the greatest threat to our civilisation. Talent and technology brought forth light in one of the darkest hours of human history. As it was in Bletchley in the 1940s, so it is in the United Kingdom in the 2020s. It is time for human-led, human-in-the-loop, principle-based artificial intelligence. It is time to legislate and to lead; for transparency and trustworthiness, inclusion and innovation, interoperability and international focus, accountability and assurance; for AI developers, deployers and democracy itself; for citizens, creatives and our country—our data, our decisions, #ourAIfutures. That is what this Bill is all about. I beg to move.
My Lords, there can have been few Private Members’ Bills that have set out to address such towering issues as this Bill from the noble Lord, Lord Holmes of Richmond. He has been an important voice on the opportunities and challenges arising from generative AI in this House and outside it. This Bill and his powerful introduction to it are only the latest contributions to the vital issue of regulating AI to ensure that the social and financial interests and security of consumers are protected as a first priority.
The noble Lord also contributed to a wide-ranging discussion on the regulation of AI in relation to misinformation and disinformation convened by the Thomson Foundation, of which, as recorded in the register, I am chair. Disinformation in news has existed and grown as a problem since long before the emergence of generative AI, but each iteration of AI makes the disinformation that human bad actors promote even harder to detect and counteract.
This year a record number of people in the world will go to the polls in general elections, as the noble Lord said. The Thomson Foundation has commissioned research into the incidence of AI-fuelled disinformation in the Taiwanese presidential elections in mid-January, conducted by Professor Chen-Ling Hung of National Taiwan University. Your Lordships may not be surprised that the preliminary conclusions of the work, which will be continued in relation to other elections, confirm the concerns that the noble Lord voiced in his introduction. Generative AI’s role in exacerbating misinformation and disinformation in news and the impact that can have on the democratic process are hugely important, but this is only one of a large number of areas where generative AI is at the same time an opportunity and a threat.
I strongly support this well-judged and balanced Bill, which recognises the fast-changing, dynamic nature of this technology—Moore’s law on steroids, as I have previously suggested—and sets out a logical and coherent role for the proposed AI authority, bringing a transparency and clarity to the regulation of AI for its developers and users that is currently lacking.
I look forward to the Minister’s winding up, but with my expectations firmly under control. The Prime Minister’s position seems incoherent. On the one hand he says that generative AI poses an existential threat and on the other that no new regulatory body is needed and the technology is too fast-moving for a comprehensive regulatory framework to be established. That is a guarantee that we will be heaving to close a creaking stable door as the thoroughbred horse disappears over the horizon. I will not be surprised to hear the Minister extol the steps taken in recent months, such as the establishment of the AI unit, as a demonstration that everything is under control. Even if these small initiatives are welcome, they fall well short of establishing the transparency and clarity of regulation needed to engender confidence in all parties—consumers, employers, workers and civil society.
If evidence is needed to make the case for a transparent, well-defined regulatory regime rather than ad hoc, fragmented departmental action, the Industry and Regulators Committee, of which I am privileged to be a member, today published a letter to the Secretary of State for Levelling Up about the regulation of property agents. Five years ago, a working group chaired by the noble Lord, Lord Best, recommended that the sector should be regulated, yet despite positive initial noises from the Government, nothing has happened. Even making allowance for the impact of the pandemic during this time, this does not engender confidence in their willingness and ability to grasp regulatory nettles in a low-tech industry, let alone in a high-tech one.
It is hard not to suspect that this reflects an ideological suspicion within the Conservative Party that regulation is the enemy of innovation and economic success rather than a necessary condition, which I believe it is. Evidence to the Industry and Regulators Committee from a representative of Merck confirmed that the life sciences industry thrives in countries where there is strong regulation.
I urge the Government to give parliamentary time to this Bill to allow it to go forward. I look forward to addressing its detailed issues then.
My Lords, one of the advantages of sitting every day between my noble friends Lord Holmes and Lord Kirkhope is that their enthusiasm for a subject on which they have a lot of knowledge and I have very little passes by a process of osmosis along the Bench. I commend my noble friend on his Bill and his speech. I will add a footnote to it.
My noble friend’s Bill is timely, coming after the Government published their consultation outcome last month, shortly after the European Commission published its Artificial Intelligence Act and as we see how other countries, such as the USA, are responding to the AI challenge. Ideally, there should be some global architecture to deal with a phenomenon that knows no boundaries. The Prime Minister said as much in October:
“My vision, and our ultimate goal, should be to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe”.
However, we only have to look at the pressures on existing international organisations, like the United Nations and the WTO, to see that that is a big ask. There is a headwind of protectionism, and at times nationalism, making collaboration difficult. It is not helped by the world being increasingly divided between democracies and autocracies, with the latter using AI as a substitute for conventional warfare.
The most pragmatic approach, therefore, is to go for some lowest common denominators, building on the Bletchley Declaration which talks about sharing responsibility and collaboration. We want to avoid regulatory regimes that are incompatible, which would lead to regulatory arbitrage and difficulties with compliance.
The response to the consultation refers to this in paragraphs 71 and 72, stating:
“the intense competition between companies to release ever-more-capable systems means we will need to remain highly vigilant to meaningful compliance, accountability, and effective risk mitigation. It may be the case that commercial incentives are not always aligned with the public good”.
It concludes:
“the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured”.
My noble friend’s Private Member’s Bill is a heroic first shot at what that legislation might look like. To simplify, there is a debate between top-down, as set out in the Bill, and bottom-up, as set out in the Government’s response, delegating regulation to individual regulators with a control function in DSIT. At some point, there will have to be convergence between the two approaches.
There is one particular clause in my noble friend’s Bill that I think is important: Clause 1(2)(c), which states that the function of the AI authority is to,
“undertake a gap analysis of regulatory responsibilities in respect of AI”.
The White Paper and the consultation outcome have numerous references to regulators. What I was looking for and never found was a list of all our regulators, and what they regulate. I confess I may have missed it, but without such a comprehensive list of regulators and what they regulate, any strategy risks being incomplete because we do not have a full picture.
My noble friend mentioned education. We have a shortage of teachers in many disciplines, and many complain about paperwork and are thinking of leaving. There is a huge contribution to be made by AI. But who is in charge? If you put the question into Google, it says,
“the DFE is responsible for children’s services and education”.
Then there is Ofsted, which inspects schools; there is Ofqual, which deals with exams; and then there is the Office for Students. The Russell Group of universities has signed up to a set of principles ensuring that students would be taught to become AI literate.
Who is looking at the huge volume of material which AI companies are drowning schools and teachers with, as new and more accessible chatbots are developed? Who is looking at AI for marking homework? What about AI for adaptive testing? Who is looking at AI being used for home tuition, as increasingly used by parents? Who is looking at AI for marking papers? As my noble friend said, what happens if they get it wrong?
The education sector is trying to get a handle on this technological maelstrom and there may be some bad actors in there. However, the same may be happening elsewhere because the regulatory regimes lack clarity. Hence, should by any chance my noble friend’s Bill not survive in full, Clause 1(2)(c) should.
My Lords, I also warmly support the Bill introduced by the noble Lord, Lord Holmes of Richmond. I support it because it has the right balance of radicalism to fit the revolution in which we are living. I will look at it through eight points—that may be ambitious in five minutes, but I think I can do it.
There is a degree of serious common ground. First, we need fair standards to protect the public. We need to protect privacy, security, human rights and intellectual property, and to protect against fraud. We also need, however, to protect rights of access to things such as data and to the processes by which artificial intelligence makes decisions in respect of you. An enforcement system is needed to make that work. If we have that, we do not need the elaborate mechanism of the EU, which regulates individual products.
Secondly, it is clear there has to be a consistency of standards. We cannot have one rule for one market, and one rule for another market. If you look back at the 19th century, when we underwent the last massive technological revolution, the courts sometimes made the mistake of fashioning rules to fit individual markets. That was an error, and that is why we need to look at it comprehensively.
Thirdly, we have got to protect innovation. I believe that is common ground, but the points to which I shall come in a moment show the difficulties.
Fourthly, we have got to produce a system that is interoperable. The noble Lord, Lord Holmes, referred to the trade documents Bill, which was the product of international development. We have adapted the common law to fit it, and other countries’ systems will do the same. That is a sine qua non.
I believe all those points are common ground, but I now come to four points that I do not think are common ground. The first is simplicity. When you look at Bills in this House, I sometimes feel we are making the law unworkable by its complexity. There can be absolutely no doubt that regulation is becoming unworkable because of the complexity. I can quite understand why innovators are horrified at the prospect of regulation, but they have got the wrong kind of regulation. They have got what we have created, unfortunately; it is a huge burden and is not based on simplicity and principles. If we are to persuade people to regulate, we need a radically different approach, and this Bill brings it about.
Secondly, there needs to be transparency and accountability. I do not believe that doing this through a small body within a ministry is the right way; it has to be done openly.
Thirdly—and this is probably highly controversial—when you look at regulation, our idea is of the statutory regulator with its vast empire created. Do we need that? Look back at the 19th century: the way in which the country developed was through self-regulation supported by the courts, Parliament and government. We need to look at that again. I see nothing wrong with self-regulation. It has acquired a shocking name, as a result of what happened in the financial markets at the turn of the century, but I believe that we should look at it again. Effective self-regulation can be good regulation.
Finally, the regulator must be independent. There is nothing inconsistent between self-regulation and independence.
We need a radical approach, and the Bill gives us that. No one will come here if we pretend we are going to set up a regulator—like the financial markets regulator, the pensions regulator and so on—because people will recoil in horror. If we have this Bill, however, with its simplicity and emphasis on comprehensiveness, we can do it. By saying that, it seems to me that the fundamental flaw in what the Government are doing is leaving the current regulatory system in place. We cannot afford to do that. We need to face the new industrial revolution with a new form of regulation.
My Lords, it is a great pleasure to follow the noble and learned Lord, Lord Thomas, and his interesting speech. I remind noble Lords that the Communications and Digital Committee, which I have the privilege to chair, published our report Large Language Models and Generative AI only last month. For anyone who has not read it, I wholeheartedly recommend it, and I am going to draw heavily on it in my speech.
It is a pleasure to speak in a debate led by my noble friend Lord Holmes, and I congratulate him on all that he does in the digital and technology space. As he knows, I cannot support his Bill because I do not agree with the concept of an AI authority, but I have listened carefully to the arguments put forward by the noble and learned Lord, Lord Thomas, a moment ago. But neither would I encourage the Government to follow the Europeans and rush to develop overly specific legislation for this general-purpose technology.
That said, there is much common ground on which my noble friend and I can stand when it comes to our ambitions for AI, so I will say a little about that and where I see danger with the Government’s current approach to this massively important technological development.
As we have heard, AI is reshaping our world. Some of these changes are modest, and some are hype, but others are genuinely profound. Large language models in particular have the potential to fundamentally reshape our relationship with machines. In the right hands, they could drive huge benefits to our economy, supporting ground-breaking scientific research and much more.
I agree with my noble friend Lord Holmes about how we should approach AI. It must be developed and used to benefit people and society, not just big tech giants. Existing regulators must be equipped and empowered to hold tech firms to account as this technology operates in their own respective sectors, and we must ensure that there are proper safety tests for the riskiest models.
That said, we must maintain an open market for AI, and so any testing must not create barriers to entry. Indeed, one of my biggest fears is an even greater concentration of power among the big tech firms and repeating the same mistakes which led to a single firm dominating search, no UK-based cloud service, and a couple of firms controlling social media. Instead, we must ensure that generative AI creates new markets and, if possible, use it to address the existing market distortions.
Our large language model report looked in detail at what needs to happen over the next three years to catalyse AI innovation responsibly and mitigate risks proportionately. The UK is well-placed to be among the world leaders of this technology, but we can only achieve that by being positive and ambitious. The recent focus on existential sci-fi scenarios has shifted attention towards too narrow a view of AI safety. On its own, a concentration on safety will not deliver the broader capabilities and commercial heft that the UK needs to shape international norms. However, we cannot keep up with international competitors without more focus on supporting commercial opportunities and academic excellence. A rebalance in government strategy and a more positive vision is therefore needed. The Government should improve access to computing power, increase support for digital, and do more to help start-ups grow out of university research.
I do not wish to downplay the risks of AI. Many need to be addressed quickly—for example, cyberattacks and synthetic child sexual abuse, as well as bias and discrimination, which we have already heard about. The Government should scale up existing mitigations, and ensure industry improves its own guard-rails. However, the overall point is about balance. Regulation should be thoughtful and proportionate, to catalyse rather than stifle responsible innovation, otherwise we risk creating extensive rules that end up entrenching incumbents’ market power, and we throttle domestic industry in the process. Regulatory capture is a real danger that our inquiry highlighted.
Copyright is another danger, and this is where there is a clear case for government action now. The point of copyright is to reward innovation, yet tech firms have been exploiting rights holders by using works without permission or payment. Some of that is starting to change, and I am pleased to see some firms now striking deals with publishers. However, these remain small steps, and the fundamental question about respecting copyright in the first place remains unresolved.
The role for government here is clear: it should endorse the principles behind copyright and uphold fair play, and should then update legislation. Unfortunately, the current approach remains unclear and inadequate. It has abandoned the IPO-led process, but apparently without anything more ambitious in its place. I hope for better news in the Government’s response to our report, expected next month, and it would be better still if my noble friend the Minister could say something reassuring today.
In the meantime, I am grateful to my noble friend Lord Holmes for providing the opportunity to debate such an important topic.
My Lords, I too congratulate the noble Lord, Lord Holmes, on his wonderful speech. I declare my interests as an adviser to the Oxford Institute for Ethics in AI and the UN Secretary-General’s AI Advisory Body.
When I read the Bill, I asked myself three questions. Do we need an AI regulation Bill? Is this the Bill we need? What happens if we do not have a Bill? It is arguable that it would be better to deal with AI sector by sector—in education, the delivery of public services, defence, media, justice and so on—but that would require an enormous legislative push. Like others, I note that we are in the middle of a legislative push, with digital markets legislation, media legislation, data protection legislation and online harms legislation, all of which resolutely ignore both existing and future risk.
The taxpayer has been asked to make a £100 million investment in launching the world’s first AI safety institute, but as the Ada Lovelace Institute says:
“We are concerned that the Government’s approach to AI regulation is ‘all eyes, no hands’”,
with plenty of “horizon scanning” but no
“powers and resources to prevent those risks or even to react to them effectively after the fact”.
So yes, we need an AI regulation Bill.
Is this the Bill we need? Perhaps I should say to the House that I am a fan of the Bill. It covers testing and sandboxes, it considers what the public want, and it deals with a very important specific issue that I have raised a number of times in the House, in the form of creating AI-responsible officers. On that point, the CEO of the International Association of Privacy Professionals came to see me recently and made an enormously compelling case that, globally, we need hundreds of thousands of AI professionals, as the systems become smarter and more ubiquitous, and that those professionals will need standards and norms within which to work. He also made the case that the UK would be very well-placed to create those professionals at scale.
I have a couple of additions, unless the Minister is going to make a surprise announcement and take the Bill on in full. Under Clause 2, which sets out regulatory principles, I would like to see consideration of children’s rights and development needs; employment rights, concerning both management by AI and job displacement; a public interest case; and more clarity that material that is an offence—such as creating viruses, CSAM or inciting violence—is also an offence, whether created by AI or not, with specific responsibilities that accrue to users, developers and distributors.
The Stanford Internet Observatory recently identified hundreds of known images of child sexual abuse material in an open dataset used to train popular AI text-to-image models, saying:
“It is challenging to clean or stop the distribution of publicly distributed datasets as it has been widely disseminated. Future datasets could use freely available detection tools to prevent the collection of known CSAM”.
The report illustrates that it is very possible to remove such images, but that those who compiled the dataset did not bother, and now those images are proliferating at scale.
We need to have rules upon which AI is developed. It is poised to transform healthcare, both diagnosis and treatment. It will take the weight out of some of the public services we can no longer afford, and it will release money to make life better for many. However, it brings forward a range of dangers, from fake images to lethal autonomous weapons and deliberate pandemics. AI is not a case of good or bad; it is a question of uses and abuses.
I recently hosted Geoffrey Hinton, whom many will know as the “godfather of AI”. His address to parliamentarians was as chilling as it was compelling, and he put timescales on the outcomes that leave no time to wait. I will not stray into his points about the nature of human intelligence, but he was utterly clear that the concentration of power, the asymmetry of benefit and the control over resources—energy, water and hardware—needed to run these powerful systems would be, if left until later, in so few hands that they, and not we, would be doing the rule setting.
My final question is: if we have no AI Bill, can the Government please consider putting the content of the AI regulation Bill into the data Bill currently passing through Parliament and deal with it in that way?
My Lords, I thought that this would be one of the rare debates where I did not have an interest to declare, but then I heard the noble Lord, Lord Young, talking about AI and education and realised that I am a paid adviser to Common Sense Media, a large US not-for-profit that campaigns for internet safety and has published the first ever ratings of AI applications used in schools. I refer the noble Lord to its excellent work in this area.
It is a pleasure to speak in the debate on this Bill, so ably put forward by the noble Lord, Lord Holmes. It is pretty clear from the reaction to his speech how much he is admired in this House for his work on this issue and so many others to do with media and technology, where he is one of the leading voices in public affairs. Let me say how humbling it is for me to follow the noble Baronesses, Lady Stowell and Lady Kidron, both of whom are experts in this area and have done so much to advance public policy.
I am a regulator and in favour of regulation. I strongly supported the Online Safety Act, despite the narrow byways and culs-de-sac it ended up in, because I believe that platforms and technology need to be accountable in some way. I do not support people who say that the job is too big to be attempted—we must attempt it. What I always say about the Online Safety Act is that the legislation itself is irrelevant; what is relevant is the number of staff and amount of expertise that Ofcom now has, which will make it one of the world’s leaders in this space.
We talk about AI now because it has come to the forefront of consumers’ minds through applications such as ChatGPT, but large language models and the use of AI have been around for many years. As AI becomes ubiquitous, it is right that we now consider how we could or should regulate it. Indeed, with the approaching elections, not just here in the UK but in the United States and other areas around the world, we will see the abuse of artificial intelligence, and many people will wring their hands about how on earth to cope with the plethora of disinformation that is likely to emerge.
I am often asked at technology events, which I attend assiduously, what the Government’s policy is on artificial intelligence. To a certain extent I have to make it up, but to a certain extent I think that, broadly speaking, I have it right. On the one hand, there is an important focus on safety for artificial intelligence to make it as safe as possible for consumers, which in itself begs the question of whether that is possible; on the other, there is a need to ensure that the UK remains a wonderful place for AI innovation. We are rightly proud that DeepMind, although owned by Google, wishes to stay in the UK. Indeed, in a tweet yesterday the Chancellor himself bigged up Mustafa Suleyman for taking on the role of leading AI at Microsoft. It is true that the UK remains a second-tier nation in AI after China and the US, but it is the leading second-tier nation.
The question now is: what do we mean by regulation? I do not necessarily believe that now is the moment to create an AI safety regulator. I was interested to hear the contribution of the noble and learned Lord, Lord Thomas, who referred to the 19th century. I refer him to the late 20th century and the early 21st century: the internet itself has long been self-regulated, at least in terms of the technology and the global standards that exist, so it is possible for AI to proceed largely on the basis of self-regulation.
The Government’s approach to regulation is the right one. We have, for example, the Digital Regulation Cooperation Forum, which brings together all the regulators that either obviously, such as Ofcom, or indirectly, such as the FCA, have skin in the game when it comes to digital. My specific request to the Minister is to bring the House up to date on the work of that forum and how he sees it developing.
I was surprised by the creation of the AI Safety Institute as a stand-alone body with such generous funding. It seems to me that the Government do not need legislation to do an examination of the plethora of bodies that have sprung up over the last 10 or 15 years. Many of them do excellent work, but where their responsibilities begin and end is confusing. They include the Ada Lovelace Institute, the Alan Turing Institute, the AI Safety Institute, Ofcom and DSIT, but how do they all fit together into a clear narrative? That is the essential task that the Government must now undertake.
I will pick up on one remark that the noble Baroness, Lady Stowell, made. While we look at the flashy stuff, if you like, such as disinformation and copyright, she is quite right to say that we have to look at the picks and shovels as AI becomes more prevalent and as the UK seeks to maintain our lead. Boring but absolutely essential things such as power networks for data centres will be important, so they must also be part of the Government’s task.
My Lords, like other Members, I congratulate the noble Lord, Lord Holmes, on what he has been doing.
The general public have become more aware of AI in very recent times, but it is nothing new; people have been working on it for decades. Because it is reaching such a critical mass and getting into all the different areas of our lives, it is now in the public mind. While I do not want to get into the minutiae of the Bill—that is for Committee—speaking as a non-expert, I think that the general public are now at a stage where they have a right to know what legislators think. Given the way things have developed in recent years, the Government cannot stand back and do nothing.
Like gunpowder, AI cannot be uninvented. The growing capacity of chips and other developments, and the power of a limited number of companies around the world, ensure that such a powerful tool will now be in the hands of a very small number of corporations. The Prime Minister took the lead last year and indicated that he wished to see the United Kingdom as a world leader in this field, and he went to the United States and other locations. I rather feel that we have lost momentum and that nothing is currently happening that ought to be happening.
As with all developments, there are opportunities and threats. In the last 24 hours, we have seen both. As the noble Lord, Lord Holmes, pointed out, studies on breast cancer were published yesterday, showing that X-rays, CT scans, et cetera were interpreted more accurately by AI than by humans. How many cases have we had in recent years of tests having to be recalled by various trusts, causing anxiety and stress for thousands upon thousands of patients? It is perfectly clear that, in the field of medicine alone, AI could not only improve treatment rates but relieve people of a lot of the anxieties that such inaccuracies cause. We also saw the threats on our television screens last night. As the noble Lord referred to, a well-known newscaster showed that she had been depicted by AI in a porn movie—she had it there on the screen for us all to see. So you can see the threats as well as the opportunities.
So the question is: can Parliament, can government, stand by and just let things happen? I believe that the Government cannot idly stand by. We have an opportunity to lead. Yes, we do not want to create circumstances where we suffocate innovation. There is an argument over regulation between what the noble Viscount, Lord Chandos, said, what the noble and learned Lord, Lord Thomas, said, and what I think the Government’s response will be. However, bolting bits on to existing regulators is not necessarily the best way of doing business. You need a laser focus on this and you need the people with the capacity and the expertise. They are not going to be widely available and, if you have a regulator with too much on its agenda, the outcome will be fairly dilute and feeble.
In advance of this, I said to the Minister that we have all seen the “Terminator” movies, and I am sure that the general public have seen them over the years. The fact is that it is no longer as far-fetched as it once was. I have to ask the Minister: what is our capacity to deal with hacking? If it gets into weapons systems, never mind utilities, one can see straight away a huge potential for danger.
So, once again, we are delighted that the Bill has been brought forward. I would like to think that ultimately the Government will take this over, because that is the only way that it will become law, and it does need refinement. A response from the Minister, particularly on the last point, which creates huge anxiety, would be most beneficial.
My Lords, I too am very grateful to the noble Lord, Lord Holmes of Richmond, for introducing this important Artificial Intelligence (Regulation) Bill. In my contribution today, I will speak directly to the challenges and threats posed to visual artists by generative AI and to the need for regulatory clarity to enable artists to explore the creative potential of AI. I declare my interest as having a background in the visual arts.
Visual artists have expressed worries, as have their counterparts in other industries and disciplines, about their intellectual property being used to train AI models without their consent, credit or payment. In January 2024, lists containing the names of more than 16,000 non-consenting artists whose works were allegedly used to train the Midjourney generative AI platform were accidentally leaked online, intensifying the debate on copyright and consent in AI image creation even further.
The legality of using human artists’ work to train generative AI programmes remains unclear, but disputes over documents such as the Midjourney style list, as it became known, provide insight into the real procedures involved in turning copyrighted artwork into AI reference material. These popular AI image-generator models are extremely profitable for their owners, the majority of whom are situated in the United States. Midjourney was valued at around $10.5 billion in 2022. It stands to reason that, if artists’ IP is being used to train these models, it is only fair that they be compensated, credited and given the option to opt out.
DACS, the UK’s leading copyright society for artists, of which I am a member, conducted a survey that received responses from 1,000 artists and their representatives, 74% of whom were concerned about their own work being used to train AI models. Two-thirds of artists cited ethical and legal concerns as a barrier to using such technology in their creative practices. DACS also heard first-hand accounts of artists who found that working creatively with AI has its own set of difficulties, such as the artist who made a work that included generative AI and wanted to distribute it on a well-known global platform. The platform did not want the liabilities associated with an unregistered product, so it asked for the AI component to be removed. If artists are deterred from using AI or face legal consequences for doing so, creativity will suffer. There is a real danger that artists will miss out on these opportunities, which would worsen their already precarious financial situation and challenging working conditions.
In the same survey, artists expressed fear that human-made artworks will have no distinctive or unique value in the marketplace in which they operate, and that AI may thereby render them obsolete. One commercial photographer said, “What’s the point of training professionally to create works for clients if a model can be trained on your own work to replace you?” Artists rely on IP royalties to sustain a living and invest in their practice. UK artists are already low-paid and two-thirds are considering abandoning the profession. Another artist remarked in the survey, “Copyright makes it possible for artists to dedicate time and education to become a professional artist. Once copyright has no meaning any more, there will be no more possibility to make a living. This will be detrimental to society as a whole”.
It is therefore imperative that we protect their copyright and provide fair compensation to artists whose works are used to train artificial intelligence. While the Bill references IP, artists would have welcomed a specific clause on remuneration and an obligation for owners of copyright material used in AI training to be paid. To that end, it is therefore critical to maintain a record of every work that AI applications use, particularly to validate the original artist’s permission. It is currently not required by law to reveal the content that AI systems are trained on. Record-keeping requirements are starting to appear in regulatory proposals related to AI worldwide, including those from China and the EU.
The UK ought to adopt a similar mandate requiring companies using material in their AI systems to keep track of the works that they have learned from and ingested. To differentiate AI-generated images from human-made compositions, the Government should make sure that any commercially accessible AI-generated works are branded as such. As the noble Lord, Lord Holmes, has already mentioned, labelling shields consumers from false claims about what is and is not AI-generated. Furthermore, given that many creators work alone, every individual must have access to clear, appropriate redress mechanisms so that they can meaningfully challenge situations where their rights have been misused. Having said that, I welcome the inclusion in the Bill that the use of any training data must be preceded by informed consent. This measure will go some way to safeguarding artists’ copyright and providing them with the necessary agency to determine how their work is used in training, and on what terms.
In conclusion, I commend the noble Lord, Lord Holmes, for introducing this Bill, which will provide much-needed regulation. Artists themselves support these measures, with 89% of respondents to the DACS survey expressing a desire for more regulation around AI. If we want artists to use AI and be creative with new technology, we need to make it ethical and viable.
My Lords, I join other noble Lords in commending the noble Lord, Lord Holmes, for bringing forward this Bill.
I come to this debate with the fundamental belief that supporting innovation and investment must be embedded in all regulation, but even more so in the regulation of artificial intelligence. After all, this wave of artificial intelligence is being billed as a catalyst that could propel economic growth and human progress for decades to come. The United Kingdom should not miss this supercycle and the promise of a lengthy period of economic expansion—the first of its kind since globalisation and deregulation 40 years ago.
With this in mind, in reading the AI regulation Bill I am struck by the weight of emphasis on risk mitigation, as opposed to innovation and investment. I recognise that, right at this moment, the Government, through other routes, including the pro-innovation stance that we have talked about, are looking into innovation and investment. Even so, I feel that, on balance, the weight here is more on risk mitigation than innovation. I am keen that, in the drafting and execution of the artificial intelligence authority’s mandate in particular, and in the evolution of this Bill in general, the management of risk does not deter investment in this game-changing innovation.
I am of course reassured that innovation and opportunity are mentioned at least twice in the Bill. For example, Clause 6(a) signals that the public engagement exercise will consider
“the opportunities and risks presented by AI”.
Perhaps more pointedly, Clause 1(2)(e) states that the functions of the AI authority are to include support for innovation. However, this mandate is at best left open to interpretation and at worst downgrades the importance and centrality of innovation.
My concern is that the new AI authority could see support for innovation as a distant or secondary objective, and that risk-aversion and mitigation become the cultural bedrock of the organisation. If we were to take too heavy-handed a risk-mitigation approach to AI, what opportunities could be missed? In terms of economic growth, as my noble friend Lord Holmes mentioned, PricewaterhouseCoopers estimates that AI could contribute more than $15 trillion to the world economy by 2030. In this prevailing era of slow economic growth, AI could meaningfully alter the growth trajectory.
In terms of business, AI could spur a new start-up ecosystem, creating a new generation of small and medium-sized enterprises. Furthermore, to underscore this point, AI promises to boost productivity gains, which could help generate an additional $4.4 trillion in annual profits, according to a 2023 report by McKinsey. To place this in context, this annual gain is nearly one and a half times the UK’s annual GDP.
On public goods such as education and healthcare, the Chancellor in his Spring Budget a few weeks ago indicated the substantial role that a technology upgrade, including the use of AI, could play in improving delivery and access and in unlocking up to £35 billion of savings.
Clearly, a lot is at stake. This is why it is imperative that this AI Bill, and the way it is interpreted, strikes the right balance between mitigating risk and supporting investment and innovation.
I am very much aware of the perennial risks of malevolent state actors and errant new technologies, and thus, the need for effective regulation is clear, as the noble and learned Lord, Lord Thomas, stressed. This is unambiguous, and I support the Bill. However, we must be alert to the danger of regulation becoming a synonym for risk-management. This would overshadow the critical regulatory responsibility of ensuring a competitive environment in which innovation can thrive and thereby attract investment.
My Lords, I guarantee that this is not an AI-generated speech. Indeed, Members of the House might decide after five minutes that there is not much intelligence of any kind involved in its creation. Be that as it may, we on these Benches have engaged extensively with the impacts and implications of new technologies for years—from contributions to the Warnock committee in the 1980s through to the passage of the Online Safety Bill through this House last year. I am grateful to the noble Lord, Lord Holmes, for this timely and thoughtful Bill and for his brilliant introduction to it. Innovation must be enthusiastically encouraged, as the noble Baroness, Lady Moyo, has just reminded us. It is a pleasure to follow her.
That said, I will take us back to first principles for a moment: to Christian principles, which I hope all of good will would want to support. From these principles arise two imperatives for regulation and governance, whatever breakthroughs new technologies enable. The first is that a flourishing society depends on respecting human dignity and agency. The more any new tool threatens such innate dignity, the more carefully it should be evaluated and regulated. The second imperative is a duty of government, and all of us, to defend and promote the needs of the nation’s weak and marginalised—those who cannot always help themselves. I am not convinced that the current pro-innovation and “observe first, intervene later” approach to AI gets this perennial balance quite right. For that reason, I support the ambitions outlined in the Bill.
There are certainly aspects of last year’s AI White Paper that get things in the right order: I warmly commend the Government for including fairness, accountability and redress among the five guiding principles going forward. Establishing an AI authority would formalise the hub-and-spoke structure the Government are already putting in place, with the added benefit of shifting from a voluntary to a compulsory basis, and an industry-funded regulatory model of the kind the Online Safety Act is beginning to implement.
The voluntary code of practice on which the Government’s approach currently depends is surely inadequate. The track record of the big tech companies that developed the AI economy and are now training the most powerful AI models shows that profit trumps users’ safety and well-being time and again. “Move fast and break things” and “act first, apologise later” remains the lodestar. Sam Altman’s qualities of character and conduct while at the helm of OpenAI have come under considerable scrutiny over the last few months. At Davos in January this year, the Secretary-General of the United Nations complained:
“Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy, and social impact.”
How can it be right that the richest companies in history have no mandatory duties to financially support a robust safety framework? Surely, it should not be for the taxpayer alone to shoulder the costs of an AI digital hub to find and fix gaps that lead to risks or harm. Why should the taxpayer shoulder the cost of providing appropriate regulatory sandboxes for testing new product safety?
The Government’s five guiding principles are a good guide for AI, but they need legal powers underpinning them and the sharpened teeth of financial penalties for corporations that intentionally flout best practice, to the clear and obvious harm of consumers.
I commend the ambitions of the Bill. A whole-system, proportional and legally enforceable approach to regulating AI is urgently needed. Balancing industry’s need to innovate with its duty to respect human dignity and the vulnerable in society is vital if we are safely to navigate the many changes and challenges not just over the horizon but already in plain sight.
My Lords, I speak not as an expert in AI but as a user, and I make no apology for the fact that I use it to do my work here in this Chamber. Your Lordships can form your own judgment as to which bits of my following remarks were written by me, and which are from ChatGPT.
I very much welcome the Bill. The noble Lord, Lord Holmes of Richmond, gave us an inspirational speech which was totally convincing on the need for legislation. The Bill is obviously the first step on that way. The promise of artificial intelligence is undeniable. There is a large degree of hype from those with vested interests, and there is, to a significant extent, a bubble. Nevertheless, even if that is true, we still need an appropriate level of regulation.
AI provides the opportunity to revolutionise industries, enhance our daily lives and solve some of the most pressing problems we face today—from healthcare to climate change—offering solutions that are not available in other ways. However, with greater power comes greater responsibility. The rapid advance of AI technology has outpaced our regulatory frameworks, leading to innovation without adequate oversight, ethical consideration or accountability, so we undoubtedly need a regulator. I take the point that it has to be focused and simple. We need rigorous ethical standards and transparency in AI development to ensure that these technologies serve the good of all, not just commercial interests. We cannot wait for these forces to play out before deciding what needs to be done. I very much support the remarks of the previous speaker, the right reverend Prelate the Bishop of Worcester, who set out the position very clearly.
We need to have a full understanding of the implications of AI for employment and the workforce. These technologies will automate tasks previously performed by humans, and we face significant impacts on the labour market. The prevailing model for AI is to seek the advantage for the developers and not so much for the workers. This is an issue we will need to confront. We will have to debate the extent to which that is the job of the regulator.
As I have indicated, I favour a cautious approach to AI development. We should be focusing on meaningful applications that prioritise human well-being and benefits to society over corporate profit. Again, how this fits in with the role of the regulator is for discussion, but a particular point that needs to be made here is that we need to understand the massive amounts of energy that even simple forms of AI consume. This needs to be borne in mind in any approach to developing this industry.
In the Bill, my attention was caught by the use of the undefined term “relevant regulators”. Perhaps the noble Lord, Lord Holmes, could fill that in a bit more; it is a bit of a catch-all at the moment. My particular concern is the finance industry, which will use this technology massively, not necessarily to the benefit of consumers. The noble and learned Lord, Lord Thomas of Cwmgiedd, emphasised the problem of regulatory arbitrage. We need a consistent layer of regulation. Another concern is mental health: there will be AI systems that claim to offer benefits to those with mental health problems. Again, this will need severe regulation.
To conclude, I agree with my noble friend Lord Chandos that regulation is not necessarily the enemy of economic success. There is a balance to be drawn between gaining all the benefits of technology and the potential downsides. I welcome the opportunity to discuss how this should be regulated.
My Lords, I too congratulate my noble friend Lord Holmes on bringing forward this AI regulation Bill, in the context of the continuing failure of the Government to do so. At the same time, I declare my interest as a long-term investor in at least one fund that invests in AI and tech companies.
A year ago, one of the so-called godfathers of AI, Geoffrey Hinton, cried “fire” about where AI was going and, more importantly, when. Just last week, following the International Dialogue on AI Safety in Beijing, a joint statement was issued by leading western and Chinese figures in the field, including Chinese Turing award winner Andrew Yao, Yoshua Bengio and Stuart Russell. Among other things, that statement said:
“Unsafe development, deployment, or use of AI systems may pose catastrophic or even existential risks to humanity within our lifetimes … We should immediately implement domestic registration for AI models and training runs above certain compute or capability thresholds”.
Of course, we are talking about not only extinction risks but other very concerning risks, some of which have been mentioned by my noble friend Lord Holmes: extreme concentration of power, deepfakes and disinformation, wholesale copyright infringement and data-scraping, military abuse of AI in the nuclear area, the risk of bioterrorism, and the opacity and unreliability of some AI decision-making, to say nothing of the risk of mass unemployment. Ian Hogarth, the head of the UK AI Safety Institute, has written in the past about some of these concerns and risks.
Nevertheless, despite signing the Center for AI Safety statement and publicly admitting many of these serious concerns, the leading tech companies continue to race against each other towards the holy grail of artificial general intelligence. Why is this? Well, as they say, “It’s the money, stupid”. It is estimated that, between 2020 and 2022, $600 billion in total was invested in AI development, and much more has been since. This is to be compared with the pitifully small sums invested by the AI industry in AI safety. We have £10 million from this Government now. These factors have led many people in the world to ask how it is that they have accidentally outsourced their entire futures to a few tech companies and their leaders. Ordinary people have a pervading sense of powerlessness in the face of AI development.
These facts also raise the question of why the Government continue to delay putting in place proper and properly funded regulatory frameworks. Others, such as the EU, US, Italy, Canada and Brazil, are taking steps towards regulation, while, as noble Lords have heard, China has already regulated and India plans to regulate this summer. Here, the shadow IT Minister has indicated that, if elected, a new Labour Government would regulate AI. Given that a Government’s primary duty is to keep their country safe, as we so often heard recently in relation to the defence budget, this continued delay is both strange and concerning.
Why is this? There is a strong suspicion in some quarters that the Prime Minister, having told the public immediately before the Bletchley conference that AI brings national security risks that could end our way of life, and that AI could pose an extinction risk to humanity, has since succumbed to regulatory capture. Some also think that the Government do not want to jeopardise relations with leading tech companies while the AI Safety Institute is gaining access to their frontier models. Indeed, the Government proudly state that they
“will not rush to legislate”,
reinforcing the concern that the Prime Minister may have gone native on this issue. In my view, this deliberate delay on the part of the Government is seriously misconceived and very dangerous.
What have the Government done to date? To their credit, they organised and hosted Bletchley, and importantly got China to attend too. Since then, they have narrowed the gap between themselves and the tech companies, but the big issues remain, particularly the critical issue of regulation versus self-regulation. Importantly, and to their credit, the Government have also set up the UK AI Safety Institute, with some impressive senior hires. However, no one should be in any doubt that this body is not a regulator. On the critical issue of the continuing absence of a dedicated unitary AI regulator, it is simply not good enough for the Government to say that the various relevant government bodies will co-operate on oversight of AI. It is obvious to almost everyone, apart from the Government themselves, that a dedicated, unitary, high-expertise and very well-funded UK AI regulator is required now.
The recent Gladstone AI report, commissioned by the US Government, has highlighted similar risks to US national security from advanced AI development. Against this concerning background, I strongly applaud my noble friend Lord Holmes for bringing forward the Bill. It can of course be improved, but its overall intention and thrust are absolutely right.
My Lords, I entirely agree with those last sentiments, which will get us thinking about what on earth we do about this. An awful lot of nonsense is talked, and a lot of great wisdom is talked. The contributions to the debate have been useful in getting people thinking along the right lines.
I will say something about artificial general intelligence, which is very different from the generative AI and large language models that I think people have in mind: ChatGPT, Llama, Google Gemini and all those bits and pieces. Artificial general intelligence may well aim to control people or the environment in which we live, whereas those models trawl through large amounts of information incredibly usefully and produce a well-formatted epitome of what is in there. Because you do not have time to read, for instance, large research datasets, they can find things in them that you have not had time to trawl through and find. They can be incredibly useful for development there.
AI could start to do other things: it could control things and we could make it take decisions. Some people suggest that it could replace the law courts and a lot of those sorts of things. But the problem with that is that we live in a complex world and complex systems are not deterministic, to use a mathematical term. You cannot control them with rules. Rules have unintended consequences, as is well known: the famous butterfly effect. You cannot be certain about what will happen when you change one little bit. AI will not necessarily be able to predict that because, if you look at how it trains itself, you do not know what it has learned; it is not done by algorithm, and some AI systems can modify their own code. So you do not know what it is doing and you cannot regulate for the algorithms or any of that.
I think we have to end up regulating, or passing laws on, the outcomes. We always did this in common law: we said, “Thou shalt not kill”, and then we developed it a bit further, but the principle of not going around killing people was established. The same is true of other simple things like “You shan’t nick things”. It is what comes out of it that matters. This applies when you want to establish liability, which we will have to do in the case of self-driving cars, for instance, which will take over more and more as other things get clogged up. They will crash less, kill fewer people and cause fewer accidents. But, because it is a machine doing it, it will be highly psychologically unacceptable, even though with human drivers there would be more accidents. There will have to be changes in thought on that.
Regulation or legislation has to be around the outcomes rather than the method, because we cannot control where these things go. A computer does not have an innate sense of right and wrong or empathy, which comes into human decisions a lot. We may be able to mimic it, and we could probably train computers up on models to try to do that. One lot of AI might try to say whether another lot of AI is producing okay outcomes. It will be very interesting. I have no idea how we will get there.
Another thing that will be quite fun is when the net-zero people get on to these self-training models. An LLM trawling through data uses huge amounts of energy, which will not help us towards our net-zero capabilities. However, AI might help if we put it in charge of planning how to get electricity from point A to point B in an acceptable fashion. But on the other hand people will not trust it, including planners. I am sorry—I am trying to illustrate a complex system. How on earth can you translate that into something that you can put on paper and try to control? You cannot, and that is what people have to realise. It is an interesting world.
I am glad that the Bill is coming along, because it is high time we started thinking about this and what we expect we can do about it. It is also transnational—it goes right across all borders—so we cannot regulate in isolation. In this new interconnected and networked world, we cannot have a little isolated island in the middle of it all where we can control it—that is just not going to happen. Anyway, we live in very interesting times.
My Lords, as has been illustrated this morning, we stand on the cusp of a technological revolution. We find ourselves at the crossroads between innovation and responsibility. Artificial intelligence, a marvel of modern science, promises to reshape the world. Yet with great power comes great responsibility, and it is therefore imperative that we approach this with caution. Regulation in the realm of AI is not an adversary to innovation; rather, it is the very framework within which responsible and sustainable innovation must occur. Our goal should not be to stifle the creative spirit but to channel it, ensuring that it serves the common good while safeguarding our societal values and ethical standards.
However, we must not do this in isolation. In the digital domain, where boundaries blur, international collaboration becomes not just beneficial but essential. The challenges and opportunities presented by AI do not recognise national borders, and our responses too must be global in perspective. The quest for balance in regulation must be undertaken with a keen eye on international agreements, ensuring that the UK remains in step with the global community, not at odds with it. In our pursuit of a regulatory framework suitable for the UK, we must consider the approaches of others. The European Union’s AI Act, authored by German MEP Axel Voss, offers valuable insights and, by examining what works within the EU’s approach and others, as well as identifying areas for improvement, we can learn from the experiences of our neighbours to forge a path that is distinctly British yet globally resonant.
Accountability stands as a cornerstone in the responsible deployment of AI technologies. Every algorithm and every application that is released into the world must have a clearly identifiable human or corporate entity behind it. This is where the regulatory approach must differ from that inherent in the general data protection regulation, which I had the pleasure of helping to formulate in Brussels. This accountability is crucial for ethical, legal and social reasons, ensuring that there is always a recourse and a responsible party when AI systems interact with our world.
Yet, as we delve into the mechanics of regulation and oversight, we must also pause to reflect on the quintessentially human aspect of our existence that AI can never replicate: emotion. The depth and complexity of emotions that define our humanity remain beyond the realm of AI and always will. These elements, intrinsic to our being, highlight the irreplaceable value of the human touch. While AI can augment, it can never replace human experience. The challenge before us is to foster an environment where innovation thrives within a framework of ethical and responsible governance. We must be vigilant not to become global enforcers of compliance at the expense of being pioneers of innovation.
The journey we embark on with the regulation of AI is not one that ends with the enactment of laws; that is merely the beginning. The dynamic nature of AI demands that our regulatory frameworks be agile and capable of adapting to rapid advancements and unforeseen challenges. So, as I have suggested on a number of occasions, we need smart legislation—a third tier of legislation behind the present primary and secondary structures—to keep up with these things.
In the dynamic landscape of AI, the concept of sandboxes stands out as a forward-thinking approach to innovation in this field. This was referred to by my noble friend in introducing his Bill. They offer a controlled environment where new technologies can be tested and refined without the immediate pressures and risks associated with full-scale deployment.
I emphasise that support for small and medium-sized enterprises in navigating the regulatory landscape is of paramount importance. These entities, often the cradles of innovation, must be equipped with the tools and knowledge to flourish within the bounds of regulation. The personnel in our regulatory authorities must also be of the highest calibre—individuals who not only comprehend the technicalities of AI but appreciate its broader implications for society and the economy.
At this threshold of a new era shaped by AI, we should proceed with caution but also with optimism. Let us never lose sight of the fact that at the heart of all technological advancement lies the indomitable spirit of human actions and emotions, which no machine or electronic device can create alone. I warmly welcome my noble friend Lord Holmes’s Bill, which I will fully support throughout its process in this House.
My Lords, I am most grateful to the noble Lord, Lord Holmes, for the meeting he arranged to explain his Bill in detail and to answer some of the more naive questions from some Members of this House. Having gone through the Bill, I cannot see how we can ignore the importance of this, going forwards. I am also grateful to my noble friend Lady Kidron for the meeting that she established, which I think educated many of us on the realities of AI.
I want to focus on the use of AI in medicine because that is my field. The New England Journal of Medicine has just launched NEJM AI as a new journal to collate what is happening. AI use is becoming widespread but across the NHS tends to be small-scale. People hope that AI will streamline administrative tasks which are burdensome, improve communication with patients and do even simple things such as making out-patient appointments more suitable for people and arranging transport better.
For any innovations to be used, however, the infrastructure needs to be set up. I was struck yesterday at a meeting on emergency medicine where the consultant explained that it now takes longer to enter patient details into the computer system than it did using old-fashioned pen and paper, the reason being that the computer terminals are not in the room where the consultation is happening, so it takes people away.
A lot of people in medicine are tremendously enthusiastic—we see the benefits in diagnostics for images of cells and X-rays and so on—but there has to be reliability. Patients and people in the general population are buying different apps to diagnose things such as skin cancers, but the reliability of these apps is unproven. What we need is the use of AI to improve diagnostic accuracy. Currently, post-mortems show about a 5% error in what is written on the death certificate; in other words, at post-mortem, people are found to have died of something different from the disease or condition they were being treated for. So we have to improve diagnostics right across the piece.
But the problem is that we have to put that in a whole system. The information that goes in to train and teach these diagnostic systems has to be of very high quality, and we need audit in there to make sure that high quality is maintained. Although we are seeing fantastic advances in the analysis of images such as mammograms, supporting radiologists, and in rapid diagnosis of strokes and so on, there is a need to ensure quality control in the system, so that it does not go wild on its own, that the input is being monitored and, as things change in the population, that that change is also being detected.
Thinking about this Bill, I reflected on the Covid experience, when the ground-glass appearance on X-rays was noted to be distinctive and new in advanced Covid lung disease. We have a fantastic opportunity, if we could use all of that data properly, to follow up people in the long term, to see how many have got clotting problems and how many later go on to develop difficulties or other conditions of which we have been unaware. There could be a fantastic public health benefit if we use the technology properly.
The problem is that, if it is not used properly, it will lose public trust. I noted that, in his speech introducing the Bill, the noble Lord, Lord Holmes, used the word “trust” very often. It seems that a light-touch regulator that goes across many domains and areas is what we will need. It will protect copyright, protect the intellectual property rights of the people who are developing systems, keep those investments in the UK and foster innovation in the UK. Unless we do that, and unless we develop trust across the board, we will fail in our developments in the longer term. The real world that we live in today has to be safe, and the world that AI takes us into has to be safe.
I finish with a phrase that I have often heard resonate in my ears:
“Trust arrives on foot and leaves on horseback”.
We must not let AI be the horses that take away all the trust in these developments.
My Lords, are we ready for the power of artificial intelligence? With each leap in human ability to invent and change what we can achieve, we have utilised a new power, a new energy that has redefined the boundaries of imagination: steam and the Industrial Revolution; electricity and the age of light; and so, again, we stand on the precipice of another seismic leap.
However, the future of AI is not just about what we can do with it but about who will have access to control its power. So I welcome the attempt made by my noble friend Lord Holmes via this Bill to encourage an open public debate on democratic oversight of AI, but I do have some concerns. Our view of AI at this early stage is heavily coloured by how this power will deliver automation and the potential reduction of process-reliant jobs and how those who hold the pen on writing the algorithms behind AI could exert vast power and influence on the masses via media manipulation. We fear that the AI genie is out of the bottle and we may not be able to control it. The sheer, limitless potential of AI is intimidating.
If, like me, you are from a certain generation, these seeds of fear and fascination at the power of artificial intelligence have long been planted by numerous Hollywood movies picking on our hopes, dreams and fears of what AI could do to us. Think of the unnerving subservience of HAL in Stanley Kubrick’s “2001: A Space Odyssey” made in 1968, the menacing and semi-obedient robot Maximilian from the 1979 Disney production “The Black Hole”, the fantasy woman called Lisa created by the power of 80s home computing in “Weird Science” from 1985, and, of course, the ultimate hellish future of machine intelligence taking over the world in the form of Skynet in “The Terminator” made in 1984. These and many other futuristic interpretations of AI helped to fan the flames in the minds of engineers, computer scientists and super-geeks, many of whom created and now run the biggest tech firms in the world.
But where are we now? The advancement in processing power, coupled with vast amounts of big data and developments such as large language models, has led to the era of commercialisation of AI. Dollops of AI are available in everyday software programmes via chatbots and automated services. Obviously, the emergence of ChatGPT turbocharged the public awareness and usage of the technology. We have poured algorithms into machines and made them “think”. We have stopped prioritising trying to get robots to look and feel like us, and focused instead on the automation of systems and processes, enabling them to do more activities. We have moved from the pioneering to the application era of AI.
With all this innovation, with so many opportunities and benefits to be derived by its application, what should we fear? My answer is not from the world of Hollywood science fiction; it relates not to individuals losing control to machines but, rather, to how we will ensure that this power remains democratic and accessible and benefits the many. How will we ensure that control does not fall into the hands of the few, that wealth does not determine the ability to benefit from innovation and that a small set of organisations do not gain ultimate global control or influence over our lives? How, also, will we ensure that Governments and bureaucracies do not end up ever furthering the power and control of the state through well-intentioned regulatory control? This is why we must appreciate the size of this opportunity, think about the long-term future, and start to design the policy frameworks and new public bodies that will work in tandem with those who will design and deliver our future world.
But here is the rub: I do not believe we can control, manage or regulate this technology through a single authority. I am extremely supportive of the ambitions of my noble friend Lord Holmes to drive this debate. However, I humbly suggest that the question we need to focus on will be how we can ensure that the innovations, outcomes and quality services that AI delivers are beneficial and well understood. The Bill as it stands may be overambitious in the scope it gives this AI authority: to provide oversight across other regulators; to assess safety, risks and opportunities; to monitor risks across the economy; to promote interoperability and regulatory frameworks; and to act as an incubator for innovation. To achieve this and more, the AIA would need vast cross-cutting capability and resources. Again, I appreciate what my noble friend Lord Holmes is trying to achieve and, as such, I would say that we need to consider with more focus the questions that we are trying to answer.
I wholeheartedly believe and agree that the critical role will be to drive public education, engagement and awareness of AI, and where and how it is used, and to clearly identify the risks and benefits to the end-users, consumers, customers and the broader public. However, I strongly suggest that we do not begin this journey by requiring labelling, under Clause 5(1)(a)(iii), using “unambiguous health warnings” on AI products or services. That would not help us to work hand in hand with industry and trade bodies to build trust and confidence in the technology.
I believe there will eventually be a need for some form of future government body to help provide guidance to both industry and the public about how AI outcomes, especially those in delivering public sector services, are transparent, fair in design and ethical in approach. Such a body will need to take note of the approach of other nations and will need to engage with local and global businesses to test and formulate the best way forward. So, although I am sceptical of many of the specifics of the Bill, I welcome and support the journey that it, my noble friend Lord Holmes and this debate are taking us on.
My Lords, I congratulate the noble Lord, Lord Holmes, on his inspiring introduction and on stimulating such an extraordinarily good and interesting debate.
The excellent House of Lords Library guide to the Bill warns us early on:
“The bill would represent a departure from the UK government’s current approach to the regulation of AI”.
Given the timidity of the Government’s pro-innovation AI White Paper and their response, I would have thought that was very much a “#StepInTheRightDirection”, as the noble Lord, Lord Holmes, might say.
There is clearly a fair wind around the House for the Bill, and I very much hope it progresses and we see the Government adopt it, although I am somewhat pessimistic about that. As we have heard in the debate, there are so many areas where AI is and can potentially be hugely beneficial, despite the rather dystopian narratives that the noble Lord, Lord Ranger, so graphically outlined. However, as many noble Lords have emphasised, it also carries risks, not just of the existential kind, which the Bletchley Park summit seemed to address, but others mentioned by noble Lords today, such as misinformation, disinformation, child sexual abuse, and so on, as well as the whole area of competition, mentioned by the noble Lord, Lord Fairfax, and the noble Baroness, Lady Stowell—the issue of the power and the asymmetry of these big tech AI systems and the danger of regulatory capture.
It is disappointing that, after a long gestation of national AI policy-making, which started so well back in 2017 with the Hall-Pesenti review, contributed to by our own House of Lords Artificial Intelligence Committee, the Government have ended up by producing a minimalist approach to AI regulation. I liked the phrase used by the noble Lord, Lord Empey, “lost momentum”, because it certainly feels like that after this period of time.
The UK’s National AI Strategy, a 10-year plan for UK investment in and support of AI, was published in September 2021 and accepted that in the UK we needed to prepare for artificial general intelligence. We needed to establish public trust and trustworthy AI, so often mentioned by noble Lords today. The Government had to set an example in their use of AI and to adopt international standards for AI development and use. So far, so good. Then, in the subsequent AI policy paper, AI Action Plan, published in 2022, the Government set out their emerging proposals for regulating AI, in which they committed to develop
“a pro-innovation national position on governing and regulating AI”,
to be set out in a subsequent governance White Paper. The Government proposed several early cross-sectoral and overarching principles that built on the OECD principles on artificial intelligence: ensuring safety, security, transparency, fairness, accountability and the ability to obtain redress.
Again, that is all good, but the subsequent AI governance White Paper in 2023 opted for a “context-specific approach” that distributes responsibility for embedding ethical principles into the regulation of AI systems across several UK sector regulators without giving them any new regulatory powers. I thought the analysis of this by the noble Lord, Lord Young, was interesting. There seemed to be no appreciation that there were gaps between regulators. That approach was confirmed this February in the response to the White Paper consultation.
Although there is an intention to set up a central body of some kind, there is no stated lead regulator, and the various regulators are expected to interpret and apply the principles in their individual sectors in the expectation that they will somehow join the dots between them. There is no recognition that the different forms of AI are technologies that need a comprehensive cross-sectoral approach to ensure that they are transparent, explainable, accurate and free of bias, whether they are in an existing regulated or unregulated sector. As noble Lords have mentioned, discussing existential risk is one thing, but going on not to regulate is quite another.
Under the current Data Protection and Digital Information Bill, data subject rights regarding automated decision-making—in practice, by AI systems—are being watered down, while our creatives and the creative industries are up in arms about the lack of support from government in asserting their intellectual property rights in the face of the ingestion of their material by generative AI developers. It was a pleasure to hear what the noble Lord, Lord Freyberg, had to say on that.
For me, the cardinal rules are that business needs clarity, certainty and consistency in the regulatory system if it is to develop and adopt AI systems, and we need regulation to mitigate risk to ensure that we have public trust in AI technology. As the noble Viscount, Lord Chandos, said, regulation is not necessarily the enemy of innovation; it can be a stimulus. That is something that we need to take away from this discussion. I was also very taken with the idea of public trust leaving on horseback.
This is where the Bill of the noble Lord, Lord Holmes, is an important stake in the ground, as he has described. It provides for a central AI authority that has a duty of looking for gaps in regulation; it sets out extremely well the safety and ethical principles to be followed; it provides for regulatory sandboxes, which we should not forget are an innovation invented in the UK; and it provides for AI responsible officers and for public engagement. Importantly, it builds in a duty of transparency regarding data and IP-protected material where they are used for training purposes, and for labelling AI-generated material, as the noble Baroness, Lady Stowell, and her committee have advocated. By itself, that would be a major step forward, so, as the noble Lord knows, we on these Benches wish the Bill very well, as do all those with an interest in protecting intellectual property, as we heard the other day at the round table that he convened.
However, in my view what is needed at the end of the day is the approach that the interim report of the Science, Innovation and Technology Committee recommended towards the end of last year in its inquiry into AI governance: a combination of risk-based cross-sectoral regulation and specific regulation in sectors such as financial services, applying to both developers and adopters, underpinned by common trustworthy standards of risk assessment, audit and monitoring. That should also provide recourse and redress, as the Ada Lovelace Institute, which has done so much work in the area, asserts, and as the noble Lord, Lord Kirkhope, mentioned.
That should include the private sector, where there is no effective regulator for the workplace, as the noble Lord, Lord Davies, mentioned, and the public sector, where there is no central or local government compliance mechanism; no transparency yet in the form of a public register of use of automated decision-making, despite the promised adoption of the algorithmic recording standard; and no recognition by the Government that explicit legislation and/or regulation for intrusive AI technologies used in the public sector, such as live facial recognition and other biometric capture, is needed. Then, of course, we need to meet the IP challenge. We need to introduce personality rights to protect our artists, writers and performers. We need the labelling of AI-generated material alongside the kinds of transparency duties contained in the noble Lord’s Bill.
Then there is another challenge, which is more international. This was mentioned by the noble Lords, Lord Kirkhope and Lord Young, the noble and learned Lord, Lord Thomas of Cwmgiedd, and the noble Earl, Lord Erroll. We have world-beating AI researchers and developers. How can we ensure that, despite differing regulatory regimes—for instance, between ourselves and the EU or the US—developers are able to commercialise their products on a global basis and adopters can have the necessary confidence that the AI product meets ethical standards?
The answer, in my view, lies in international agreement on common standards such as those of risk and impact assessment, testing, audit, ethical design for AI systems, and consumer assurance, which incorporate what have become commonly accepted international AI ethics. Having a harmonised approach to standards would help provide both the certainty that business needs to develop and invest in the UK more readily, irrespective of the level of obligation to adopt them in different jurisdictions, and the necessary public trust. In this respect, the UK has the opportunity to play a much more positive role with the Alan Turing Institute’s AI Standards Hub and the British Standards Institution. The OECD.AI group of experts is heavily involved in a project to find common ground between the various standards.
We need a combination of proportionate but effective regulation in the UK and the development of international standards, so, in the words of the noble Lord, Lord Holmes, why are we not legislating? His Bill is a really good start; let us build on it.
My Lords, like others, I congratulate the noble Lord, Lord Holmes of Richmond, on his Private Member’s Bill, the Artificial Intelligence (Regulation) Bill. It has been a fascinating debate and one that is pivotal to our future. My noble friend Lord Leong apologises for his absence and I am grateful to the Government Benches for allowing me, in the absence of an AI-generated hologram of my noble friend, to take part in this debate. If the tone of my comments is at times his, that is because my noble friend is supremely organised and I will be using much of what he prepared for this debate. Like the noble Lord, Lord Young, I am relying heavily on osmosis; I am much more knowledgeable on this subject now than two hours ago.
My first jobs were reliant on some of the now-defunct technologies, although I still think that one of the most useful skills I learned was touch-typing. I learned that on a typewriter, complete with carbon paper and absolutely no use of Tipp-Ex allowed. However, automation and our continued and growing reliance on computers have improved many jobs rather than simply replacing them. AI can help businesses save money and increase productivity by adopting new technologies; it can also release people from repetitive data-entry tasks, enabling them to focus on creative and value-added tasks. New jobs requiring different skills can be created and, while this is not the whole focus of the debate, how we enable people to take up those new jobs also needs to be a focus of government policy in this area.
As many noble Lords have observed, we stand on the brink of an AI revolution, one that has already started. It is already changing the way we live, the way we work and the way we relate to one another. I count myself in the same generation of film viewers as the noble Lord, Lord Ranger. The rapidly approaching tech transformation is unlike anything that humankind has experienced in its speed, scale and scope: 20th-century science fiction is becoming commonplace in our 21st-century lives.
As the noble Baroness, Lady Moyo, said, it is estimated that AI technology could contribute up to £15 trillion to the world economy by 2030. As many noble Lords mentioned, AI also presents government with huge opportunities to transform public services, potentially delivering billions of pounds in savings and increasing the service to the public. For example, it could help with the workforce crisis in health, particularly in critical health diagnostics, as highlighted by the noble Lord, Lord Empey. The noble Baroness, Lady Finlay, highlighted the example of how diagnosis of Covid lung has benefited through the use of AI, but, as she said, that introduces requirements for additional infrastructure. My noble friend Lord Davies also noted that AI can help to contribute to how we tackle climate change.
The use of AI by government underpins Labour’s missions to revive our country’s fortunes and ensure that the UK thrives and is at the forefront of the coming technological revolution. However, we should not and must not overlook the risks that may arise from its use, nor the unease around AI and the lack of confidence among the public around its use. Speaking as someone who generally focuses on education from these Benches, I note that this applies not least to the protection of children, as the noble Baroness, Lady Kidron, pointed out. AI can help education in a range of ways, but these also need regulation. As the noble Baroness said, we need rules to defend against the potential abuses.
Goldman Sachs predicts that the equivalent of 300 million full-time jobs globally will be replaced; this includes around a quarter of current work tasks in the US and Europe. Furthermore, as has been noted, AI can damage our physical and mental health. It can infringe upon individual privacy and, if not protected against, undermine human rights. Our collective response to these concerns must be as integrated and comprehensive as our embracing of the potential benefits. It should involve all stakeholders, from the public and private sectors to academia and civil society. Permission should and must be sought by AI developers for the use of copyright-protected work, with remuneration and attribution provided to creators and rights holders, an issue highlighted by the noble Lord, Lord Freyberg. Most importantly, transparency needs to be delivered on what content is used to train generative AI models. I found the speech of the noble Earl, Lord Erroll, focusing on outcomes, of particular interest.
Around the world, countries and regions are already beginning to draft rules for AI. As the noble Lord, Lord Kirkhope, said, this does not need to stifle innovation. The Government’s White Paper on AI regulation adopted a cross-sector and outcome-based framework, underpinned by its five core principles. Unfortunately, there are no proposals in the current White Paper for introducing a new AI regulator to oversee the implementation of the framework. Existing regulators, such as the Information Commissioner’s Office, Ofcom and the FCA have instead been asked to implement the five principles from within their respective domains. As a number of noble Lords referred to, the Ada Lovelace Institute has expressed concern about the Government’s approach, which it has described as “all eyes, no hands”. The institute says that, despite
“significant horizon-scanning capabilities to anticipate and monitor AI risks … it has not given itself the powers and resources to prevent those risks or even react to them effectively after the fact”.
The Bill introduced by the noble Lord, Lord Holmes, seeks to address these shortcomings and, as he said in his opening remarks: if not now, when? Until such time as an independent AI regulator is established, the challenge lies in ensuring the framework’s effective implementation across various regulatory domains. This includes data protection, competition, communications and financial services. A number of noble Lords mentioned the multitude of regulatory bodies involved. This means that effective governance between them will be paramount. Regulatory clarity, which enables business to adopt and scale investment in AI, will bolster the UK’s competitive edge. The UK has so far been focusing on voluntary measures for general-purpose AI systems. As the right reverend Prelate the Bishop of Worcester said, this is not adequate: human rights and privacy must also be protected.
The noble Lord, Lord Kirkhope, noted that AI does not respect national borders. A range of international approaches to AI safety and governance are developing, some of which were mentioned by the noble Lord, Lord Fairfax. The EU has opted for a comprehensive and prescriptive legislative approach; the US is introducing some mandatory reporting requirements; for example, for foundation models that pose serious national or economic security risks.
Moreover, a joint US-EU initiative is drafting a set of voluntary rules for AI businesses—the AI code of conduct. In the short term, these may serve as de facto international standards for global firms. Can the Minister tell your Lordships’ House whether the Government are engaging with this drafting? The noble Lord, Lord Empey, suggested that the Government have lost momentum. Can the Minister explain why the Government are allowing the UK to lose influence over the development of international AI regulation?
The noble Lord, Lord Clement-Jones, noted that the Library briefing states that this Bill marks a departure from government approach. The Government have argued that introducing legislation now would be premature and that the risks and challenges associated with AI, the regulatory gaps and the best way to address them must be better understood. This cannot be the case. Using the horse analogy adopted by the noble Baroness earlier, we need to make sure that we do not act after the horse has bolted.
I pay tribute, as others have done, to the work of the House of Lords Communications and Digital Committee. I found the points highlighted by its chair and her comments very helpful. We are facing an inflection point with AI. It is regrettable that the government response is not keeping up with the change. Why are the Government procrastinating while all other G7 members are adopting a different, more proactive approach? A Labour Government would act decisively and not delay. Self-regulation is simply not enough.
The honourable Member for Hove, the shadow Secretary of State for Science, Innovation and Technology, outlined Labour’s plans recently at techUK’s annual conference. He said:
“Businesses need fast, clear and consistent regulation … that … does not unnecessarily slow down innovation”—
a point reflected in comments by the noble and learned Lord, Lord Thomas. We also need regulation that encourages risk taking and finding new ways of working. We need regulation that addresses the concerns and protects the privacy of the public.
As my noble friend Lord Chandos said, the UK also needs to address concerns about misinformation and disinformation, not least in instances where these are democratic threats. This point was also reflected by the noble Lords, Lord Vaizey and Lord Fairfax.
Labour’s regulatory innovation office would give strategic steers aligned with our industrial strategy. It would set and monitor targets on regulatory approval timelines, benchmark against international comparators and strengthen the work done by the Regulatory Horizons Council. The public need to know that safety will be baked into how AI is used by both the public and the private sectors. A Labour Government would ensure that the UK public sector is a leader in responsibly and transparently applying AI. We will require safety reports from the companies developing frontier AI. We are developing plans to make sure that AI works for everyone.
Without clear regulation, widespread business adoption and public trust, the UK’s adoption of AI will be too slow. It is the Government’s responsibility to acknowledge and address how AI affects people’s jobs, lives, data and privacy, and the rapidly changing world in which they live. The Government are veering haphazardly between extreme risk, extreme optimism and extreme delay on this issue. Labour is developing a practical, well-informed and long-term approach to regulation.
In the meantime, we support and welcome the principles behind the Private Member’s Bill from the noble Lord, Lord Holmes, but remain open-minded on the current situation and solution, while acknowledging that there is still much more to be done.
I join my thanks to those of others to my noble friend Lord Holmes for bringing forward this Bill. I thank all noble Lords who have taken part in this absolutely fascinating debate of the highest standard. We have covered a wide range of topics today. I will do my best to respond, hopefully directly, to as many points as possible, given the time available.
The Government recognise the intent of the Bill and the differing views on how we should go about regulating artificial intelligence. For reasons I will now set out, the Government would like to express reservations about my noble friend’s Bill.
First, with the publication of our AI White Paper in March 2023, we set out proposals for a regulatory framework that is proportionate, adaptable and pro-innovation. Rather than designing a new regulatory system from scratch, the White Paper proposed five cross-sectoral principles, which include safety, transparency and fairness, for our existing regulators to apply within their remits. The principles-based approach will enable regulators to keep pace with the rapid technological change of AI.
The strength of this approach is that regulators can act now on AI within their own remits. This common-sense, pragmatic approach has won endorsement from leading voices across civil society, academia and business, as well as many of the companies right at the cutting edge of frontier AI development. Last month we published an update through the Government’s response to the consultation on the AI White Paper. The White Paper response outlines a range of measures to support existing regulators to deliver against the AI regulatory framework. This includes providing further support to regulators to deliver the regulatory framework through a boost of more than £100 million to upskill regulators and help unlock new AI research and innovation.
As part of this, we announced a £10 million package to jump-start regulators’ AI capabilities, preparing and upskilling regulators to address the risks and to harness the opportunities of this defining technology. It also includes publishing new guidance to support the coherent implementation of the principles. To ensure robust implementation of the framework, we will continue our work to establish the central function.
Let me reassure noble Lords that the Government take mitigating AI risks extremely seriously. That is why several aspects of the central function have already been established, such as the central AI risk function, which will shortly be consulting on its cross-economy AI risk register. Let me reassure the noble Lord, Lord Empey, that the AI risk function will maintain a holistic view of risks across the AI ecosystem, including misuse risks, such as where AI capabilities may be leveraged to undermine cybersecurity.
Specifically on criminality, the Government recognise that the use of AI in criminal activity is a very important issue. We are working with a range of stakeholders, including regulators, and a range of legal experts to explore ways in which liability, including criminal liability, is currently allocated through the AI value chain.
In the coming months we will set up a new steering committee, which will support and guide the activities of a formal regulator co-ordination structure within government. We also wrote to key regulators, requesting that they publish their AI plans by 30 April, setting out how they are considering, preparing for and addressing AI risks and opportunities in their domain.
As for the next steps for ongoing policy development, we are developing our thinking on the regulation of highly capable general-purpose models. Our White Paper consultation response sets out key policy questions related to possible future binding measures, which we are exploring with experts and our international partners. We plan to publish findings from this expert engagement and an update on our thinking later this year.
We also confirmed in the White Paper response that we believe legislative action will be required in every country once the understanding of risks from the most capable AI systems has matured. However, legislating too soon could easily result in measures that are ineffective against the risks, are disproportionate or quickly become out of date.
Finally, we make clear that our approach is adaptable and iterative. We will continue to work collaboratively with the US, the EU and others across the international landscape to both influence and learn from international development.
I turn to key proposals in the Bill that the noble Lord has tabled. On the proposal to establish a new AI authority, it is crucial that we put in place agile and effective mechanisms that will support the coherent and consistent implementation of the AI regulatory framework and principles. We believe that a non-statutory central function is the most appropriate and proportionate mechanism for delivering this at present, as we observe a period of non-statutory implementation across our regulators and conduct our review of regulator powers and remits.
In the longer term, we recognise that there may be a case for reviewing how and where the central function has delivered, once its functions have become more clearly defined and established, including whether the function is housed within central government or in a different form. However, the Government feel that this would not be appropriate for the first stage of implementation. To that end, as I mentioned earlier, we are delivering the central function within DSIT, to bring coherence to the regulatory framework. The work of the central function will provide clarity and ensure that the framework is working as intended and that joined-up and proportionate action can be taken if there are gaps in our approach.
We recognise the need to assess the existing powers and remits of the UK’s regulators to ensure they are equipped to address AI risks and opportunities in their domains and to implement the principles consistently and comprehensively. We anticipate having to introduce a statutory duty on regulators requiring them to have due regard to the principles after an initial period of non-statutory implementation. For now, however, we want to test and iterate our approach. We believe this approach offers critical adaptability, but we will keep it under review; for example, by assessing the updates on strategic approaches to AI that several key regulators will publish by the end of April. We will also work with government departments and regulators to analyse and review potential gaps in existing regulatory powers and remits.
Like many noble Lords, we see approaches such as regulatory sandboxes as a crucial way of helping businesses navigate the AI regulatory landscape. That is why we have funded the four regulators in the Digital Regulation Cooperation Forum to pilot a new, multiagency advisory service known as the AI and digital hub. We expect the hub to launch in mid-May and will provide further details in the coming weeks on when this service will be open for applications from innovators.
One of the principles at the heart of the AI regulatory framework is accountability and governance. We said in the White Paper that a key part of implementation of this principle is to ensure effective oversight of the design and use of AI systems. We have recognised that additional binding measures may be required for developers of the most capable AI systems and that such measures could include requirements related to accountability. However, it would be too soon to mandate measures such as AI-responsible officers, even for these most capable systems, until we understand more about the risks and the effectiveness of potential mitigations. This could quickly become burdensome in a way that is disproportionate to risk for most uses of AI.
Let me reassure my noble friend Lord Holmes that we continue to work across government to ensure that we are ready to respond to the risks to democracy posed by deep fakes; for example, through the Defending Democracy Taskforce, as well as through existing criminal offences that protect our democratic processes. However, we should remember that AI labelling and identification technology is still at an early stage. No specific technology has yet been proven to be both technically and organisationally feasible at scale. It would not be right to mandate labelling in law until the potential benefits and risks are better understood.
Noble Lords raised the importance of protecting intellectual property, a profoundly important subject. In the AI White Paper consultation response, the Government committed to provide an update on their approach to AI and copyright issues soon. I am confident that, when we do so, it will address many of the issues that noble Lords have raised today.
In summary, our approach, combining a principles-based framework, international leadership and voluntary measures on developers, is right for today, as it allows us to keep pace with rapid and uncertain advances in AI. The UK has successfully positioned itself as a global leader on AI, in recognition of the fact that AI knows no borders and that its complexity demands nuanced international governance. In addition to spearheading thought leadership through the AI Safety Summit, the UK has supported effective action through the G7, the Council of Europe, the OECD, the G5, the G20 and the UN, among other bodies. We look forward to continuing to engage with all noble Lords on these critical issues as we continue to develop our regulatory approach.
My Lords, I thank all noble Lords who have contributed to this excellent debate. It is pretty clear that the issues are very much with us today and we have what we need to act today. To respond to a question kindly asked by the noble Lord, Lord Davies of Brixton, in my drafting I am probably allowing “relevant” regulators to do some quite heavy lifting, but what I envisage within that is certainly all the economic regulators, and indeed all regulators who are in a sector where AI is being developed, deployed and in use. Everybody who has taken part in this debate and beyond may benefit from having a comprehensive list of all the regulators across government. Perhaps I could ask that of the Minister. I think it would be illuminating for all of us.
At the autumn FT conference, my noble friend the Minister said that heavy-handed regulation could stifle innovation. Certainly, it could. Heavy-handed regulation would not only stifle innovation but would be a singular failure in the creation of the regulatory process. History tells us that right-size regulation is pro-citizen, pro-consumer and pro-innovation; it drives innovation and inward investment. I was taken by so much of what the Ada Lovelace Institute put in its report. The Government really have given themselves all the eyes and not the hands to act. It reminds me very much of a Yorkshire saying: see all, hear all, do nowt. What is required is for these technologies to be human led, in our human hands, and human in the loop throughout. Right-size regulation, because it is principles-based, is necessarily agile, adaptive and can move as the technology moves. It should be principles-based and outcomes-focused, with inputs that are transparent, understood, permissioned and, wherever and whenever applicable, paid for.
My noble friend the Minister has said on many occasions that there will come a time when we will legislate on AI. Let 22 March 2024 be that time. It is time to legislate; it is time to lead.
(9 months ago)
Lords Chamber
My Lords, I propose that the Bill be read a second time with some trepidation, not because this is a momentous Bill but, on the contrary, because it is a very modest measure indeed.
I shall go through its clauses, which are very few. The first requires the Secretary of State to establish a committee and allows the Secretary of State to appoint the members of that committee. I have not chosen to specify who they should be or how many they should be, because I trust the Secretary of State in whatever Government, of whatever political colour, to make sensible decisions about that and appoint appropriate and skilled people. The clause also states what the purpose of the committee is, which bears reading out. It is
“to be a source of evidence-based, scientific expertise on the sentience of the human foetus in the light of developments in scientific and medical knowledge, and to advise the government on the formulation of relevant policy and legislation”.
The second clause requires the committee to publish reports. It actually requires the committee to publish only one report per annum, for the purposes of transparency, saying what the committee has done and giving an account of any income or expenditure it has had, as well as who its members are—a normal sort of annual report. The Government are not required to respond to that, but the committee is then free to publish further reports of a more scientific character. Clause 3—I shall come to this—requires the Government to respond to reports of that character. The other part of Clause 2 is language that ensures that the Bill is consistent with devolution legislation.
Clause 3 refers to the response that the Government have to make to those reports. There is nothing to stop the Government responding by simply saying that they have noted the report, if that is as far as they wish to go.
Finally, Clauses 4 and 5 are supplementary and general clauses, which I have been advised are appropriate for this Bill.
Why would such a committee be needed, and what value would it have? The question of human foetal sentience has been addressed by a number of bodies, but principally by the Royal College of Obstetricians and Gynaecologists. As the very helpful note from the Library makes clear, the current conclusion—because, of course, this is a shifting and developing scientific field—is that, to date, evidence indicates that the possibility of pain perception before 28 weeks of gestation is unlikely. However, one of the members of the committee that reached that view has since changed his mind and now takes the view that the perception of pain could arise as early as 12 weeks.
The British Association of Perinatal Medicine takes the view that infants born as early as 22 weeks’ gestation show physical and physiological responses to pain, and there is no reason to think that foetuses in utero at the same gestation are any different. In addition, it might be said that the NHS recommends the use of analgesia for the foetus in the case of operations in utero for spina bifida from 20 weeks onwards.
So it is fair to say that there is considerable breadth of view on the question of human foetal sentience and when it kicks in. We would all benefit—government and all the relevant professions—from having a forum in which a clearer and more settled view, and one which developed over time, could be thrashed out between the different medical professions. There is a further advantage: the Government, in responding to questions on this issue, have generally tended to rely on the work of the Royal College of Obstetricians and Gynaecologists, which places a heavy burden on that college. A committee such as I propose would create opportunities to bring in other royal colleges, including those representing paediatricians, midwives and others, so that their views could be contributed on an equal basis.
This all brings me to the question of advances in medicine and medical science, and rapid advances in surgery. I have referred to the rare but important cases of operations in utero for spina bifida, but there are other reasons why operations may need to be carried out on the human foetus while still in the womb. There are also, of course, cases where it is necessary to operate on a pregnant woman for her own sake, and in those circumstances consideration should also be given to what consequences might arise in relation to the sentience of the foetus that she is carrying in her womb.
All of this, at the moment, is being conducted against a background of inconsistent professional opinion. One could say that this should all be left, as a matter of clinical judgment, to the medical practitioner. I am all in favour of medical practitioners being able to exercise clinical judgment freely and professionally, but in fact it is very difficult to do that without some sort of agreed guidance. We do not, as a matter of practice, leave practitioners free of guidance—there is a great deal of guidance on a range of topics, which they follow when carrying out their necessary and valuable work. So I do not think that a better-informed agreement than currently exists on the time at which foetal sentience arises, given the inconsistencies that I have drawn attention to, would impinge on the freedom of the medical practitioner to exercise their professional judgment.
There are also inconsistencies with the way in which we treat sentient animals. The Animal Welfare (Sentience) Act 2022, which came through your Lordships’ House as a Bill, established a precedent for this Bill by requiring the Government to set up and maintain a committee precisely to give them advice on policy in relation to animal sentience. That Act, noble Lords may recall, declares mammals and certain categories of shellfish to be sentient. I would be surprised if my noble friend the Minister wanted to say that a human foetus should be denied the same esteem as a lobster, but in fact that is the current position. We have legal protections for lobsters and decapod crustaceans—I remember the discussions during the passage of that Bill about those animals—as well as all mammals, but we have no view, let alone protection, for the human foetus.
There is also an inconsistency with the Animals (Scientific Procedures) Act 1986, which defines protected animals and protects their foetuses from a point two-thirds through the gestation period. We have legal protection for canine foetuses from seven weeks onwards, but we do not even have informal policy advice for the human foetus and its own sentience. This Bill would open a path to correcting that, by allowing scientists to come together and reach an agreed view and a developing view, in the light of new discoveries.
Finally, I come somewhat reluctantly to the question of abortion, which I have not mentioned until now because the Bill is not about abortion. The question of sentience is much broader than that and relates to foetuses where the mother is extremely keen, devoted and committed—as indeed are her professional carers—to the healthy birth of that child.
The Bill does nothing to change abortion law or the way in which any proposed future changes to abortion law are carried out. It has no implications, other than to provide a focus for scientific knowledge, on the course of legal developments relating to abortion. It does nothing to impinge on the legal rights of women to terminate a pregnancy. Anyone who argues that it does is implicitly arguing that those rights are defensible only if scientific knowledge is somehow suppressed and dispersed.
This is a modest Bill intended to provide scientific knowledge and inform public debate. It is also based on a clear precedent advanced by the Government; the Animal Welfare (Sentience) Act was a government Bill. It is hard to see on what grounds the Government or noble Lords would object to it. I beg to move.
My Lords, when I saw this Bill on our prospectus I was immediately suspicious. It follows close on the heels of an effort during the Public Order Bill to enable protests on the doorstep of abortion clinics. Happily, that effort failed and it was agreed that buffer zones were necessary. The amendment would have allowed people who totally opposed the termination of a pregnancy to harass women as they entered clinics for medical attention.
Why would an independent committee be needed to respond to the issue before us today? The Royal College of Obstetricians and Gynaecologists updated its research and guidance less than two years ago, in 2022. The royal colleges—I am a fellow of three of them—are the seats of high-level monitoring of global developments in research and conduct of medical matters. They do it with great care and their research relates to what happens not just in the United Kingdom but around the world.
Why am I concerned? The politics of the United States of America is riven with divisions on the issue of abortion. For many decades it has been weaponised by far-right, deeply misogynistic organisations calling themselves Christian, which oppose women’s right to reproductive freedom. I always say, “Follow the money”. Dark money has surged into the United Kingdom’s anti-abortion groups in recent years. We should be concerned about overseas political influence inside our country. Sadly, many far-right organisations are being funded by such sources. Shadowy funds whose sources are obscured or not fully disclosed play an alarming part in enabling think tanks and far-right political groups to distort our politics.
One group, the Alliance Defending Freedom, has doubled its activities in this country in the last couple of years. Founded in the United States in 1993, the Alliance Defending Freedom—the freedom of only some—is an influential conservative group that aims to promote Christian principles and ethics. It is behind legal efforts to roll back abortion rights, remove LGBT+ protections and demonise trans people—that is not very Christian, and I count myself as one. It claims that its tireless work—
Is the noble Baroness suggesting that I have been in receipt of dark money or any money at all, or would she like to take the opportunity to state that she is not making such an allegation?
I am perfectly happy to say that some innocent dupes are used by some of the organisations funded in this way.
This organisation claims that its tireless work helped the United States Supreme Court overturn Roe v Wade, which guaranteed the right to abortion. The ADF has supported controversial anti-abortion activity in this country, including supporting and funding protesters outside clinics. We are seeing the ramping up of spending to bring US-style abortion politics into our country.
May I ask the noble Baroness what precisely this has to do with a Bill proposing a committee of research and analysis?
It is quite clear that the purpose of the Bill is to seek to roll back advances that have been made in relation to abortion, and to try to reduce the time limits we currently have. The House should know that in 2020, £390,000 came through the ADF into the UK, and it is not disclosed where those funds come from. That money doubled to £770,000 in 2022. We do not have a current figure, but I am sure it is multiplying at a rate of knots. We are seeing, I am afraid, an effort to weaponise the issue of abortion and women’s freedom in order to create divisions in our society. I really hope the House sees the true purport of the Bill.
My Lords, I congratulate my noble friend Lord Moylan on bringing forward the Bill—
My Lords, I believe I am the next speaker; thank you.
There was an occasion in the last Session when a speech was made—it may well have been by the noble Baroness, Lady Kennedy of The Shaws—and it was so impassioned that the late Lord Cormack looked at me as the next speaker and said, “Follow that!”, and I said, “I shall try”. Sadly, I failed to follow it very effectively.
Today’s debate is clearly very heated. Yet, as the noble Lord, Lord Moylan, said in his opening remarks, it is a very small Bill and is setting the framework for a committee. It is supposed to be an evidence-based committee building on scientific expertise and changes in scientific and medical knowledge. From sedentary positions on the Opposition Front Benches, I have heard that the Bill has everything to do with abortion. Yes, it may have something to do with abortion, but not only abortion. There is nothing on the face of it, or in terms of its intent, that is about rolling back women’s rights. What is discussed in the excellent briefing from the Library is that the Royal College of Obstetricians and Gynaecologists and the British Medical Association have different guidance.
The British Medical Association has suggested that
“even if there is no incontrovertible evidence that the fetus feels pain, the use of fetal analgesia when carrying out any procedure (whether an abortion or a therapeutic intervention) on the fetus in utero may go some way in relieving the anxiety of the woman and health professionals”.
Surely, if a foetus of 24, 25 or 26 weeks’ gestation is sentient—whether the proposal is for a medical intervention or for abortion—no one would want the foetus to suffer, including the woman carrying the foetus, whether they intend to carry it to term or they do not wish it to live. Surely nobody wants to inflict pain. If we understand at what point foetal sentience really comes into play, appropriate decisions and recommendations can be made. At the moment, arrangements for medical interventions are in place only for spina bifida, but there are other cases of in utero interventions that should be explored.
There are differences of opinion, and there may be different medical judgments on analgesia and anaesthesia, precisely because the questions of the impact on the unborn child will be different. It may be necessary to use analgesia or anaesthesia, or it may not be appropriate, but we need to understand the situation. The proposed committee would be looking at scientific evidence. It would help clinicians to form views and to advise parents better about the most appropriate way forward.
The suggestion that this is simply about rolling back rights to abortion is disingenuous. I know that from the Front Benches, there is considerable disagreement. I am used to being a lone voice from these Liberal Democrat Benches. Nevertheless, given that my party—and, I believe, other parties—spends a lot of time saying how important it is to have evidence-based policy-making, surely, setting up a committee to look at the evidence and give appropriate information to parents would actually be of benefit to all.
My Lords, first, I apologise to the noble Baroness, Lady Smith, for my inability to read. Secondly, I congratulate my noble friend Lord Moylan on bringing forward this Bill. To the noble Baroness, Lady Kennedy, I say, not only am I not in receipt of any dark money; I am not a member of any sort of pro-life group, APPG or anything like that.
I think it fair to say that the Bill is not that likely to become law, so I suggest that my noble friend is putting down a marker. The noble Baroness, Lady Smith, expressed very well the way we should be looking at these things, on a scientific basis. I am, in fact, going to talk about abortion, which is a path down which one should tread very warily. Last year, there were some 200,000 or more abortions, of which the vast majority will have been perfectly healthy foetuses that people just did not wish to take to term. That was not the intention of David Steel in 1967; it was thought to be quite a minor adjustment to the number of children that would be aborted.
My own view is that abortion is necessary on many occasions, but it is a necessary evil. It is not something that anybody could contemplate lightly or would wish to see happen—either the mother or indeed the child. This is not about women’s rights. The reason why I am putting down a marker today is that there is talk of decriminalising late abortions, after one or two very high-profile cases of a mother being prosecuted. In the particular case I am thinking of, a mother aborted at home, through drugs, a 36-week-old foetus.
Of course, that child could have lived perfectly happily, so we have to ask ourselves not about women’s rights, but about where murder begins and murder ends. A child that could have been born perfectly happily—that is being born in the ward next door—being aborted when it could have lived, seems to me to be a very, very serious matter. I put this down as a marker because I hope that nobody will pursue the idea that we decriminalise late abortions, which may take place at home. This is not about women’s rights, but about a decent, humane society.
My Lords, in declaring non-financial interests as listed in the register, I express my gratitude to the noble Lord, Lord Moylan, for bringing in this Bill. I entirely endorse what my noble friend Lady Smith of Newnham has said to the House today. Being pro-life—for a woman and a child—and believing in the right to life as a human right does not make people misogynist bigots, and they should not be caricatured as such.
During the passage of the Animal Welfare (Sentience) Act 2022, I wrote that it left a gaping hole because of the lack of any comparable mechanism for the consideration of the human foetus, a point the noble Lord made earlier. I agree with what CS Lewis said in support of the National Anti-Vivisection Society: if you start by being cruel to animals, you will also end up being cruel to human beings. It is that incongruity, and how we treat the most vulnerable of our own species, that is close to my heart, and I make no apology for that.
In the 18th century, Jeremy Bentham argued that the relevance of pain was not dependent on the ability to think rationally, but rather on the ability to feel, as animals can do. In 1789 he wrote,
“the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”
There is an analogy here with foetal sentience, one which emerged in two ad hoc inquiries held in Parliament and in which I took part, one chaired by the late Lord Rawlinson of Ewell, a former Solicitor-General. We said that, like a newborn infant, a foetus may not be rational in the way an older child or adult is, but, if there are grounds to believe that a child in the womb may be able to suffer, we have a responsibility to do what we can to minimise such suffering. If we are uncertain about the exact point at which, and by how much, an unborn baby suffers, we should always err on the side of caution.
The noble Lord, Lord Moylan, referenced, by implication, Professor John Wyatt, who has three decades of experience treating extremely premature babies, including a large number born at 22 or 23 weeks, below the current abortion time limit. A few years ago, when addressing parliamentarians, in sobering evidence, Professor Wyatt told us that there was a link between what the foetus and premature babies experience. He said:
“I think from my observation of extremely premature babies that they are sentient, they are conscious, and they are responsive to their environment”.
Why should we care? First, this is a human rights issue. The preamble to the UN Convention on the Rights of the Child, to which the UK is a signatory, states that the child
“needs special safeguards and care, including appropriate legal protection, before”—
please note that word—
“as well as after birth”.
We have obligations that must be honoured, and how will we do that without expert research or guidance policy?
Secondly, as we have heard from the noble Lord, Lord Robathan, barbaric, discriminatory legislation permits abortion up to and even during birth in the cases of Down’s syndrome—it was World Down Syndrome Day yesterday—club foot and cleft lip and palate. As the noble Baroness told us, the NHS recommends the use of analgesics when performing foetal surgery on babies with spina bifida after 20 weeks, but pain relief is not mandatory for foeticide abortions.
Noble Lords should study the recommendations of the UN Committee on the Rights of Persons with Disabilities and attempts in the other place, in one instance, to increase the opportunities of abortion right up to birth in all cases, and, in another, to lower the abortion time limit from 24 to 22 weeks, in line with the increase in survival rates of babies born at 22 and 23 weeks. I also gently point out that, as long ago as 1988, I succeeded in the House of Commons in persuading 296 MPs—a majority of 45—to vote for my Bill to reduce the upper time limit to 18 weeks, the Swedish upper time limit. It would have saved the lives of some of the 10 million babies who have been aborted in Britain, but it was talked out by opponents. The case today is even more compelling; please note that the EU average upper time limit is around 12 weeks, with many Parliaments greatly influenced by the questions of sentience and pain.
In other areas of medicine, the precautionary principle is often applied: the idea that, where there is uncertainty, we should err on the side of caution. So, it seems to me that we ought to be prudent when it comes to foetal sentience.
I end with Professor Wyatt’s words:
“I think we should play safe, we should give the foetus the benefit of the doubt. We should assume that it is capable of experiencing pain and unpleasant sensations, and we should then treat the foetus appropriately, which would if necessary be with strong pain relief medication or with anaesthesia”.
This is both sensible and humane. A foetal sentience committee, which is all that the noble Lord, Lord Moylan, is asking us to support today, would enable us to increase our understanding in this area. I therefore commend the Bill to the House, and I gently say to my friend, the noble Baroness, Lady Kennedy of The Shaws, that Article 3 of the Universal Declaration of Human Rights states:
“Everyone has the right to life”.
Supporting that does not make me right wing, a bigot or a misogynist.
My Lords, I am grateful to my noble friend Lord Moylan for introducing this Bill for a foetal sentience committee to review understanding of foetal sentience and to inform policy. It is a pleasure—indeed, I am humbled—to follow the noble Lord, Lord Alton, with whose views I find myself so often in agreement. The last time was in a committee that discussed China. I find him the most persuasive of human rights defenders in this Chamber and have done since I arrived.
This is a modest Bill, with modest aims: to approach policy in this area in the same way as in others, through consultation and the careful weighing of specialist evidence. That, as you would expect, continues to change, with new research and new evidence. In this area especially, there are many disagreements about the weight given to different parts of the evidence, and specialists themselves often change their views, as indeed has been pointed out earlier in this debate.
In particular, there are now doubts about whether some of the physiological assumptions that have dominated the debate are justified. In respect of the perception of pain, emphasis has often been placed on the role of the thalamus, a group of cells located centrally in the brain that helps to relay sensory and motor signals to the cerebral cortex, and on the cortex itself, the grey matter that has a role in memory, thinking, learning, reasoning, problem-solving, consciousness and functions related to the senses. Some researchers regard this focus as too narrow. There is, therefore, very good reason for all this complex, controversial and developing material to be weighed by an independent committee that can help advise government and parliamentarians as they make and shape policy and legislation.
The approach already exists in the case of animal welfare, as we have heard, where there is a committee on animal sentience—see the Animal Welfare (Sentience) Act 2022. I see it as a model for the Bill. Indeed, as we heard earlier, in the UK the foetus of a protected animal, in the case of a mammal, bird or reptile, is protected once two-thirds of the gestation or incubation period of the relevant species has elapsed, as set out in Section 1 of the Animals (Scientific Procedures) Act 1986. This Act regulates the use of protected animals in any experiment or other scientific procedure which may cause pain, suffering, distress or lasting harm to the animal. It is important that, as a society, we do not knowingly and unnecessarily inflict pain. We have legislated to stop this happening in protected animals and prenatal animals. We should now extend this welfare to our own species, and a small but significant step in doing so is to gather and sift the relevant evidence.
I understand that one reason why some, including the noble Baroness on the Benches opposite, oppose this Bill is that they see it as a covert attack on the present abortion laws. If the committee is set up as proposed, they fear that it will, as the science develops, find more and more evidence that foetuses, as they like to regard them, are indeed prenatal babies, able to feel pain from an early stage, and that abortion is merely premature infanticide. Yet, however strong their views, they should not try to bury evidence that goes against them. They should be willing for the scientific picture to be fully understood and presented in all its nuances to policymakers, as this Bill proposes, and to make their arguments, just as those who oppose them should do in the light of this Bill.
My Lords, I commend my noble friend for tabling this Bill, which is on such an important issue. I had hoped that we would restrict our debate to empirical evidence on the merits of this modest Bill, rather than hear smears about right-wing dark money and conspiracies.
I will restrict my remarks to a few reflections on relevant studies on both sides of the debate and highlight the need for objectivity in this area, of a kind that could be provided by a suitably constituted committee.
Why do noble Lords who are proposing and supporting this Bill assume that the Royal College of Obstetricians and Gynaecologists is not capable of researching in the way that the noble Lord describes? Why are they again attacking institutions that have expertise and do this constantly? It is like the attack on the Supreme Court. It is basically expressing contempt for the institutions that currently exist and doing precisely what they want, because they want to set up committees that I suggest would be weighted with people that they would choose.
I think that is a fatuous conspiracy theory again, but, if the noble Baroness satisfactorily answers my question about the involvement of Marie Stopes International and BPAS in the RCOG, I will gladly debate with her on the issues that she raises.
If I can continue—
I am not addressing the noble Lord. I am speaking to my colleagues on his Front Bench. I am very sorry, but shouting “you” and pointing is not the conduct that we expect in this House. It is in our guidance, so I ask the Government Whip to please remonstrate with his colleague not to behave like that.
I say to noble Lords that the noble Baroness, Lady Kennedy of The Shaws, had ample opportunity to make her points. She intervened on me and I put a very reasonable question back to her. Perhaps I can now continue.
Noble Lords may be aware of a fascinating peer-reviewed academic study published in 2010 of twins in the womb at 14 weeks of gestation. The study found that the twins’ self-directed hand movements were more calibrated than movements to the uterine wall, while movements towards the co-twin exhibited even greater care. The study determined that such deliberate actions could not be the result simply of spontaneous reflexes. The team behind the study concluded that these findings force us to predate the emergence of social behaviour. Another study published by a team of child psychologists and neuroscientists in 2006 found “surprisingly advanced motor planning” in foetuses at 22 weeks’ gestation, again pointing towards a sentience of the foetus during the second trimester of pregnancy.
These are precisely the kinds of studies that ought to be informing government policy, yet neither was cited in the RCOG reports on foetal sentience to which my noble friend alluded earlier. Some will no doubt argue that a committee is not required when we have the Royal College of Obstetricians and Gynaecologists to guide us, but, on the contrary, I would suggest that RCOG reports on foetal sentience highlight the need for objectivity in this area and there are a number of good reasons to be cautious about accepting the conclusions. The RCOG itself has now distanced itself from some of the conclusions in its 2010 report. For example, its updated 2022 report no longer asserts, as the earlier one did, that a foetus is in “continuous sleep-like unconsciousness or sedation”. The 2022 report also removed a section on responding to common questions that included answering the question, “Will the baby feel or suffer pain?” with “No, the foetus does not experience pain”. Seemingly, it is no longer sure.
Since the RCOG has rejected sections of its own report, it would seem wise not to assume that its 2022 update is wholly reliable either. In a letter published in the European Journal of Pain, the Italian neonatologist and bioethicist Carlo Bellieni, who has written extensively on foetal pain, has questioned the conclusions of the 2022 report, arguing that they were based on misrepresentations and incorrect extrapolations of the research cited in their support. As a layman, of course, it is difficult for me to comment objectively on differing research, but what is clear is that government policy would be assisted by a committee that can provide objectivity in this debate and consider all relevant findings. In fact, this is something that ought to be supported by the RCOG.
Let me finish with a final reflection on why this matters beyond simply informing the abortion debate. A 2007 academic journal article, “Neurodevelopmental Changes of Foetal Pain”, asserted:
“Exposure of the foetus and premature newborn to pain has been associated with long-term alterations in pain response thresholds as well as changes in behavioural responses relating to the painful stimuli”.
In other words, if a baby experiences pain before birth, it may impact its development and behaviour in later life. It is therefore imperative that we understand foetal sentience adequately so that any treatment of unborn babies is performed in a way that will not lead to long-term damage. I therefore strongly support my noble friend’s Bill.
My Lords, I wish to put on record that although my noble friend and I have very different views, as a matter of principle I defend her right to make her views known, and I hope she will understand why I respectfully disagree with her. I absolutely agree with the noble Baroness, Lady Kennedy of The Shaws: she is spot on. This Bill is part of a far wider anti-gender, anti-LGBT attack on human rights, a campaign which is international and largely but not exclusively put forward by national Conservatives and Christian nationalists.
The noble Lord, Lord Moylan, in his introduction said two things, both of which I think have subsequently been shown to be untrue: that this Bill is modest, and that it is not about abortion. It is far from that. It is unprecedented government interference in the ethics and practices of abortion care. It seeks to circumvent expert clinical guidelines, not because of another body of clinical evidence but because of an ideological disagreement with the conclusions of the work of the Royal College of Obstetricians and Gynaecologists. I should say to noble Lords opposite that the RCOG is duty-bound to provide evidence-based clinical guidelines, and to think that it would do so without talking to anaesthetists and other relevant professionals is to do that college a great disservice.
This Bill is focused solely on the foetus and says nothing about the rights of women. It is from the same stable that has brought similar legislation about in American states such as Arizona, Kansas and North Carolina, and it absolutely is a precursor to further legislation which will limit and outlaw abortion in full. Setting up a committee in this way, with no remit to consider the rights of women or their experiences and healthcare, speaks volumes about the real motivation behind this legislation. I have to say to noble Lords opposite, and on the Cross Benches, who have repeatedly drawn parallels with the use of analgesia in animal scientific experimentation, that they have ignored the fact that in this Bill we are talking about foetuses that are carried in the bodies of women—who are sentient beings capable of expressing not only their own healthcare needs but those of others.
This has been presented as being a method by which we can get to objective evidence. It is nothing of the sort. This is about setting up a committee to consider selective evidence—evidence that, I put it to the noble Lord, will inevitably lead towards a diminution of women’s rights. Far from being humane, the Bill has considerable scope for unintended consequences. The threats to women, not just during pregnancy but during childbirth, were this to go ahead, are considerable. We have already seen that throughout the United States, in states where these sorts of measures have been introduced.
I put it to you that this Bill does pretty well the opposite of what has been claimed for it. It is actually about picking and choosing selective evidence in order to lead down a path, as has happened in Alabama, towards the complete abolition of abortion. It is a Trojan horse. I really hope that we will not be fooled, and that we will put this in the context of that wider campaign against women’s rights and human rights.
How does my noble friend account for the disparity between the views of the BMA and of the Royal College of Obstetricians and Gynaecologists?
It is not uncommon for health professionals to have different views and for their views to develop over time. However, I would much rather listen to either of those than to a hand-picked political committee making political decisions on what should really be a health matter.
This is a Trojan horse, and I really hope we will see through it. I thank the noble Lord, Lord Moylan, for unveiling, yet again, a little bit more of this wider campaign against women’s rights and human rights. He has done us a service.
My Lords, I refer noble Lords across the House to the Companion at 4.18, where it states clearly that we address each other as “noble Lord”. We do not use the word “you”, and there is a good reason for that, which is that that actually makes us a politer House. Standing up, even in impassioned debates on subjects about which people feel strongly, and saying “you” will lead to people pointing, which is not acceptable, and there is a reason for this. I have been in this House for 26 years, and there are some things that are wise, and this is one of those.
My Lords, I heartily endorse what the noble Baroness has just said about how we address each other. Does she think that stating quite clearly that those who disagree with you are either in receipt of “dark money” or are “innocent dupes” meets the standards of the House?
The noble Lord will note that my noble friend made all her remarks within the guidelines of the House on how we address each other. He may not enjoy what she had to say, and he may disagree with her—some of us do agree with her—however, she did it within the rules of the House.
First, I would like to congratulate—
I am sorry to interrupt the noble Baroness. I do not think I have ever misused the procedures of the House and I do not intend to start now. I respect the noble Baroness and we have made common cause on many occasions. Does she think it is within the rules of the House to talk about other noble Lords as if they are dupes or as if they are in receipt of money from outside that has been undeclared?
If the noble Lord reads Hansard, I am not sure that that is actually what my noble friend said. However, she is perfectly capable of defending herself.
I want to start my remarks by congratulating the noble Lord, Lord Moylan, on introducing the Bill with such clarity. He called it “modest”, but I beg to differ: this is not a modest Bill. It is short, which definitely helps, but it is not modest. I also need to start by stating that Labour’s policy is that abortion is an essential part of healthcare. We support a woman’s right to choose and we believe that access to safe, legal abortion should be available throughout the UK.
We need to be clear about the true intentions of this proposal: it seeks to chip away at the Abortion Act and change how we govern abortion law. The noble Lord, Lord Moylan, may have said that this is not about abortion or the Abortion Act, but the fact that so many of his supporters have said exactly the opposite—that this is indeed about abortion—shows that that is what the Bill is actually about. We can be clear that that is the intention behind the Bill.
The topic of foetal sentience is under constant review by the Royal College of Obstetricians and Gynaecologists, and its last review found no evidence of a foetus experiencing pain before 28 weeks. It is best that we trust expert medical bodies and scientists, not a Government-appointed committee, to say what is the case and how we should proceed. We need to be clear that the Bill seeks to circumvent expert clinical guidance because its proponents have an ideological disagreement with its conclusions. I was looking at the list of members of the royal college’s committee, and I suggest that noble Lords do the same, because it is a truly impressive medical and scientific body that takes its job very seriously. One noble Lord said that the college’s view had changed between 2010 and 2022. In a way, that proves the point: the point of that committee is to do that review.
Has there ever been a time when a Bill has been brought to this House asking the Government to set up a committee to analyse the medical evidence for, for example, coronary heart disease or endometriosis? No, because we trust the relevant expert medical bodies to do that job for us. We believe the Bill represents a dangerous move to politicise the way that we make decisions about healthcare, and for that reason I will not be supporting it if it moves forward.
The review of foetal awareness and pain perception undertaken by the Royal College of Obstetricians and Gynaecologists found in 2010 that the cerebral cortex is necessary for pain perception, and that connections from the periphery to the cortex are not intact before 28 weeks. It was therefore concluded that a foetus cannot experience pain in any sense before that stage. In the light of that, I ask noble Lords to consider why we would vote to set up a committee on that issue, unless that evidence is not considered robust.
I note that, if the Bill were to pass, the remit of this government committee would not extend to the health and well-being of pregnant women, as the noble Baroness said. The comments about sentience in fish, animals and so on make one question where the supporters of the Bill place women’s health, well-being and reproductive rights on that scale. One has to question where that is coming from.
No other area of healthcare is subjected to a dedicated government committee designed to limit access to its treatment. The Bill would leave a woman’s right to access to care at the whim of a committee focused solely on the foetus, with no remit to consider women’s experience, needs or rights. I will certainly not support the Bill as it progresses.
I thank my noble friend Lord Moylan for introducing this Private Member’s Bill. I am grateful for the contributions by all noble Lords to the debate, which has proven more than ever that there are some deeply held personal views. That is because the Bill itself raises issues of profound sensitivity on a topic on which, as we see, there is a wide range of views.
As the noble Lord said, the main purpose of the Bill is for the Secretary of State to
“establish and thereafter maintain a committee called the Foetal Sentience Committee”
to provide
“evidence-based, scientific expertise on the sentience of the human foetus in the light of developments in scientific and medical knowledge”.
The Government have expressed reservations over the Bill, as we do not believe that legislation is needed. The aims of the Bill can be achieved through alternative routes, thereby rendering legislation unnecessary. The Government have a duty not to legislate where other reasonable processes are available. The House can also decide, if it wishes, whether it wants to set up such a committee to scrutinise the matter. I fear that, if the Government were to set up such a committee, we would immediately get into issues of who should be on it, its composition and whether it leans one way or the other. That would inevitably lead to the politicisation of it all, and I think we all agree that that would be a regrettable step.
Before I turn to the points raised in the debate, let me first remind noble Lords of the history of abortion legislation in Great Britain and the Government’s long-standing position on matters of abortion policy. Abortion in Great Britain is governed by the Abortion Act 1967, which clearly defines grounds under which an abortion may be carried out. With the exception of emergencies where it is necessary to perform an abortion to save the life of the woman, two doctors must certify that, in their opinion, which must be formed in good faith, a request for an abortion meets at least one ground set out in the Act, and they should be in agreement about which ground this is.
The current gestational limits of abortion in this country are based on the gestation at which a foetus is considered viable, not on foetal awareness. Foetal viability is the ability of a foetus to survive outside the womb. The link between viability and the gestational limit for abortion was made in the 1990 amendments to the Abortion Act, when the gestational time limit for most abortions was changed from 28 to 24 weeks following a change in widespread medical consensus.
An important feature of abortion legislation is that Parliament decides the circumstances under which abortion can be legally undertaken, not the Government. The Government take a neutral stance on changing existing law relating to abortion. Any change to the law in this area is rightly a matter of conscience for individual parliamentarians, rather than for the Government.
Over the last 50 years, the Abortion Act has contributed to a significant reduction in maternal mortality and enabled lawful access to abortion, which is an important area of women’s healthcare. The department remains committed to ensuring that women have access to safe, legal abortions on the NHS, including taking abortion pills at home where eligible, in accordance with the Act.
According to our most recent data, most abortions take place in the early stages of pregnancy, with 93% up to and including 12 weeks’ gestation. Abortions at 20 weeks and beyond are very infrequent. The percentage performed at 20 weeks and over was 1% in 2020 and 2021, and 41% of these were under ground E of the Abortion Act, which states that, if the child were born, there would be
“a substantial risk … it would suffer from such physical or mental abnormalities as to be seriously handicapped”.
The decision to proceed with an abortion due to foetal abnormality is very difficult for parents. In 1990, when the grounds for abortion were last amended, Parliament decided that doctors are best placed to make these decisions with the women and their families.
A few noble Lords raised issues using the example of the Animal Welfare (Sentience) Act, which legislates for the creation of animal sentience committees. This legislation reflects that the Department for Environment, Food and Rural Affairs sought independent advice specifically on animal welfare, as it is a topic on which it sets policy. The Government do not set policy on foetal awareness. When we consider matters as sensitive as that of foetal awareness, it is right that clinical policy is reached through medical consensus among the professional bodies that set clinical guidelines.
We must recognise that the prevention and relief of unnecessary pain is a primary concern in clinical practice. There is no doubt that there have been medical advances over recent decades in in utero surgery and in the study of pain perception. Clinicians who are experts in this field have undertaken a balanced study of the evidence. It was on this basis that, recently, the Royal College of Obstetricians and Gynaecologists undertook a comprehensive review and published its foetal awareness evidence review in December 2022.
This review concluded that evidence to date indicated that the possibility of pain perception before 28 weeks of gestation was unlikely. As an independent organisation responsible for producing clinical guidelines and setting standards for high-quality women’s healthcare, the RCOG’s clinical expertise on this matter is recognised by the Government. In response to questions raised, my understanding is that analgesia is used more to immobilise the foetus for its safety when operations are taking place.
In conclusion, the Government have expressed reservations about this Private Member’s Bill, as a number of non-legislative routes exist through which a committee could be created to consider this matter. I recognise the sensitivity of this topic, as well as the diverse and deeply held views across the House. I thank all noble Lords who have taken the time to attend and participate in this important and sensitive debate.
My Lords, I am grateful to those who have spoken in the debate. I am not proposing to answer them individually, but I shall make some comments, if I may, about the extraordinary speech made by the noble Baroness, Lady Kennedy of The Shaws. The first thing is that nobody, certainly not I, made any deprecatory remarks about the Royal College of Obstetricians and Gynaecologists. The idea that we were, or I was, holding it in institutional contempt is simply not borne out by anything that was said. All that was said was that other professional bodies of equal reputation have reached different views, and that a forum for bringing them together so that something could be worked out that might have a more robust character was something that could be recommended. It was complete fantasy and totally unfair to claim that we had said, or I had said, anything deprecatory about the Royal College of Obstetricians and Gynaecologists.
The second thing that I feel I have to say is that, given an opportunity, as the noble Baroness was, to state that she did not think that I was in receipt of dark money, or any money, in relation to this, her only answer was to accuse me of being some dupe. Without making any judgment, I will say that I have never heard anything like that said in your Lordships’ House, in the admittedly short time I have been here.
I shall only repeat, in a way, what I said earlier, in response to the noble Baroness, that the right to an abortion—any right that depends on blanking out developing scientific knowledge—cannot be regarded as a very robust right.
The noble Baroness, Lady Barker, suggested that somehow the evidence before this committee was going to be selected. I have really no idea where this idea comes from or who it is she thinks is going to do the selection. But that brings me to another point—one, I am sure, of genuine misunderstanding—the fault for which I have to attribute to myself.
There was a suggestion by some noble Lords, in particular the noble Baroness, Lady Thornton, that the committee would be full of politicians or politically appointed persons. That was never my intention. I thought that I had made it clear, and perhaps it should have been made clear in the Bill—that is something that could happily be addressed by an amendment—that the membership of the committee was to be made up of experts with scientific knowledge. That is how it would generate scientific knowledge and examine the research. Of course, leading among those experts, I would expect appropriately chosen representatives of the relevant royal colleges and other professional bodies, not politicians at all. I do not think that the Animal Sentience Committee, to take an example that provides a parallel, is stuffed with politicians or political appointees. I think that it has members who know something about animals and how they respond to pain. But that point may be a genuine misunderstanding, and one that I would be happy to address, as I say, in Committee.
As for the Minister’s response, I am grateful for his tone but very sorry to hear his content and the fact that he feels that he cannot agree. In effect, as another noble Lord pointed out—I think that it was the noble Baroness, Lady Smith of Newnham—he is rejecting an opportunity to make policy-making more robust and evidence-based. There were some very clever but totally unpersuasive words about the Animal Sentience Committee. The Minister said, in effect, that the Government’s view was that crustaceans deserve higher esteem and regard than the human foetus. Neither position, in my view, is sustainable. With that, I beg to move.
(9 months ago)
Lords Chamber
My Lords, I am grateful for the opportunity to propose, for your Lordships’ consideration, a statutory mandate for the prevention of and response to genocide and atrocity crimes. Instances of mass atrocity violence—war crimes, crimes against humanity, genocide and ethnic cleansing—are not just rising but spiralling around the world.
I am the director of the International Bar Association’s Human Rights Institute, and I have spent time with, and campaigned alongside, survivors of atrocity violence, from Yazidi women in Syria and the Uighur communities in exile from China, to the women and human rights defenders of Afghanistan, and many others. Their stories are a glaring testament to the collective failure to stand resolute in the face of atrocity crimes and hold accountable those who continue to perpetrate this kind of identity-based violence.
I have been helped in all my work by a wonderful team at the International Bar Association’s institute, particularly by Dr Ewelina Ochab, who helped in the drafting of this Bill. I have also been assisted by another great group of people, led by Dr Kate Ferguson, who runs an operation called Protection Approaches, which is concerned with foreign affairs.
Of today’s major and emerging foreign policy crises, the vast majority—from Ukraine, Sudan, Syria, Israel and Palestine to Myanmar and Xinjiang—are driven by violent targeting of civilian groups based on their identities. If left unchecked, the global propellants of prejudice and inequality, climate collapse, the retreat from liberal democracy, and the great changes in technology, as we see in social media and so on, mean that identity-based mass atrocity crimes will multiply over the next decade. Of that I am sure. We are already seeing it happening.
At the same time, growing disregard for international law, for the Universal Declaration of Human Rights and our collective responsibilities to prevent and protect, has ushered in an age of impunity. We have failed, time and again, in the face of these grave crimes, and as a consequence our world—indeed, our nation—is less safe and becoming less so. Impunity begets impunity.
Regrettably, these crimes have deep consequences. Perpetrators commit genocide and crimes against humanity because they work; they fulfil the dreadful political objectives of their architects. It is not a nice fact, but it is a true one. It is past time that we, and our Government, accept it. For too long, the reluctance to do so has created a strategic and moral deficit in government policy.
This Bill would move towards filling that gap. Anchored by a statutory mandate and, importantly, bolstered by political leadership and strategic vision, properly prioritising atrocity prevention could see the UK lead the world on preventing violence and protecting civilians. It would re-energise commitments to international humanitarian law and rehabilitate Britain’s battered reputation on the global stage, which has happened as a result of our pulling away from our international obligations.
It is commonly said that armed conflicts are a precursor to the commission of mass atrocity crimes, but in fact it is not always that way round. Indeed, during the many human rights crises of the modern age, mass atrocities often came first and caused armed conflict to break out. For example, mass atrocities drove armed conflict in Yugoslavia in the 1990s and failures to adequately respond to mass atrocities against the Rohingya in Myanmar in 2017 emboldened the Tatmadaw, contributing to their seizure of power in February 2021 and the ensuing civil war. The current conflict between Hamas and Israel follows decades of terrible conduct, by both the IDF and Hamas, before, during and after 7 October. We are now seeing the consequence of that in the current crisis in Gaza.
Members of this House and the other place have stood together in outrage, time and again, and I see those of your Lordships who have often supported those of us who have sought to put amendments into legislation. There has been a strong sense of outrage, but it is not sufficient. Outrage does not help to protect innocent civilians from deliberate attack, arbitrary detention, summary execution, sexual violence and torture, or forced starvation.
This Bill seeks to address this fact head on and focuses on what can be done. In recent years, the Government have made welcome progress, recognising mass atrocity prevention as a new foreign policy priority. That started in 2021, and I pay tribute to the Government for doing that. Following the much-needed inquiry by the International Development Committee on this very issue, a mass atrocity prevention hub was created in Whitehall, tasked with co-ordinating the UK’s approach to these crimes. Here I pause to pay tribute to organisations such as Protection Approaches and the UK atrocity prevention working group, and indeed the institute which I have the fortune of directing, all of which have worked on making changes. Despite the steps forward, it is difficult to see their impact in the inconsistent and insufficient policy responses of government to widespread, systemic and systematic violations.
The Bill’s first purpose is to provide a statutory basis to elevate and leverage the important work of the mass atrocity prevention hub. It also includes the monitoring of the steps that take people, and Governments, on a trajectory towards genocide. The hub is forecasting global atrocity risks and making country-specific risk assessments, along with the development of early-warning indicators and policy-making efforts. All of that is good, but it is not being adequately supported, so we press for the changes in the Bill.
The second provision of the Bill addresses and seeks to enshrine the need for senior political leadership and ownership of the UK’s moral and legal obligations to prevent and protect. The noble Lord, Lord Ahmad, is in his place, and he holds ministerial responsibility for UK atrocity prevention. I know he shares my deep concerns about these matters, but I would welcome his thoughts as to what can be done to repair our damaged reputation, as a state that has strayed from the bounds of international law in recent months under this Government.
Thirdly, the Bill addresses the urgent need to support and train embassies and country teams on the dynamics and warning signs of modern atrocities, and the trajectory towards genocide in some cases. The Government have already committed to doing this, but are yet to deliver on it. UK country teams in fragile or violent states have to be properly resourced to embed atrocity prevention thinking and strategy within their policy and programming.
The Bill is ambitious. I make no pretence about the fact that we want to see a growth in accountability for upholding and delivering on this mandate. As your Lordships will see, there is a section in the Bill which requires a Minister to lay an annual report before Parliament. Such a report would enable proper scrutiny of the United Kingdom’s contributions to prevent, protect and punish, and allow us all to advise on their development.
One of the final provisions in the Bill seeks to establish a fund with ring-fenced budget lines that guarantee consistent resourcing for mass atrocity early-warning systems, strategic policy-making and effective implementation.
This is not an expensive set of changes, but I urge them on the Government. The hub that exists is full of really good people doing good work, but it needs to be strengthened. We need proper leadership from politicians and from the Secretary of State. We want to see some real building on this, in the way that has taken place within the State Department in the United States on atrocity crimes and genocide prevention.
It is evident that any meaningful development of a strategic approach to preventing and responding to mass atrocities must bring together senior representatives of government departments—No. 10 itself, the intelligence agencies and multilateral representatives, from the UN to NATO. Atrocity prevention has been a core national security interest for the United States since 2011, supported by a clear atrocity prevention strategy launched in 2022. I knew and was a huge admirer of Elie Wiesel, the Holocaust survivor, who was very much at the heart of persuading the American State Department to take these steps and to create a hub that was about genocide prevention and atrocity crime prevention.
I hope that there will be support in government for this Bill. I know that the hub exists, but we need to strengthen it and put it on a statutory footing. I beg to move.
My Lords, it is a pleasure to speak after the noble Baroness, Lady Kennedy, and I congratulate her on her work and initiative. The Bill, as I understand it, is an important step forward which would boost His Majesty’s Government’s capabilities to implement their obligations under the genocide convention and is in line with the Government’s duty to prevent genocide.
If the Government are to implement their duty to prevent genocide, they must have comprehensive mechanisms to enable them to monitor early-warning signs and risk factors of atrocities to come. The issue of early-warning signs and risk factors of atrocities to come has been discussed in this House on many occasions. Time after time, we have raised the issue that the Holocaust did not start with the gas chambers—a message that should be ingrained in HMG’s laws and policies on genocide and atrocity crimes. However, sometimes this message is ignored, so I shall repeat it again: the Holocaust did not start with gas chambers. It started with hate speech; it started with the dehumanisation of Jews; it started with policies and laws that discriminated against Jews. It started with attacks on Jews—their places of worship, shops and places of work. It started with impunity for such acts. It started with all these warning signs and risk factors that may have been seen as irrelevant, but they were not—early-warning signs are never irrelevant.
How shocking it is that, today, on Friday 22 March 2024, we see on page 13 of the Daily Mail the following two headlines. The first is, “Jewish Boy mistreated by pro-Palestine nurses on NHS hospital ward in Manchester”. Secondly, next to a picture, is the caption: “Was this terrifying house blaze in east London an anti-Semitic attack?”. I repeat that this was on 22 March 2024.
Let us look at the harm that misinformation can bring about. Sadly, social media and mainstream news outlets, including elements of the UK Government, could be complicit because of the spread of lies about what is happening in Gaza. I shudder to think what Joseph Goebbels would have done with social media.
In this Chamber, I have raised the issue of genocide several times. I did so in relation to the genocide against the Tutsi in Rwanda, with the 30th anniversary coming up on 7 April. Would it not be appropriate if those perpetrators who are living freely in the UK are either tried immediately here or sent back to Rwanda for trial?
I have spoken about the situation of the Uighurs. The atrocities seen in recent years did not start with the forced indoctrination camps for whole populations; they started with narratives presenting the Uighurs as extremists. They started with things such as the Xinjiang regulations and blatant discrimination against members of the community, which flourished with impunity. When the early warning signs of the atrocities against the Uighurs were circulating in international media, we did not have a hub on atrocity crime, but even if we had, it would not have had the capacity or resources to conduct the kind of monitoring of early warning signs that would be needed to enable us to prevent them.
All these issues can be rectified once and for all by this Bill. I end by referring to the late, great Lord Sacks. As many noble Lords will know, we will celebrate Purim tomorrow night. Purim commemorates what was to be the first genocide: the whole Jewish population was to be murdered. In looking for what I wanted to say, I found a “Thought for the Day” on BBC Radio 4 from 22 February 2002 by Jonathan Sacks. If I may, I will share his teachings about Purim with the House:
“It’s a joyous day. We have a festive meal; we send presents to our friends; and gifts to the poor, so that no one should feel excluded. Anyone joining us on Purim would think it commemorates one of the great moments in Jewish history, like the Exodus from slavery or the Revelation at Mount Sinai. Actually though, the truth is quite different. Purim is the day we remember the story told in the book of Esther, set in Persia in pre-Christian times. It tells of how a senior member of the Persian court, Haman, got angry that one man, Mordechai, refused to bow down to him. Discovering that Mordechai was a Jew, he decided to take revenge on all Jews and persuaded the King to issue a decree that they all—young and old, men, women and children—should be annihilated on a single day”.
That is the day of Purim that we celebrate. He went on:
“Only the fact that Esther, Mordechai’s cousin, was the King’s favourite allowed her to intercede on behalf of her people and defeat the plan. Purim is, in other words, the festival of survival in the face of attempted genocide. It wasn’t until way into adult life that I realised that what we celebrate on Purim is simply the fact that we’re alive; that our ancestors weren’t murdered after all. Like many of my generation born after the Holocaust, I thought antisemitism was dead; that a hate so irrational, so murderous, had finally been laid to rest. So, it has come as a shock”—
this was in 2002—
“To realise in recent months that it’s still strong in many parts of the world, and that even in Britain yesterday a cleric appeared in court charged with distributing a tape calling on his followers to kill Jews. What is it about Jews—or black people, or Roma, or foreigners—that causes them to be hated? The oldest explanation is probably the simplest: because we don’t like the unlike. As Haman”—
the wicked figure in the story—
“put it, ‘Their customs are different from those of other people.’ And that’s why racial or religious hate isn’t just dangerous. It’s a betrayal of the human condition. We are different. Every individual, every culture, every ethnicity, every faith, gives something unique to humanity. Religious and racial diversity are as essential to our world as biodiversity. And therefore, I pray that we have the courage to fight prejudice, of which antisemitism is simply the oldest of them all. Because a world that can’t live with difference is a world that lacks room for humanity itself”.
My Lords, as we have heard, having ratified the UN convention against genocide, the UK has a treaty obligation to prevent genocide wherever and whenever it is threatened. However, too often this does not happen. It is worth while examining the reasons why and seeking answers.
As it stands, this admirable Bill has only a faint chance of being adopted by the Government. Here, I pay tribute to the noble Baroness, Lady Kennedy, for her unceasing efforts to uphold human rights. The Bill asks for considerable resources, and touches on economic and diplomatic interests of states parties to the convention. It puts forward some clear and doable mechanisms to detect, acknowledge and act upon the early indicators of genocide which are, by now, well researched; it is cost effective, certainly in terms of saving human lives.
It is, to say the least, disingenuous to believe that Governments are unaware of the potential for genocide or the early warning signs. Going back to the Rwanda massacres in April 1994 and Srebrenica in July 1995, there were clear indications. For example, in the case of Rwanda, the widely popular Mille Collines radio station virtually spelt out its genocidal plans in lightly coded messages, including references to the Tutsis as “cockroaches”. Furthermore, genocidal tribal attacks had occurred with depressing regularity in that region of Africa. In Srebrenica, the rounding up of some 8,000 Muslim men and boys and the sudden departure of the UN forces made massacres inevitable, but the events leading up to this terrible development were obvious.
The UK, like many other countries, has been deeply reluctant to act. It is said that the US officials in Rwanda were ordered not to use the term “genocide”, precisely because to do so immediately implied the obligation to act. The UK Government have consistently referred any threat of genocide to the courts to determine the application of the genocide convention. More than anything else, Governments are fearful of stepping out alone, or being seen as stepping out alone, in the absence of strong support from allies and member states.
Perhaps the way forward might include the setting up of, or greater co-ordination between, existing early warning mechanisms and units across Europe and North America. The specific task of these networked systems would be to both monitor signs and issue timely alerts to all participating member states, with a view to concerted action. Difficult as it might be to get countries to agree on such vital actions, a scheme such as this might reduce the paralysing reluctance to declare the risk of genocide and to act according to the obligations of the treaty.
The mechanisms and the tasks of a proposed genocide monitoring team set out in the Bill provide an excellent blueprint for other similar units. The UK human rights community, which has steadfastly pursued the prevention of genocide around the world, is well placed to encourage such an international network and achieve its ultimate aim.
My Lords, it is a great pleasure to follow the noble Baroness. I congratulate the noble Baroness, Lady Kennedy of The Shaws, on her speech and her tireless efforts in this area. She shines a light, and that is very important. I pay tribute to the Minister, who is also no slouch in this area; I know he makes considerable efforts to do what he can. I hope we are going to hear some very good Foreign Office reasons as to why we are going to take the legislation forward rather than why we are not going to take it forward, and I look forward to his speech.
I hope the Minister can be persuaded that this small but significant piece of legislation—small in length and minimal in cost—will help provide a massive boost to the prevention of atrocities and genocide. It will provide a laser-like focus on the efforts of His Majesty’s Government, which have consistently provided a powerful lead on such matters, as is consistent with our history, our leading international role, our status as a permanent member of the Security Council and as a leading player on the world stage.
In the US, there is a similar provision. In her opening speech, the noble Baroness referred to Elie Wiesel and the Elie Wiesel Genocide and Atrocities Prevention Act of 2018; that US legislation is very similar to what the noble Baroness is suggesting we have here in the UK.
That legislation has helped identify likely atrocities in a host of countries, working alongside the UK on occasion—for example, in Ukraine and in Myanmar. It has also provided the ability to highlight atrocities in the People’s Republic of China, northern Ethiopia, South Sudan and so on. The United States is committed to promoting respect for human rights and atrocity prevention, and we should be doing the same as a core national interest. Surely we can take up that baton.
My personal interest in this policy area comes from when I was Minister for Faith in what is now the levelling up department. I took an active role in this policy area, for example, in honouring the Holocaust memorial—I pay tribute to my noble friend Lord Polak for his powerful speech today—but also later, as president of Remembering Srebrenica. Taking up that role, alongside Dr Waqar Azmi, who provided inspirational leadership in this area—and still does—I recall a seminal visit to Sarajevo and Bosnia-Herzegovina, which demonstrated to me that genocide does not just suddenly happen; its roots are deep. This is important, as is the essence of prevention and getting in early to do something.
I recall the momentous moment I met a doctor who had been a young man at the start of the conflict. Before the conflict, he was to have been a doctor—I suppose like a GP in our own country—working in a quiet rural community called Srebrenica. He looked forward to his new life, an almost idyllic life. Then came the conflict, the war—the genocide—and his life altered. He was called on to do things that a doctor is not normally called on to do, and his life changed. He became a hero when he had wanted a relatively quiet, ordinary life.
I met many other people who talked to me of friendships they had across religion in Sarajevo and Bosnia-Herzegovina: people they had grown up with, living next door to them, who suddenly disappeared, leaving the flats that they lived in to go and live in another community. They never saw these people again. They had been lifelong friends until this moment, and suddenly this community was split, divided, and what had been harmony gave way to conflict and genocide—yet these people had been living together in perfect harmony for generations.
It is disappointing that against this background, the Minister cut back funding for Remembering Srebrenica. That is regrettable. We should be encouraging and promoting the Bill. It is in our country’s and the world’s interests that we do something on this. I commend the work that the noble Baroness has done on this and look forward to the Minister wanting to take this forward, to ensure that Britain’s role is highlighted and that we can do something powerful as a leading member of the world community.
My Lords, before I begin, I offer my great thanks to the noble Baroness, Lady Kennedy of The Shaws, for introducing this piece of legislation, which is quite admirable. Given the brickbats that were being directed at her in the last debate, I hope that my words of thanks will offer some help in that moment, and also my word of congratulations on the signal honour she received last week.
I speak in support of this Bill as one deeply scarred by my experience as Britain’s Permanent Representative on the UN Security Council during the periods of the Rwanda and the Srebrenica genocides. The UN—and we, an important participant in that body—failed to do anything effective then to prevent those genocides, although we did set up the tribunals that brought to justice their perpetrators. I pay tribute to the noble Lord, Lord Bourne, for what he has done in recent years to ensure that the horrible experience of Srebrenica is not forgotten. Whatever one says about those two events, we really must do better now.
The Bill before us does not attempt to name any genocides, either those already perpetrated or those at risk of being so. That, in my view, is extremely wise. The term “genocide” is at some risk of being sprayed around indiscriminately, at the cost of being devalued and even discredited. Look only at Russia’s claim of the genocide of ethnic Russians living in Ukraine for an example of that. In debating this Bill, I hope we can avoid citing too many explicit examples and concentrate rather on future prevention, which is what the Bill does in a non-discriminatory way—in all directions, in fact. I hope the Government will feel able to throw their weight behind the Bill.
One possible impediment—the often deployed and long-discredited argument that it is for only courts and not Governments to identify and name genocides—is no longer the obstacle it was. Otherwise, how could the Government—rightly, if belatedly—have decided to join the International Court of Justice case brought by Gambia against Myanmar in respect of the Rohingya Muslims before the court has ruled on the matter? In the case of the Yazidis killed in a genocide by Islamic State, while there is a court ruling, the Government have again—quite rightly, in my view—treated it as genocide, even though the court in question was a German one and not an international court; it was what the Government in a different context might have called a foreign court. Since the Government are no longer as attached as they were to their earlier argument, it would surely be better to systematise the process of reaching a prima facie determination of genocide. That is what the Bill would provide the instruments to achieve.
Britain cannot on its own prevent an act of genocide, of course. It can act only as part of an international collective effort to do so. The Bill, which largely replicates what is already being done by the US and which also could be followed, if we give a lead, by the EU and its member states, would be a significant step in that direction. I hope that, at the end of this debate, we will hear from both the Government and Opposition Front Benches that they will support this effort.
My Lords, my noble friend Lord Hannay has reminded us of the searing experience of the Rwanda genocide and the failure of the international community to act in time. The tribute he paid to the noble Baroness for the recent honour she received was well made. She was made one of 16 members of the Order of the Thistle. I note that its motto is “Nemo me impune lacessit”—no one harms me with impunity. Those words sum up the motives that lie behind this Bill, as it seeks to end the culture of impunity and the lethal harm caused by genocide.
In parenthesis, I also thank the noble Lord, Lord Polak, for reminding us of the late Lord Sacks. I was privileged once to chair a lecture he gave in Liverpool. During the course of it, he said that no one should ask, “Where was God at Auschwitz?”; they should ask, “Where was man?”. It is about what men and women can do to prevent these atrocities occurring.
The noble Lord, Lord Polak, who comes from Liverpool, cited the experience of Esther at the time of Purim. She is one of the great figures in the Bible. She is told, “You have come into this world for such a time as this”. It reminds us that sometimes unlikely people who have no great power can do extraordinary things. Each of us who has the privilege to serve in this place has the chance to do extraordinary things and to be a sign of contradiction.
As the noble Lord said, the word “genocide” should not be used as a slogan or devalued. It is different from war crimes and crimes against humanity. The duty to prevent genocide is one of the most neglected duties under international law. In 2022, with Dr Ewelina Ochab, who has been mentioned in this debate, I published State Responses to Crimes of Genocide, which the Minister has a copy of. Although I know he welcomes the establishment of the hub on atrocity crimes, I am sure that he will agree that progress is slow. We are still a long way from implementing our obligations set out in Raphael Lemkin’s 1948 convention on the crime of genocide.
On other occasions, I have spoken about the Rohingya, the Yazidis, Armenia, Nigeria and the Uighurs. Today, in my brief few minutes I will focus on three particular cases that underline why a Bill of this kind is needed: Tigray, the Hazaras and Darfur.
In September 2023, the APPG on International Law, Justice and Accountability published its Tigray report. Our inquiry received an unprecedented amount of data, including testimonies from victims and witnesses from Tigray. We found evidence of atrocities, including mass killings, sexual violence, and starvation, which continue to this day and for which no one has been brought to justice. On numerous occasions, I have brought the dire situation of the Tigrayans to the attention of the Government. There are more than 100 references in Hansard, and letters and emails to the FCDO. I asked for a JACS—joint analysis of conflict and stability—assessment. Close to two years after the beginning of the war in Tigray, the Government finally commissioned a JACS for Ethiopia, but they have refused to make it available to Parliament. Why on earth are parliamentarians denied the right to see information that is crucial to our duty to prevent genocide?
Afghanistan’s Hazaras were referred to in our own International Relations and Defence Committee report on Afghanistan in 2021. Later that year, I was approached by Hazara human rights defenders concerned about the lethal targeting of their community. With colleagues, I established the Hazara inquiry. Our report, launched here by me and the noble Baroness, found that Hazaras, as a religious and ethnic minority, are at serious risk of genocide at the hands of the Taliban and Islamic State Khorasan Province. Under the genocide convention and customary international law, this finding should have engaged the responsibility to prevent—but it did not. The return to power of the Taliban has included brutal acts of violence against the Hazaras and a return of terror, including the bombings of Hazara schools, places of worship and other centres—atrocities that continued throughout 2023 and now into 2024. On 18 December 2023 in an Oral Question, I asked whether a JACS report could be initiated, not least because Pakistan had begun mass deportations back to Afghanistan—I have never had a reply.
Finally, I will mention Darfur, which I visited during the genocide 20 years ago. Some 18 months ago, people on the ground warned that a new genocide was likely. In response, all I have received from the FCDO are statements about deadlines for transitional justice being met and that progress was being made. Dissatisfied with those assurances, the APPG on Sudan and South Sudan decided to establish the Darfur inquiry, which I chaired, and we collected evidence from victims, survivors and experts.
In 2023, as the situation in Khartoum deteriorated, we published our Darfur report warning about the very clear early warning signs of atrocities to come and the danger of yet another genocide. These warnings were not listened to and were not acted on. The catastrophic situation in Sudan has led to 9 million displaced people, thousands dead and now an impending famine. In Darfur, the RSF continued the genocide begun by the Janjaweed. Who is being brought to justice or held to account?
The work of monitoring early warning signs cannot be left to parliamentarians and ad hoc inquiries—that is why the Bill is necessary. The FCDO has the capacity and resources needed to do this work well. We have a Minister, the noble Lord, Lord Ahmad, who understands that and works hard on these issues. The prevention of genocide and atrocity crimes is a duty that the noble Baroness’s Bill might ensure is treated with the gravity and urgency it deserves, and I support it.
My Lords, I support the Bill. I have been able to visit some of the countries that have been discussed, including Bosnia-Herzegovina and Darfur in Sudan, and many other areas where there has been evidence of genocide and large-scale loss of life in the past. I have noted many other areas where there are concerns about genocide taking place. Britain, as a member of the UN Security Council, is well placed to help to prevent genocide whenever there is a risk of it happening.
One of the areas which I want to draw to the attention of Members is India. Gregory Stanton, chairman of Genocide Watch, who predicted genocide in Rwanda five years before it happened, is calling for the world to take note of genocide in the making in India, and he particularly mentions Kashmir. Genocide does not happen overnight. It is a process that unfolds over time, with identifiable steps along the way. Since 1990, Kashmir has been the biggest army camp, with nearly 1 million Indian soldiers stationed there. There are widespread reports from renowned international human rights bodies, such as the UN Commission on Human Rights and Amnesty International, that the Indian Army is involved in killings, rape, torture and enforced disappearances.
In 2019, India withdrew the special status that Kashmir had within the Indian constitution. It has taken away many rights from the local people. Since 1990, there have been reports that more than 100,000 people have been killed, and thousands of people are still languishing in prisons. We talk about human rights in other places, and we celebrate human rights champions who have fought for people’s rights. We have people such as Shabir Shah, who has been in prison in Kashmir for more than 30 years. These are the signs which suggest that Gregory Stanton may not be far from the truth in what he is saying. We must take this seriously. Britain is well placed: we have strong links with India and must use them to prevent genocide taking place in Kashmir.
My Lords, it is a pleasure to rise from these Benches to support the Private Member’s Bill in the name of the noble Baroness, Lady Kennedy of The Shaws. It is also something of a relief that the debate on this Private Member’s Bill has been somewhat more consensual than that on the previous Bill, in which I found myself in the unusual position of being on the opposite side from the noble Baroness, Lady Kennedy of The Shaws, which was a slightly uncomfortable position to be in.
This is a Private Member’s Bill to which we have heard no opposition from any part of your Lordships’ House. We heard the Minister’s noble friend Lord Bourne of Aberystwyth say that he hopes that the Minister will bring some words of comfort from His Majesty’s Government. I have been in your Lordships’ House for nearly a decade. I have rarely heard from the Government Front Bench words that lead us to think that a Private Member’s Bill is going to be warmly accepted, but on this topic, I very much hope that the Minister will be able to give some positive responses.
Over many years the noble Baroness, Lady Kennedy, and the noble Lord, Lord Alton of Liverpool, have spent much of their time in your Lordships’ House, in ad hoc committees and in other places arguing that we need to take the crime of genocide seriously, calling on His Majesty’s Government to look at particular cases and acknowledge that they are, or could be considered, genocide. Although the present Bill is not about genocide determination, the House of Lords Library briefing for today reminds noble Lords of the words of the Minister, the noble Lord, Lord Ahmad of Wimbledon, in previous debates.
We have heard many times that the Government are not able to act because the issue of genocide is for courts to determine—yet, as the present Bill and the Library briefing both make clear, under the genocide convention the Government have a duty to prevent genocide. It is not simply that we need to say, “We are not happy with this”; we have a duty to prevent and punish the crime of genocide. As the noble Lord, Lord Alton, pointed out, parliamentarians cannot do that—we cannot individually prevent or punish genocide—but His Majesty’s Government and other sovereign Governments are in a much better place, precisely because of their embassies and high commissions, to understand what is going on on the ground. The Bill, which I suggest is not as modest as some Private Members’ Bills—it is very ambitious—would pave the way for the Government to be able to do what the UK needs to do in performing its duties under the convention.
We have heard from the noble Lord, Lord Polak, a reminder that the Holocaust did not start with the gas chambers. The same has been true of other genocides. Genocide does not begin at the point where hundreds of thousands or millions of people are being killed or fleeing for their lives; there is a much more insidious process that precedes it. Recently, for our debate to mark Holocaust Memorial Day, the Holocaust Memorial Day Trust reminded Members, in a very helpful briefing, of the stages of genocide.
By the time your Lordships’ House talks about genocide, it is usually at a point where we are saying that there already is or has been genocide—in Darfur, of the Uighurs or of the Yazidis. We need to raise issues and find a vehicle for exploring the potential for genocide before it happens—before it is too late. We heard from my noble friend Lord Hussain that His Majesty’s Government need to look at the situation in Kashmir, and maybe the Foreign Secretary, for example, should be talking to his opposite number in New Delhi. We need to be thinking and exploring issues ahead of time, and the Bill gives us and the Government the opportunity to do that.
We have heard from the noble Lord, Lord Alton, about the situation in Darfur and how he has been told that there is further potential for a new genocide there. If one goes to Bosnia and Herzegovina, one finds that “remember Srebrenica” is not just a slogan; it is an everyday injunction. There is still concern there about Republika Srpska and concern on the ground about the situation. We should never be complacent as a Parliament or as a country.
The Bill offers His Majesty’s Government the opportunity to act, and it would hopefully empower the noble Lord, Lord Ahmad, to do many of the things from the Front Bench that he has often said he wished he was able to do—but these things were for courts to decide and for other people to do. I am not sure I expect the Minister to accept the Bill as it is enshrined today, but perhaps he could give us some suggestion of the Government bringing forward their own proposals that would have the same purpose as this eminently welcome Private Member’s Bill.
My Lords, my noble friend’s Bill introduces mechanisms to ensure that the United Kingdom’s Government are better equipped to prevent and respond to genocide and other atrocities. It is a welcome piece of legislation. My noble friend highlighted that the problem with the current generic responsibility across all embassies of examining where genocide might be in the offing is that it often results in a situation where, when everyone is doing it, no one is doing it. That is clearly a problem.
The solution in the Bill is absolutely vital. It is to put on a statutory footing this special hub within the Foreign, Commonwealth and Development Office, which will monitor and evaluate processes and keep abreast of developments taking place and research being done. As my noble friend and the noble Lord, Lord Bourne, highlighted, we have seen what the US has done in putting this work on a statutory footing within the Department of State.
We should not forget, as I am sure the Minister will say, that the UK has a positive record of contributing to international efforts to gather evidence of alleged genocide and war crimes. This includes the example of the ICJ case of the persecution of the Rohingya in Myanmar. More recently, it includes Russia’s conduct during its invasion of Ukraine, and the ongoing case at the ICC. I hope the Minister will take the opportunity to give us an insight into what kind of staff resource is required for this work and what kind of processes are in place to ensure international co-operation in a manner that builds capacity and avoids unnecessary duplication.
We welcome the proposal to put the responsible team on a statutory footing, whether it is the existing mass atrocity prevention hub or another team, to come up with recommendations for enhancing the Government’s work to mitigate atrocity and genocide risks. I certainly agree with my noble friend that three staff working in the atrocity prevention hub seems too few.
We share the Government’s view that determinations of genocide must result from a legal rather than a political process—this has been touched upon slightly. That does not mean that we shy away from saying that matters need to be investigated if there is sufficient evidence to require an investigation, but we certainly agree that determinations of genocide must result from a legal rather than a political process. This morning, the noble Lord, Lord Alton, kindly sent me the letter he received from Minister Mitchell, setting out a number of the jurisdictions where the British Government have responded to those determinations. I will not read them out today.
As the noble Lord, Lord Hannay, said, there are clearly a number of steps to be taken before those cases are brought. That is why I referenced the efforts of the Government and previous Governments to collate the evidence and make sure that it is not lost, because, sadly, far too often in these terrible cases, the evidence is got rid of. I hope that the Minister can give us a better idea about who decides—whether at the FCDO or in individual embassies across the world—which allegations to investigate, how much resource is devoted to evidence gathering, what domestic or international legal cases the UK becomes a party to and at which stage it might do so. Obviously, we have had discussions in the Chamber about that.
My noble friend raised current events in Gaza, which clearly continue to cause grave concern. We are clear about the need to avoid a Rafah offensive, and instead to secure an immediate humanitarian ceasefire. I know what efforts we have taken at the United Nations; I hope the Minister can give us an up-to-date report. We discussed this week that Gaza is on the brink of famine, and I have repeatedly stressed that Israel must comply with the ICJ’s interim measures. I hope the Minister can provide us with an update on the status of the negotiations that we know are carrying on at the moment.
We have also made reference during the debate to reports by UN bodies and others that suggest very clearly that China has serious questions to answer about the treatment of the Uighurs. Again, I hope that the Minister can respond positively on that.
As to the proposal to have a Minister with the responsibilities set out in the Bill, clearly we need to look at how these very serious matters are overseen in government, at whether and how there is parliamentary accountability for the work of the FCDO’s unit and the UK embassies, and at whether the current set-up is the appropriate model.
We cannot prevent every atrocity or genocide, but, as has been made clear in this debate, we absolutely must do more to mitigate atrocity and genocide risks around the world, and to integrate this work into our foreign policy, making it a clear priority. We certainly welcome the Bill’s focus on better monitoring risks in a way that joins up our country presences with Whitehall-based expertise.
As the noble Lord, Lord Alton, highlighted, the Government’s approach in Sudan shows, frankly, how badly prepared we were. We failed to listen to civil society groups warning us about the risk of impending violence. We know that the Government put too much focus on bargaining with elites who had little interest in stepping back from power. If we had done the long-term work of supporting inclusive peacebuilding in Sudan, Sudanese civil society might now be in a stronger position to take part in the transition negotiations that we are all hoping for. We must learn from our mistakes. We certainly welcome the Government’s decision to support the work of the ICC and the UN OHCHR in investigating and documenting the atrocities taking place in Sudan, and their support for the Centre for Information Resilience.
Labour has consistently called for attention and action on these atrocities, and will continue to highlight the need for further and better co-ordinated action on this crisis. Our sanctions against those fuelling the violence in Sudan have not gone far enough and came too slowly. I hope the Minister will agree that we must do more to hold those actors responsible for these atrocities to account.
We are also concerned that the Integrated Security Fund, formerly the Conflict, Stability and Security Fund, which has a domestic and international remit, could see the important work of mitigating atrocity risks abroad deprioritised. I hope the Minister can offer some reassurance in this regard.
I hope I have made clear in my response to this debate that I think the Minister and I have been at one in wanting to ensure that we prioritise this work, and that we take seriously the measures highlighted by my noble friend. We have to work together to make sure we can deliver on it.
My Lords, I join others in thanking the noble Baroness, Lady Kennedy, for tabling this Bill. I think it was the noble Baroness, Lady Smith, who talked of the incredible work that the noble Baroness, Lady Kennedy, and the noble Lord, Lord Alton, do in this area, and have done over many years. I would say to the noble Baroness, Lady Smith, that a fair bit of that is done in my office, with both the noble Baroness and noble Lord ever-present. I am sure they both recognise the deep affection that I have for both of them in the challenge that they provide—but it is not just a challenge. As we see from the tabling of this Bill, it is also about making practical suggestions on how we can move forward.
I concur with the noble Lord, Lord Collins. I think there are many across your Lordships’ House who genuinely put the importance of human rights at the heart of their work, in our diplomacy and development activities. That is an important attribute to continue. I shall be honest in saying that it is a challenge, particularly when we look at the global world as it is today, but we should not give up this important flame of hope and humanity.
In thanking the noble Baroness, I thank all noble Lords for their contributions. My noble friend Lord Polak struck a very poignant note about Purim, and the history behind it. I totally appreciate and associate myself with the important principle of survival. It is something to celebrate. Anyone who has met a survivor of an atrocity, as I have had the honour to do in meeting survivors of sexual violence in conflict—as I know other noble Lords have—gains incredible inspiration from their courage not just to survive the most atrocious of ordeals but to have the courage and conviction and become advocates on how change can be effected.
My noble friend Lord Polak was described by the noble Lord, Lord Alton of Liverpool, as being from Liverpool. The only claim I can make is that I am a Liverpool fan, although after last weekend’s events I am feeling rather sore, so we will park that one there.
This is a very important debate. The UK Government remain absolutely committed to preventing and responding to genocide and other atrocities taking place around the world. I totally agree with the noble Lord, Lord Hannay, that we should be learning, and that experience is important. While we are doing work, there is so much more to be done.
My noble friend Lord Bourne talked about Srebrenica, and paid tribute to many—apart from himself. Let me put on record the important work that he did when he was the Minister responsible for communities and faith, particularly in relation to the shocking events that took place in Srebrenica—again, on the lack of intervention and prevention. For anyone who has been to Srebrenica, or to Auschwitz-Birkenau, as I have, the chilling effect of what you see remains with you and, I think, strengthens your own conviction in these areas. The noble Lord, Lord Hannay, talked about Rwanda. Again, anyone who goes to the memorial in Kigali cannot but be moved by the thousands and thousands of lives that were taken at that time, and have a real conviction to prevent that happening again.
The provisions of this Bill are highly commendable, and many of them are very much aligned with the activities of the Government that we are planning or which are already in place. I agree that we need to be very focused. The noble Baroness, Lady D’Souza, rightly said that there was great care in the Bill being put forward and many doable mechanisms, as she described them. I say at the outset that, in this instance, I would be delighted to meet the noble Baroness to discuss what the UK is currently doing to prevent atrocities and look at the specific provisions of the Bill to see how they can best be taken forward.
I also miss Lord Sacks. Anyone who met him could not but be inspired by his example. Perhaps when we look across the world, and particularly at the Middle East, we are reminded that his engagement and involvement are very much missed at this important time.
The noble Lord, Lord Hussain, said that atrocities do not happen overnight. I give him a reassurance that our relationship with India is such—it is strong and one of friendship—that it allows us, both ways, to bridge issues of importance, as I did recently with Home Secretary Bhalla on the issue of human rights in India. We will continue to do this in a candid, constructive way.
With the challenging outlook we currently face, with conflicts and crises continuing and worsening, my noble friends and all noble Lords will recognise the need for prioritisation and making the best use of resources. So I say from the outset that the Government agree with many of the provisions of the Bill—the question is how best to take them forward. I was scribbling during the debate and I think the noble Baroness, Lady Smith, was right: while I cannot give the Bill total endorsement and agreement, I very much want to examine the provisions of the Elie Wiesel Act to see how we can best adapt them. I am going to be very up front in saying that there are issues of training and cost within the provisions of the Bill that need to be considered: those are two of the main considerations for the Government.
For example, the Bill proposes to establish a genocide monitoring team. We recognise, as all noble Lords have said, that robust early warning and monitoring mechanisms and early response are key to preventing atrocities. The noble Lord, Lord Collins, reminded us that we cannot stop every atrocity, but we can certainly look to see how we can focus on mitigation. That is why the FCDO has integrated risk analysis into global horizon scanning. We are continuously looking to improve our forecasting capabilities through forging new partnerships and harnessing innovative, data-driven approaches.
The Bill would also provide for training for civil servants. The noble Baroness, Lady Smith, talked about the sometimes disjointed nature of this, as did the noble Lord, Lord Collins, and the noble Baroness when she introduced the Bill. We have got better at the FCDO and it is certainly my intention, as the Minister responsible, to ensure that any diplomats deployed into defined conflict zones are fully versed in the importance of the training they receive. But again, as a way of moving forward constructively, I am very keen to understand how we can strengthen that training. This is an open invitation to the noble Baroness and others to see how we can integrate more professionalised training and more insights that are country-specific, to enhance the training that our civil servants and those being deployed into conflict zones receive, and to ensure that it is tailored to the country in question.
The enhanced offer that we are developing will also enable staff to recognise the very early warning signs that my noble friend Lord Polak and the noble Lord, Lord Hannay, talked about, and understand the levers available when preventing and responding to atrocities, recognising that there is still more to do—I fully recognise that. We need to build further capacity and we intend to explore further training options, both internally and with external experts, as I have said, to ensure that not just diplomats but our most senior officials, who are the key decision-makers and provide advice directly to Ministers, are also versed in this. We will continue to learn from experience.
The Bill also calls for the Government to report to Parliament on atrocity risks. All noble Lords present know that, at times, information can be highly sensitive. That said, we have, based on the contributions I have heard and the advocacy of the noble Lord, Lord Alton, the noble Baroness, Lady Kennedy, and others, defined within our Human Rights and Democracy Report a specific element on atrocity prevention and human rights. It has been expanded to now include the responsibility to protect. Again, I encourage suggestions and recommendations on how we can improve that further, with that ambition.
I apologise for interrupting, but I asked the noble Lord specifically about the joint assessments on conflict and stability which the Foreign Office undertakes. Why can they not be shared with parliamentarians? Even if it cannot be right across the piece in both Houses, why not to the relevant Select Committees, the Foreign Affairs Select Committee of another place and our International Relations and Defence Select Committee? JACS assessments are crucial in recognising what signs are emerging.
Again, I will take that back. The noble Lord and I have had discussions on that. Previous answers we provided related to the sensitivity of that information, but I will certainly take back the practical suggestion he makes on particular committees to the FCDO to see whether there is more we can do in that area.
The outstanding provisions would also appoint a Minister for genocide prevention and response. I like that idea, specifically as it is described, rather than encompassed within my current role as Human Rights Minister. That is something to be thought through again in the discussion that I hope I will be able to have with the noble Baroness. This is very much cross-government. I have been discussing with officials—in preparation not just for this debate but generally on the issue—how to make it cross-government. The Ministry of Justice, for example, would have a key role. We have worked well together in this respect.
With my experience as the Minister for Human Rights and as the Prime Minister’s Special Envoy on Preventing Sexual Violence in Conflict, I assure your Lordships’ House that preventing and responding to atrocity remains a priority for me and for the Government. Prompted by this Bill, we will also look at how we can make that specific element, as suggested by the noble Baroness’s Bill, a key ministerial responsibility.
On the provision of funds, as raised by the noble Lord, Lord Collins, the noble Baroness, Lady Smith, and others, this is always a challenge for government. There are provisions in the Bill on this which are probably my key reservation—if I can put it that way—and would need to be considered. However, it is my clear view that we need to ensure that by addressing the prevention element, we will have a medium- to long-term impact on the costs of dealing with the end product of these awful, abhorrent atrocities.
A number of noble Lords made points about our embassies and high commissions across the globe. I can assure the House that—based on some of the central initiatives that we are taking—they have been implementing programmes to target the risk factors that can lead to atrocities, as well as to strengthen reporting and improve accountability mechanisms. These will be a critical part of our commitment to atrocity prevention.
On specific actions, I thank the noble Lord, Lord Collins, for recognising the work that we are doing with the ICC. UK funding amounting to £6.2 million since the invasion of Ukraine has helped to train more than 100 judges and deploy 30,000 forensic medical kits for police officers. In respect of this shocking and illegal invasion, the core group that we are part of to ensure criminal accountability for Russia’s aggression is also adding to the mechanisms that we are putting in place, not for after the conflict but during it, to deal with this.
On Myanmar, as has been recognised, we have now joined with Canada, Denmark, France, Germany and the Netherlands. The UK has also filed a declaration of intervention at the International Court of Justice in Gambia’s case against Myanmar. The UK is clear that there must be accountability for atrocities committed. Again, we have put money behind this, providing over £600,000 to the UN Independent Investigative Mechanism for Myanmar. We have also established Myanmar Witness, a programme to collect and preserve evidence of human rights violations for future prosecutions. The culture of impunity in Myanmar must end. I have seen this directly during my visits to meet survivors of those atrocities in Cox’s Bazar in Bangladesh.
The Sudan was mentioned, most notably by the noble Lord, Lord Alton. Atrocity prevention is one of the key pillars of our Sudan strategy. We have enhanced our atrocity risk monitoring work in Sudan, including on conflict-related sexual violence. Our work with open-source investigations—the noble Lord, Lord Collins, talked about civil society in this regard—continues to play a vital role in amplifying the voices of victims and survivors. Again, however, I accept that we need to do more.
We are supporting the Office of the High Commissioner for Human Rights in Sudan in monitoring and reporting on human rights violations. As part of these actions, marking one year since the start of the current conflict, my right honourable friend the Minister of State for Development and Africa will be visiting the region shortly.
I am conscious of time. China was also raised. In this regard, the noble Lords, Lord Alton and Lord Collins, will know of the long-standing work that has been done. The OHCHR’s assessment found possible crimes against humanity. We should take robust action. As noble Lords will know, the UK has led international efforts to hold China to account for its human rights violations in Xinjiang. Indeed, we were the first country to lead the joint statement on China’s human rights in Xinjiang at the UN. We continued to advocate during the recent UPR in January as well.
The noble Lord, Lord Collins, asked for an update on the situation in Gaza. I assure the House that our priorities remain that the fighting must stop now. This is the only way that we will get the return of the hostages. I met the families of the hostages again this week, as did the Foreign Secretary. Irrespective of their view on this conflict, no one can fail to be moved by the devastating nature of the humanitarian crisis unfolding in Gaza.
The latest update is that there has been a lot of diplomacy. Secretary Blinken has embarked on a tour of the Middle East, partly in conjunction and in parallel with UN Security Council resolutions. As I came into this Chamber, a lot of work had been done overnight to get countries in the right place. Unfortunately, the resolution by the United States calling for an immediate ceasefire was vetoed by Russia and China. We must continue to find a way to get agreement in this space. Noble Lords will be aware of Secretary Blinken being in Cairo. He is in Israel today. I will be travelling to Egypt next week as part of our continuing diplomatic efforts not only to bring an end to the immediate conflict but for a resolution based on peace, justice and equity for Israelis and Palestinians alike. All noble Lords have expressed views on the importance of the two-state solution for Israel and Palestine side by side in peace and justice.
In thanking the noble Baroness, I have not given a ringing endorsement—
My Lords, I think the Minister is coming to an end, but I just wanted to raise one point that he has not covered. He covered extremely fully the ground which has been covered by the noble Baroness in her Bill, but I heard nothing about making an annual or regular report to Parliament specifically about genocide and the risk of genocide. It is quite important. The FCDO does an annual report on human rights, but it is all too easy for things to become somewhat fuzzy in such a report as to whether what you are talking about are the many breaches of human rights or specifically a precursor to, or a risk of, genocide.
Some countries will be shameless, but if the Foreign Office produced a report about the risk of genocide and the precursors, some countries would do an awful lot not to get into it. I think the FCDO would find that report quite a useful tool.
I thank the noble Lord for his prompt. Two lines down I was going to address that issue as my penultimate comment, but I will take it now.
I mentioned the human rights report. I have asked officials to see what our options are to cover the aspects that the noble Lord highlights—for example, a quarterly statement or a WMS. I cannot give a definitive answer because those options are being worked up. I say to the noble Baroness, Lady Kennedy, that it will be helpful to have this level of engagement to ensure that we get something which is acceptable and the right product for Parliament to allow for the analysis that the noble Lord, Lord Hannay, has once again highlighted.
I hope that in the qualified support for the provisions of the Bill the noble Baroness recognises that we respect and appreciate her constant advocacy on these important issues. As she rightly acknowledged, there is support for many of the principles within this Private Member’s Bill. It is ambitious, as the noble Baroness, Lady Smith, said, but the Government believe in the priorities stated in the Bill. I am grateful to all noble Lords who have participated today. The UK is working with other partners in preventing and responding to human rights violations and atrocity risk. I look forward to listening to, learning from and working with noble Lords from across your Lordships’ House to further strengthen our aspirations and our delivery on these important issues and mitigations. If I was to provide a sense of where I am on this, whenever I talk to anyone, I say that we must put humanity at the heart of our policy-making.
My Lords, I thank the Minister for such a comprehensive response to the Bill and for being generally supportive to it. I know that, when it comes to Private Members’ Bills, there is a reticence about wholehearted support, but I feel that he has dealt with so many of the issues that are of concern here. Again, I pay tribute to the way in which the Minister has such a complete and deep understanding of these issues. He is a great champion of human rights and his command of his brief is exemplary. Our shadow Foreign Minister does a pretty good job as well, in that he too is so knowledgeable about all these issues, and I pay tribute to my noble friend for all that he does and advances in relation to human rights.
I am always in awe of this House when it comes to discussions on these issues of genocide, atrocity crimes and human rights globally. There are so many voices of people with such great and deep experience; I always come away having learned things. I pay tribute to the noble Lord, Lord Hannay, whose many years of experience enrich this House in what he can contribute. My dear friend the noble Lord, Lord Alton, is the voice of a great conscience in this House about the horrors that take place in our world. His advocacy for steps to be taken in relation to genocide has been without comparison.
All the people who have spoken today, including the noble Lords, Lord Polak and Lord Bourne, are those whose experience—here and in other debates, too—has always left me with a strong sense that, by coming together, we can make a difference. We could reclaim our position as the nation that has the loudest and clearest voice when it comes to the rule of law and respect for human rights. I want that to be reclaimed, as we have gone through a rather low period recently with regard to our commitment to international law. This is really important. Britain is respected around the world and, with leadership, we can make an enormous difference.
I felt the general sense about the Bill was that putting this work on a statutory footing has support. I think it would have support in our political parties, so I am going to press on with it because it is so important. One thing that was mentioned repeatedly was the logjam there has been over genocide—the insistence that it is a matter for courts, and not for Parliaments, to decide whether a genocide is happening. One refrain of which people must have grown tired is that of the noble Lord, Lord Alton, and myself in saying that it is not about waiting until a genocide happens; it is about what has to be done to prevent genocide, which is so embedded in the genocide convention.
The noble Lord, Lord Polak, described so well, as did the noble Lord, Lord Bourne, the way in which atrocity crimes start with lesser horrors and then rise in significance and gravity. That takes us along the road of atrocity crime on a trajectory towards genocide, and a full understanding of that within our embassies, and among those who assess our security internationally, is so important.
It was great to see a hub on atrocity crimes created within the Foreign Office, and its staff are wonderful people, but it is not properly resourced and it is beleaguered in the efforts it has to make. I know there is great pride in the standards of our diplomats, but it is not enough for us to say that every diplomat has their eye on this ball; there are too many other things to consider. I reinforce what was said by the noble Lord, Lord Collins: because everyone has responsibility for this, sometimes nobody has responsibility for it. That is why it is so important that the hub keeps watch; it should deal not only with atrocity crimes but with genocide, which should be named as part of its remit.
I want to thank the noble Baroness, Lady D’Souza, because I see that she is at the Bar. She has been one of my heroines in her great advocacy for human rights over many years, and having her participate in the debate today was so important to me.
Members on all Benches, including the noble Baroness, Lady Smith, and the noble Lord, Lord Hussain, were in support of this Bill. I know that a Bill will follow from the noble Lord, Lord Alton, where he will seek to create a way in which it could be a court of law that helps us decide whether a genocide is in the offing and for it to be evidentially based.
I am grateful to the Minister for all that he has said about the things that can be taken out of the Bill. I hope that he might at some point be able to persuade his colleagues in the Foreign Office that it would be an advantage to have this legislation and that it would be strengthened by having the hub and this work placed on a statutory basis. I was very interested in his acceptance of many of the suggestions made in the Bill, and I hope to take up his invitation to go and see him.