Lords Chamber
My Lords, I congratulate the noble Baroness, Lady Stowell, and the Communications and Digital Committee on their very thorough and comprehensive report. It points out the very considerable benefits that generative AI and large language models can deliver for this country, and the human race in general. The report declares that large language models put us on the brink of epoch-defining changes, comparable to the invention of the internet, and I have no doubt about the truth of that prediction.
However, what price will we pay for these benefits? I am deeply worried about the great risks that are inherent in the breakneck pace at which this technology is being developed, without any meaningful attempts to regulate it—with the possible exception of the EU. The report identifies a plethora of potential areas of risk, from minor through to catastrophic, covering a non-exhaustive list of areas, including multiplying existing malicious capabilities, increasing the scale and pace of cyberattacks, enabling terrorism, generating synthetic child sexual abuse material, increasing disinformation via hyper-realistic bots, enabling biological or chemical release at pandemic scale, causing critical infrastructure failure or triggering an uncontrollable proliferation of AI models. I will not go on with the list, because anyone who has read the report will know what I am talking about. These are the consequences of malicious, or perhaps merely careless, uses of the technology, and they could have a very significant—perhaps catastrophic—impact on the citizens of this country, or even worldwide.
The report states in paragraph 140:
“There are … no warning indicators for a rapid and uncontrollable escalation of capabilities resulting in catastrophic risk”.
It then tries to reassure us—without much success, in my case—by saying:
“There is no cause for panic, but the implications of this intelligence blind spot deserve sober consideration”.
That is putting it very mildly.
However, this is not my main concern about the risks presented by AI, and I speak as one who had slight interaction with embryonic AI in the 1980s. The risks I have mentioned so far arise out of the probable misuse of this technology, either deliberately or accidentally. They might be mitigated by tight international regulation, although how we can prevent bad actors operating in regions devoid of regulation, I do not know. These enterprises are so competitive, so globalised and so driven by commercial pressure that anything that can be done, will be done, somewhere.
My main concern, and one to which I cannot see an obvious answer, is not what happens when the technology is misused. What worries me is the risk to humans if we lose control of the AI technology itself. The report does mention this risk, saying:
“This might occur because humans gradually hand over control to highly capable systems that vastly exceed our understanding; and/or the AI system pursues goals which are not aligned with human welfare and reduce human agency”.
That is a very polite way of saying that the AI systems might acquire greater intelligence than humans and pursue goals of their own: goals that are decidedly detrimental to the human race, such as eliminating or enslaving it. Before any noble Lords conclude that I am off with the fairies, I direct them to paragraph 154 of the report, which indicates a “non-zero likelihood”—that apparently means a remote chance—of existential risks materialising, but not, the report says, in the next three years. That is not very reassuring for those of us who hope to live longer than three years.
Some months ago, I had a conversation with Geoff Hinton—here in this House, as it happens—who is widely recognised as one of the godfathers of AI and has just been awarded a Nobel Prize. He resigned from Google to be free to warn the world about the existential risks from AI, and he is not alone in those views. His very well-informed view is that there is a risk of humans losing control of AI technology, with existential consequences. When I asked him what the good news was, he thought about it and said, “It’s a good time to be 76”. My rather flippant response was, “Well, at least we don’t have to worry about climate change”.
Seriously, the thing about existential risks is that we do not get a second chance. There is no way back. Even if the probability is very low, the consequence is so catastrophic for mankind that we cannot simply hope it does not happen. As the noble Lord, Lord Rees, the Astronomer Royal, said 10 years ago in a TED talk when discussing cataclysmic risks: “Our earth has existed for 45 million centuries, but this” century “is special—it’s the first where one species, ours, has the planet’s future in its hands … We and our political masters are in denial about catastrophic scenarios … But if an event is potentially devastating, it is worth paying a substantial premium to safeguard against it”, rather like “fire insurance on our house”.
The committee’s report devotes seven paragraphs out of 259 to the existential risks of the technology turning the tables on its human masters. This would suggest the committee did not take that risk all that seriously. Indeed, it says in paragraph 155:
“As our understanding of this technology grows … we hope concerns about existential risk will decline”.
I am not happy to rely on hope where existential risk is concerned, so I ask the Minister for some reassurance that this matter is in hand.
What steps are the Government taking, alone and with others, to mitigate the specific risk—albeit a small one—of humans losing control of AI systems such that they wipe out humanity?
Lords Chamber
To ask His Majesty’s Government what steps they are taking to protect freedom of expression in the course of their work on combating disinformation.
My Lords, the noble Lord, Lord Strasburger, is participating remotely.
My Lords, I draw the attention of the House to my role as chair of Big Brother Watch and beg leave to ask the Question standing in my name on the Order Paper.
Preserving individuals’ rights to freedom of expression underpins all the Government’s work on tackling disinformation. This right is upheld by the Online Safety Act, which protects freedom of expression by addressing only the most egregious forms of disinformation, ensuring that people can engage in free debate and discussion online. Under the Act, when putting in place safety measures to fulfil their duties, companies are also required to consider and implement safeguards for freedom of expression.
I thank the Minister for his reply. Last year, Big Brother Watch exposed worrying overreach by the Counter Disinformation Unit in its attempts to prevent legitimate criticism of the Government by MPs, journalists and academics. Following the Government’s apology, could the Minister tell the House what, if anything, has changed, apart from the unit’s name? Could he please explain why the Government refuse to allow the Intelligence and Security Committee to oversee the work of what is now called the National Security Online Information Team?
First, the Counter Disinformation Unit has indeed changed its name to the National Security Online Information Team, to better reflect its role. I am not aware of the apology to which the noble Lord refers, and I am surprised by the suggestion, but I will look into it. The NSOIT, as it is now called, does not target individuals, particularly not politicians or journalists. It does not even go after individual pieces of content but looks for trends across all items of content online.
Lords Chamber
To ask His Majesty’s Government what assessment they have made of the work of the Counter Disinformation Unit and its impact on freedom of speech.
My Lords, I beg leave to ask the Question standing in my name on the Order Paper. In so doing, I draw the House’s attention to the fact that I chair Big Brother Watch, which recently reported on the Counter Disinformation Unit.
The role of the Counter Disinformation Unit—CDU—is to understand disinformation narratives and attempts to manipulate the information environment. This has included disinformation threats relating to the Covid pandemic and the Russian invasion of Ukraine. Freedom of speech and expression are important principles that underpin the work of the CDU, including the fact that it does not monitor individuals or political debate, or refer content from politicians, political parties or journalists to social media companies.
My Lords, I thank the Minister for that reply. Research by Big Brother Watch has revealed that Members of both Houses of Parliament, including prominent Conservatives, have been included in the dossiers of the Counter Disinformation Unit and the Rapid Response Unit for doing nothing more than criticising the Government and their policies. Does the Minister agree that the CDU’s monitoring of political dissent, under the cover of countering disinformation, has serious ramifications for freedom of expression and our democracy more broadly?
I thank the noble Lord for that question but do not accept the characterisation that he gives. I am indeed familiar with the Big Brother Watch report that he refers to. The CDU does not monitor individuals or politicians. It does not refer politicians, journalists or elected officials to social media companies. It looks instead for overall narratives that attempt to interfere with or pollute our information environment.