Lords Chamber
That an humble Address be presented to His Majesty as follows:
“Most Gracious Sovereign—We, Your Majesty’s most dutiful and loyal subjects, the Lords Spiritual and Temporal in Parliament assembled, beg leave to thank Your Majesty for the most gracious Speech which Your Majesty has addressed to both Houses of Parliament”.
Lords Chamber
I thank my noble friend for the question. The first thing to say is that the independence of universities is absolutely critical to the quality of their research. While the integrated review refresh has of course raised a great many concerns about working closely with China, and has necessitated a reduction in academic collaboration with China, I hope that our recent reassociation with the Horizon programme, together with the fact that a number of other third countries are also considering, or are very close to, associating with Horizon, will go some way towards providing a new pool of collaboration partners in academic research.
My Lords, I am sure that all of us agree with the noble Lord, Lord Young, that we need to protect scientific development from malign actors. But is there not a real problem here: new technology and advances in scientific knowledge not only require international collaboration on a scale hitherto unknown, but most of them, ever since the bow and arrow, have been dual-purpose? In other words, they can be used for benevolent or malign purposes. How do the departments charged with this responsibility distinguish between the two, so that, in protecting us from the misuse of scientific advances, they do not smother scientific research as a whole?
The noble Lord is absolutely right in his analysis of the problem, and I agree with it wholeheartedly. The most powerful tool we have at our disposal here is RCAT, the Research Collaboration Advice Team, which provides hundreds of individual items of advice in these areas, where it can be quite subtle whether something is dual-use or single-use, or whether it has a military or defence application. That is not something that can easily be defined up front; it requires a certain wisdom and delicacy of advice.
Lords Chamber
My noble friend is absolutely right that the potential benefits of AI are extremely great, but so too are the risks. One of the functions of our recently announced Foundation Model Taskforce will be to scan the horizon on both sides of this: the risks, which are considerable, and the benefits, which are considerable too.
My Lords, I differ from the noble Lord, Lord Kirkhope, who said that we must develop AI to the maximum extent. There are benefits, but does the Minister accept that we ignore the dangers of AI at great peril not only to ourselves but to the world? The problem is that, despite the advantages of artificial intelligence, within a very short period it will be more intelligent than human beings, but it will lack one essential feature of humanity: empathy. Anybody or anything without empathy is, by definition, psychopathic; it will achieve its ends by any means. Therefore, the noble and right reverend Lord, Lord Harries, is correct to say that, despite the difficulty posed by competition between states, such as the US and China, and within states, such as between Google, Microsoft and the rest, it is essential that we get an ethical regulatory framework in place before technology runs so far ahead of us that it becomes impossible to control this phenomenon.
The risks have indeed been well publicised and are broadly understood, including the question of whether and when AI will become more intelligent than humans; opinions vary, but the risk is there. Collectively and globally, we must take due account of those risks; if we do not, I am afraid that the scenario which the noble Lord paints will become reality. That is why bilateral and multilateral engagement globally is so important: so that we have a single interoperable regulatory and safety regime, and AI that the world can trust to produce some of the extraordinary benefits of which it is capable.