Lords Chamber
My Lords, I congratulate the Prime Minister on the initiative he has taken in the field of AI, but I have very grave concerns about the framework that has been put in place for monitoring it. At the moment, it is far too vague, and, with its stress on innovation, there is a real danger of some of the ethical concerns simply being sidelined.
One major danger of advanced AI is the way it could increase the amount of misinformation in the public realm, as the noble Viscount, Lord Chandos, emphasised. Society exists only on the assumption that most people most of the time are telling the truth, and the Government have the assent of the people only on the basis that what they put forward is, basically, to be trusted. In recent years, as we know, the issues of truth and trust have become critical. People have talked about a post-truth age, in which there is only your truth and my truth. We have conspiracy theorists, with false information being fed into our communication system. I worry, as the noble Lord, Lord Anderson, noted, about forms of artificial intelligence that can mimic public authorities or reputable sources of information; people of ill will could infiltrate all kinds of systems, from government departments to think tanks and university research departments. That is only one danger; there are of course many others.
The Government have stressed that they are taking a pro-innovation approach to AI and do not want to set up a new regulatory body. There is indeed a good reason for that: AI operates very differently in different fields. Obviously, its use in medical diagnosis or research, as the noble Lord, Lord Kakkar, emphasised, is very different from its use in military targeting, as the noble and gallant Lord, Lord Houghton, emphasised. However, what the Government intend to put in place at the moment is far too ill-defined and vague.
The Government have moved to dissolve the AI Council. They have said that it will be replaced by a group of expert advisers, together with the new Foundation Model Taskforce, led by the technology entrepreneur Ian Hogarth, which will, they say, spearhead the adaptation and regulation of technology in the UK. It seems to me that the first and most important function of the task force should be to monitor what is happening and then to alert government about any potential issues.
I am so glad that the noble Baroness, Lady Primarolo, mentioned the HFEA (the Human Fertilisation and Embryology Authority), of which I was once a member, because it provides an interesting and suggestive model. It too deals with far-reaching scientific advances that raise major ethical questions. To grapple with them, it has a horizon-scanning group composed of leading scientists in the field, whose job is to be aware of developments around the world, which are then reported to a committee to consider any legal and ethical implications arising from them.
In his excellent and well-informed opening speech, my noble friend Lord Ravensdale suggested that research and regulation belong together. I will nuance that slightly by suggesting that, although they must of course be kept very closely together, they are in fact separate functions. I believe that the new AI task force must, first, have a horizon-scanning function on research and, in addition, have the capacity to reflect on possible ethical implications. Although the details would then have to be passed to the relevant sectors where there are already regulatory regimes, the task force itself will need to know what is going on right across the different fields in which AI operates, and it will need to be able to highlight ethical concerns. My concern is that the pro-innovation approach to AI might lead to the neglect of those functions. To avoid that, we need a clearly constituted central body whose focus is distinct from innovation and adaptation: to monitor developments and then to raise any ethical concerns.
Such a central body would not, at this stage, need to be a regulator. However, that time might indeed come. The noble Lord, Lord Fairfax, and many leading figures in the industry feel that the time has come for a new regulator, perhaps something along the lines of the International Atomic Energy Agency. For the moment, I hope the Government will at least give due thought to giving the task force a much clearer remit both to monitor developments across the field and to raise potential ethical concerns.
Lords Chamber
To ask His Majesty’s Government what steps they are taking in co-operation with international partners to reach a global agreement on the regulation of advanced forms of artificial intelligence.
The Government are co-operating with international partners both bilaterally and multilaterally to address advanced AI’s regulatory challenges, including via our autumn global AI safety summit. The AI regulation White Paper recognises the importance of such co-operation, as we cannot tackle these issues alone. As per the G7 leaders’ communiqué, we are committed to advancing international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI aligned with shared democratic values.
I thank the Minister for his Answer and commend the Prime Minister for his initiatives in this area. Clearly, advanced AI is epoch-making for the future of humanity and international co-operation is essential. Can the Minister say, first, whether there has been any response from China to the Prime Minister’s initiatives? Secondly, would he agree that one possible role model is the International Atomic Energy Agency as a way of monitoring future developments?
We must recognise that China is ranked number two in AI capabilities globally, and we would not therefore envisage excluding China from any such discussions on how best to deal with the frontier risks of AI. That said, in the way we approach China and involve it in this, we need to take full cognisance of the associated risks. We will therefore engage effectively with our partners to assess the best way forward.