AI: Cross-sector Legislation Debate
Lord Vallance of Balham (Labour - Life peer), Department for Science, Innovation & Technology
Lords Chamber
To ask His Majesty’s Government whether they plan to introduce cross-sector legislation on artificial intelligence in 2025.
The Government remain committed to bringing forward AI legislation to realise the enormous benefits of this technology in a safe manner. We are continuing to refine our proposals to deliver this, ensuring that they are proportionate and incentivise investment and innovation. We will launch a consultation later this year. Most AI systems are already regulated at the point of use by the UK’s existing regulators. In response to the AI action plan, the Government are committed to working with regulators to support them in boosting their capabilities.
My Lords, last month I published a report making the case for cross-sector AI legislation. Is it not clear that AI is already impacting, positively and otherwise, cross-sector, cross-society and cross-economy? If we do not have a cross-sector approach through legislation, how will we enable the clarity, the certainty and the consistency of approach which will bring forward the confidence to enable innovation and investment—good for citizen, good for consumer, good for creative, good for British business and good for our country?
I thank the noble Lord for his question; I enjoyed reading his report very much. There are three ways in which this cross-government AI approach will be looked at. First, as I say, the existing regulators will regulate their own areas. They will also be brought together more: the Digital Regulation Cooperation Forum already brings together regulators around AI and has been given more money to ensure that the regulators can join up on this. Secondly, the development of assurance tools, as outlined in the AI Opportunities Action Plan, will allow us to verify that AI systems in actual use rely on validated tools; there will be a market in making sure that the validation system grows and becomes an important way of assuring users. Thirdly, the consultation around the newer models—artificial general intelligence and superintelligence, as it arrives—will require a cross-cutting general piece of work, which is where the consultation that starts later this year comes in.
My Lords, is it not the case that there has to be a balance between AI and big tech on the one hand and the creative industries on the other? Do we not need to make sure that the creative industries, one of our major industries, are protected by any changes in the legislation?
I am sure that the noble Lord is aware that the creative industries are some of the greatest users of AI. Of course, it is important that creativity is protected. That is why a consultation has been put out around the copyright issue, which has been discussed many times in this Chamber. In all walks of life, it is important that we understand what AI brings and where it must be controlled in order to allow other things to happen. That is true not only in the creative industries but in many other areas.
My Lords, the Government failed to sign up to the declaration signed by 60 other countries at the recent Paris AI Action Summit. How much confidence can that now give us that any new AI Bill will prioritise a requirement for AI, in the words of the declaration, to be
“open, inclusive, transparent, ethical, safe, secure … trustworthy”
and sustainable? Given that the Government did sign up to the Seoul communiqué last year and hosted the Bletchley Park summit, are they now going backwards in this respect?
I can assure the noble Lord that the Government are most certainly not going backwards in this respect. I can also assure him that the AI Security Institute which has been set up has driven much of this across the world. It is linked to similar units elsewhere; it is undertaking work on many models that are evolving; and it is making its own work open, including the approach it takes. There is a very robust system being developed to make sure that the UK is at the forefront of this, not in the following stream.
It is very encouraging that the Government’s AI opportunities action plan is proceeding, and I very much welcome it. The Minister just referred to the precautions—including the AI Security Institute, which clearly needs resources—that we need to take to protect interests of various kinds, and to regulators, where it was admitted by the Government that capabilities needed much enhancement. Has the Minister anything further that he can say to give reassurance to those who are concerned?
Yes, regulation is clearly important, and that is why we formed the Regulatory Innovation Office, which is looking at AI among other areas, including AI in healthcare. A number of actions are being taken to boost regulator capability; that is one of the things the Regulatory Innovation Office is working on. The Regulators’ Pioneer Fund is also relevant to boosting the ability of regulators to undertake this. Development of capabilities takes place through the DRCF, the forum of digital regulators that I have referred to, and there will be more in that area. In the spending review, regulators have been encouraged to put in bids relating to boosting their capability in AI.
My Lords, one of the things that the creative industries are seeking—perhaps the most immediate priority—is the transparency of information held by tech companies. Is that going to happen?
As the noble Earl is aware, transparency is one of the key issues in the consultation at the moment. We know that transparency of the use of, and the output from, AI systems is possible and should be encouraged. It requires technological advances to do that fully, but it is exactly what needs to happen to be sure what is being used, how it is being used and how the output relates to the input.
My Lords, I do not know whether my noble friend knows but, this very afternoon, the University of Liverpool, in conjunction with the Parliamentary and Scientific Committee, is holding a meeting here in the House about AI and the law. I wonder whether, in preparation for the cross-sectoral legislation about which the Minister spoke, he can assure the House that the Government are in close touch with the legal profession, because the effect of AI in areas such as the law will be just as great as it is in other areas.
I thank my noble friend. I am unaware of absolutely everything that is going on in the House this afternoon, and I am afraid that I was not aware of that. However, he is right to point out that the professions will be greatly affected by AI, and the legal profession is certainly one of them. An enormous amount of legal work could be done by AI, just as an enormous amount of work can be done with AI across the Civil Service; that is why there is a big push at the moment to adopt AI across the Civil Service. I think the same will happen in other professions, including medicine, law, architecture and many other areas.
I note what the Minister said about remaining committed to AI legislation, but the uncertainty for everybody affected by AI, whether in the tech industry or elsewhere, is a real challenge. Can the Minister flesh out, in some small way, the scope, timing and purpose of planned AI legislation?
I can certainly give the noble Viscount an indication of the scope. As I have said clearly, this is not going to deal with regulation that can be done by existing regulators. The use of AI in existing areas is something for the regulators that are specialists in those areas. It will not deal with the AI assurance tools, which will be developed separately, but it will look at artificial general intelligence and the emergence of new, cutting-edge AI—the things that we know will cut right the way across other areas and require particular attention.
My Lords, perhaps the Minister could tell us why the UK did not sign the Paris declaration and which words the Government wanted removing from that declaration to make it acceptable.
I am very happy to write to the noble Baroness and give her the precise details of that. However, I reinforce that the UK has been at the forefront of this, and the AI Security Institute is one of the most prominent actors in this space around the world.
I am grateful. I draw the House’s attention to my register of interests. Is it the Government’s intention to use the powers in the Product Regulation and Metrology Bill, when enacted, to bring in product requirements based on ISO 42001 relating to AI governance, as a mechanism to bring us some degree of AI assurance through regulation?
I referred to assurance tools, and that will form part of them. The noble Lord is quite right to raise the important area of standards, because they are critical here, and the UK is well linked to all the national and international standards bodies.