Artificial Intelligence: Regulation Debate
Lord Browne of Ladyton (Labour - Life peer)
Debate with the Department for Science, Innovation & Technology
(1 year, 1 month ago)
Lords Chamber

I am pleased to reassure the noble Lord that I am not embarrassed in the slightest. Perhaps I can come back with a quotation from Yann LeCun, one of the three godfathers of AI, who said in an interview the other week that regulating AI now would be like regulating commercial air travel in 1925. We can more or less theoretically grasp what it might do, but we simply do not have the grounding to regulate properly because we lack the evidence. Our path to the safety of AI is to search for the evidence and, based on the evidence, to regulate accordingly.
My Lords, an absence of regulation in an area that holds such enormous repercussions for the whole of society will not spur innovation but may impede it. The US executive order and the EU’s AI Act gave AI innovators and companies in both these substantial markets greater certainty. Will it not be the case that innovators and companies in this country will comply with that regulation because they will want to trade in those markets, and we will then be left with external regulation and none of our own? Why are the Government not doing something about this?
I think there are two things. First, we are extremely keen, and have set this out in the White Paper, that the regulation of AI in this country should be highly interoperable with international regulation; I think all countries regulating would agree on that. Secondly, I take some issue with the characterisation of AI in this country as unregulated. We have very large areas of law and regulation to which all AI is subject. That includes data protection, human rights legislation, competition law, equalities law and many other laws. On top of that, we have the recently created central AI risk function, whose role is to identify risks appearing on the horizon, or indeed cross-cutting AI risks, and to take them forward. On top of that, we have the most concentrated and advanced thinking on AI safety anywhere in the world to take us forward on the pathway towards safe, trustworthy AI that drives innovation.