Artificial Intelligence (Regulation) Bill [HL] Debate
Lord Young of Cookham (Conservative - Life peer)
Debate with the Department for Science, Innovation & Technology
(9 months ago)
Lords Chamber

My Lords, one of the advantages of sitting every day between my noble friends Lord Holmes and Lord Kirkhope is that their enthusiasm for a subject on which they have a lot of knowledge and I have very little passes by a process of osmosis along the Bench. I commend my noble friend on his Bill and his speech. I will add a footnote to it.
My noble friend’s Bill is timely, coming after the Government published their consultation outcome last month, shortly after the European Commission published its Artificial Intelligence Act and as we see how other countries, such as the USA, are responding to the AI challenge. Ideally, there should be some global architecture to deal with a phenomenon that knows no boundaries. The Prime Minister said as much in October:
“My vision, and our ultimate goal, should be to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe”.
However, we only have to look at the pressures on existing international organisations, like the United Nations and the WTO, to see that that is a big ask. There is a headwind of protectionism, and at times nationalism, making collaboration difficult. It is not helped by the world being increasingly divided between democracies and autocracies, with the latter using AI as a substitute for conventional warfare.
The most pragmatic approach, therefore, is to go for some lowest common denominators, building on the Bletchley Declaration which talks about sharing responsibility and collaboration. We want to avoid regulatory regimes that are incompatible, which would lead to regulatory arbitrage and difficulties with compliance.
The response to the consultation refers to this in paragraphs 71 and 72, stating:
“the intense competition between companies to release ever-more-capable systems means we will need to remain highly vigilant to meaningful compliance, accountability, and effective risk mitigation. It may be the case that commercial incentives are not always aligned with the public good”.
It concludes:
“the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured”.
My noble friend’s Private Member’s Bill is a heroic first shot at what that legislation might look like. To simplify, there is a debate between top-down, as set out in the Bill, and bottom-up, as set out in the Government’s response, delegating regulation to individual regulators with a control function in DSIT. At some point, there will have to be convergence between the two approaches.
There is one particular clause in my noble friend’s Bill that I think is important: Clause 1(2)(c), which states that the function of the AI authority is to,
“undertake a gap analysis of regulatory responsibilities in respect of AI”.
The White Paper and the consultation outcome have numerous references to regulators. What I was looking for and never found was a list of all our regulators, and what they regulate. I confess I may have missed it, but without such a list any strategy risks being incomplete, because we do not have the full picture.
My noble friend mentioned education. We have a shortage of teachers in many disciplines; many complain about paperwork and are thinking of leaving. There is a huge contribution to be made by AI. But who is in charge? If you put the question into Google, it says,
“the DfE is responsible for children’s services and education”.
Then there is Ofsted, which inspects schools; there is Ofqual, which deals with exams; and then there is the Office for Students. The Russell Group of universities has signed up to a set of principles ensuring that students will be taught to become AI literate.
Who is looking at the huge volume of material with which AI companies are drowning schools and teachers, as new and more accessible chatbots are developed? Who is looking at AI for marking homework? What about AI for adaptive testing? Who is looking at AI being used for home tuition, as increasingly used by parents? Who is looking at AI for marking exam papers? As my noble friend said, what happens if they get it wrong?
The education sector is trying to get a handle on this technological maelstrom, and there may be some bad actors in there. The same may well be happening elsewhere, because the regulatory regimes lack clarity. Hence, should by any chance my noble friend’s Bill not survive in full, Clause 1(2)(c) should.