Lord Young of Cookham debates involving the Department for Science, Innovation & Technology

Universities: Sensitive Research

Tuesday 30th April 2024

Lords Chamber

Asked by
Lord Young of Cookham

To ask His Majesty’s Government what steps they are taking to protect sensitive research at universities from national security threats.

The Parliamentary Under-Secretary of State, Department for Science, Innovation and Technology (Viscount Camrose) (Con)

The Government are implementing a range of legislative and non-legislative measures, including the Research Collaboration Advice Team, which provides advice to academia on national security risks in international collaboration. The integrated review refresh committed to review the effectiveness of existing protections. The Department for Science, Innovation and Technology is leading this review, and the Deputy Prime Minister announced last week that the Government will consult on the response in the summer.

Lord Young of Cookham (Con)

I am grateful to my noble friend, but are our universities not compromising their independence by becoming over-reliant on China? Some 25% of the students at UCL, or 10,000, are Chinese, which risks the infiltration of academic research and, in the words of the Deputy Prime Minister, “coercion, exploitation and vulnerability”. While I welcome the recent Statement, what steps will the Government take to replace lost Chinese funding for our universities, so that the UK remains at the forefront of technological research?

Viscount Camrose (Con)

I thank my noble friend for the question. The first thing to say is that the independence of universities is absolutely critical to the quality of their research. While the integrated review refresh has of course indicated a great many concerns about working closely with China, and has necessitated a reduction in academic collaboration with China, I hope that our recent reassociation with the Horizon programme, and the fact that a number of other third countries are also considering, or are very close to, associating with Horizon, will go some way towards providing a new pool of collaboration partners in academic research.

Artificial Intelligence (Regulation) Bill [HL]

Lord Young of Cookham (Con)

My Lords, one of the advantages of sitting every day between my noble friends Lord Holmes and Lord Kirkhope is that their enthusiasm for a subject on which they have a lot of knowledge and I have very little passes by a process of osmosis along the Bench. I commend my noble friend on his Bill and his speech. I will add a footnote to it.

My noble friend’s Bill is timely, coming after the Government published their consultation outcome last month, shortly after the European Commission published its Artificial Intelligence Act and as we see how other countries, such as the USA, are responding to the AI challenge. Ideally, there should be some global architecture to deal with a phenomenon that knows no boundaries. The Prime Minister said as much in October:

“My vision, and our ultimate goal, should be to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe”.


However, we have only to look at the pressures on existing international organisations, such as the United Nations and the WTO, to see that this is a big ask. There is a headwind of protectionism, and at times nationalism, making collaboration difficult. It is not helped by the world being increasingly divided between democracies and autocracies, with the latter using AI as a substitute for conventional warfare.

The most pragmatic approach, therefore, is to go for some lowest common denominators, building on the Bletchley Declaration, which talks about sharing responsibility and collaboration. We want to avoid regulatory regimes that are incompatible, which would lead to regulatory arbitrage and difficulties with compliance.

The response to the consultation refers to this in paragraphs 71 and 72, stating:

“the intense competition between companies to release ever-more-capable systems means we will need to remain highly vigilant to meaningful compliance, accountability, and effective risk mitigation. It may be the case that commercial incentives are not always aligned with the public good”.

It concludes:

“the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured”.

My noble friend’s Private Member’s Bill is a heroic first shot at what that legislation might look like. To simplify, there is a debate between top-down, as set out in the Bill, and bottom-up, as set out in the Government’s response, delegating regulation to individual regulators with a control function in DSIT. At some point, there will have to be convergence between the two approaches.

There is one particular clause in my noble friend’s Bill that I think is important: Clause 1(2)(c), which states that the function of the AI authority is to,

“undertake a gap analysis of regulatory responsibilities in respect of AI”.

The White Paper and the consultation outcome make numerous references to regulators. What I was looking for, and never found, was a list of all our regulators and what they regulate. I confess I may have missed it, but without such a comprehensive list, any strategy risks being incomplete, because we do not have the full picture.

My noble friend mentioned education. We have a shortage of teachers in many disciplines, and many complain about paperwork and are thinking of leaving. There is a huge contribution to be made by AI. But who is in charge? If you put the question into Google, it says,

“the DfE is responsible for children’s services and education”.

Then there is Ofsted, which inspects schools; Ofqual, which deals with exams; and the Office for Students. The Russell Group of universities have signed up to a set of principles to ensure that students are taught to become AI literate.

Who is looking at the huge volume of material with which AI companies are drowning schools and teachers as new and more accessible chatbots are developed? Who is looking at AI for marking homework? What about AI for adaptive testing? Who is looking at AI for home tuition, which parents increasingly use? Who is looking at AI for marking papers? As my noble friend said, what happens if they get it wrong?

The education sector is trying to get a handle on this technological maelstrom, and there may be some bad actors in it. However, the same may be happening elsewhere, because the regulatory regimes lack clarity. Hence, should my noble friend’s Bill by any chance not survive in full, Clause 1(2)(c) should.