Advanced Research and Invention Agency Bill (Second sitting)

Debate between Stephen Metcalfe and Sarah Owen
Wednesday 14th April 2021

Public Bill Committees
Stephen Metcalfe

Q Good afternoon, and thank you for joining us and for your excellent contribution. Anne, you made a very interesting point about the independence of ARIA, to avoid it potentially being pointed to as a political failure. If you are investing in high-risk, high-reward research, there will be failure; that is undoubtedly true. May we ask your advice on how we should measure ARIA’s success over its early years, before there is potentially any output that has demonstrated a transformational benefit to society? On top of that, could you give us some advice on how project managers should go about selecting projects to explore? Should it be just on the basis of interesting science, or should there be a vision of the commercialisation of that science at the end, to motivate them? We are only going to be able to fund a certain number of projects, and presumably applications will outstrip the funding fairly quickly.

Professor Glover: How we measure success in the early years is a very important question. I am not going to give you an exact answer, but what I might say is that maybe we should not try. That would be unusual, wouldn’t it? That is what I meant earlier about not just following the formula of, “You need to tick these boxes to demonstrate success.” Of course, you would hope that whoever is leading ARIA would have an idea of how the innovation ecosystem supported by ARIA is developing. They might have some ideas about the number of applications and where they are coming from, have a good look at and analyse that, and look at the amount of interdisciplinary or multidisciplinary research that comes forward. That is always quite hard to fund. Historically, when I have been involved in such things, interdisciplinary research tends to get kicked around different agencies: “This is more for you.” “No, this is more for you.” Everybody is worried about their budget and thinks, “If you fund it, we won’t have to fund this from our budget.” Thinking about the number of applications that could come from a broad range of different disciplines would be good. I am not answering your question directly. I am just saying that it is very easy to say, “Let’s have a way of measuring success,” but sometimes that can be stifling.

It is a bit like—perhaps not on the timescale of years that ARIA will work to—the time of year when we plant seeds in our garden or wherever. If you want to measure how well a seed is germinating and you keep pulling it up to have a look at it, you are really going to set it back, so sometimes you just need to think, “I’m hoping that in four or five months’ time this is going to be a broad bean plant with broad beans on it. I just need to wait and see.” I know that that is difficult to do.

The second thing you asked about is commercialisation. I cannot for the life of me remember who said this, but someone once said that there are two types of research: applied research and research not yet applied. That is quite true. There might be some areas where you think there is a very easy market, but if we look back and learn from experience, we find that an awful lot of valuable research has been developed with no commercial aim in mind. The whole area of medical diagnostics, for example, came out of pure research. There was no commercialisation; it was just a fundamental biological problem that was being investigated. Some of the outcomes of that research led to molecules called monoclonal antibodies. They make a beautiful, specific diagnostic—supremely sensitive—that can pick out particular molecules of interest that might tell you whether you have a particular disease or have been exposed to a particular compound or whatever.

In renewable energy, or an area around that, you might understand that there will be a lot of potential commercial partners and opportunities. In some other areas, perhaps not. This might be an opportunity to think about what the relationships would be like between ARIA and existing research funding, because it might be part of an ecosystem. I would hope that there were distinct roles for UKRI and ARIA but very good communication between the two, as well as with very many other stakeholders, in order to identify areas that might not be suitable for UKRI funding but that might have a strong commercial or development potential that ARIA would be much more adept at supporting.

Sarah Owen (Luton North) (Lab)

Q It is a pleasure to serve under your chairmanship, Mrs Cummins. Anne, you talked about citizen buy-in. That would take an element of trust, so my two questions are around that. What could or would good transparency look like without stifling innovation, in both of your opinions? Secondly, if we do not have FOIs and we do not know precisely how this will be reported to us, do we need an ethical baseline to ensure that we are spending public money on the greater good?

Professor Glover: On the citizen buy-in, I think that would be reasonable to try to achieve. I do not think that it would be insurmountably difficult in many ways. If I give you the example of some of the grand challenges that were funded at European Commission level, it got down to three brilliant projects. Which one would we fund? If the European Commission made the decision about which one was going to be funded, inevitably different member states would complain: “Why is that getting funded in that member state? This other project was just as good.”

All sorts of problems can arise. Whereas, if you asked European Union citizens which one they would like to be funded, they would say what matters most to them. That is quite an interesting insight into the mind of the European citizen, or it would have been, in that particular instance.

I do not think you are in any way betraying confidences; you are talking about whether it is a project looking at delivering limitless amounts of sustainable energy, or a project mapping the functioning of the human brain so that you might be able to exploit that in other ways. You are not saying how you are going to do those things; you are not revealing confidences or information that would be inappropriate or would undermine those doing the research. I think we might be worrying needlessly about that.

As to the ethical baseline, of course this has to be ethical. Tabitha and I are probably agreeing too much with each other, or perhaps we are going back to the same thing. If you are not open and transparent, you will have problems. That is just not rocket science. For example, there are many agencies that are not part of Government but that might receive governmental funding. Scotland’s National Academy, the Royal Society of Edinburgh, is one of those. We are completely independent from Government. We get funding from the Scottish Funding Council, which gets its money from Government. We are not subject to FOI requests but we voluntarily behave as if we are. If we did not do that, people would say, “They’re being directed by Government, so the reports that come out of the RSE will be influenced by Government.”

If we say, “This is how we approach it,” and somebody comes to us and asks for information, we behave as if it were an FOI request. It has never been too onerous. The only onerous time for me with FOI requests was when I was chief scientific adviser to the President of the European Commission, when it became unrealistic, because I had such a small team and there were so many FOI requests. Generally, that is the direction we should be moving in. You do not want to hobble a new agency by making it seem that any aspect of it is secretive. To be able to demonstrate ethical compliance, you need that transparency.

Tabitha Goldstaub: Ethical transparency is key, but we also have an opportunity with ARIA to set a robust, rigorous ethical review process that is fit for the AI era. We do not currently have that.

There has been a tremendous amount of attention on the public-facing ethical principles and frameworks for assessing AI products, but relatively little on the frameworks and practices for assessing research, or on how to launch and manage a data science and AI ethics review board in a way that would cut across disciplinary, organisational, institutional or national boundaries, as ARIA would need to do.

If ARIA can work with others on this problem, such as the Health Foundation, which is collaborating with the Ada Lovelace Institute, or the Alan Turing Institute, it could achieve its mission responsibly, become a beacon for other ARPA-like programmes and tolerate failure much more safely, because ultimately we need to break new ground, and to do so with an ethics review, specifically for research that has anything to do with artificial intelligence. If we can get that right, it would enable us to set real international standards. It is both a risk and a huge opportunity for ARIA.