Advanced Research and Invention Agency Bill (First sitting) Debate
Public Bill Committees
Q
Tris Dyson: We put together a document in the summer, which we can share with you, that has examples of outcome-based challenge prize funding, both from Nesta Challenges and particularly from the United States. That is obviously mainly the space that we occupy. There were some great examples of where it stimulates and creates whole new industries and sectors. There were also some examples of where there can be quite big mistakes, because you go off down the wrong course.
I know there has been quite a lot of inspiration from DARPA and from the US. One example would be the driverless car in the early 2000s. DARPA ran a series of challenge prizes in the desert around the development of driverless cars. It was literally an annual race where teams from universities would compete to develop vehicles that would outperform one another, and there was prize funding associated at the end of it. That is more or less where driverless cars began. The teams that came out of those universities and the individuals have now been picked up by Google, Uber, Apple and everybody else. It is why a lot of that frontier technology is now being developed on the west coast and the rest of the world is playing catch-up.
Another example would be the Ansari X Prize, which was about building a privately funded spaceship that would carry two passengers. It had a very specific target for how high a sub-orbital flight needed to reach within a two-week period. That created an enormous race for people to build privately funded spaceships, again in the early 2000s. You can see now what has happened in the private space flight industry in the US. The team that won that is now Virgin Galactic and we see every day in our newspapers what has happened to them.
We are a bit newer to this in the UK, but we also have some examples. We concluded a challenge prize just before Christmas that was looking at lower-limb paralysis. It essentially asked: given the dramatic improvements in the fields of artificial intelligence, robotics and sensory technology, why has the wheelchair not changed very much in the last 100 years, except for electrification? That was a global challenge in partnership with Toyota that resulted in some amazing breakthrough systems and products for people with lower-limb paralysis all around the world. A Scottish team called Phoenix Instinct won. They developed a wheelchair that moves with the user, anticipates movement using AI and sensory technology, and has a very lightweight alloy frame that is quite revolutionary from the perspective of a wheelchair user. Those are some examples.
Whether you do a challenge prize or not, I think you would need to do the same thing with ARIA, which has got to focus on areas where there is the most opportunity and where you have a decent hypothesis that technology pathways can be developed in order to solve that problem and encourage activity around that singular thing. That is the whole premise of missions or challenge prizes.
Q
Professor Leyser: Absolutely. I think that the kinds of examples that Tris has just talked about are quite illustrative from that point of view. Typically, the way the current system works is that we would put out a call for applications in a variety of contexts. It might be a completely open call; right across UKRI we run these so-called response-mode funding competitions where people with ideas about what they want to do can apply for funding to do them, whatever they might be. On the whole, those kinds of applications are the sort of bread and butter of really established research organisations: universities, institutes and, through Innovate UK, businesses. A lot of them are also collaborative with industry. It is that kind of grant application process that then goes through peer review, and we try to pick the projects that, as an overall portfolio, will best deliver what the UK needs, both in the short term and, absolutely, in the longer term, building that capacity and capability.
It tends to be established organisations that know the system and how to apply for those kinds of projects, and which have the structures available in their organisations to do that. With ARIA, however, I think there is the opportunity to test a much wider range of models, such as those kinds of competition-type prize approaches that Tris described—he is an expert in those. There is also a fairly well-established system called Kaggle for coding competitions, for example. That potentially reaches a much wider range of people. You do not have to apply; you do not have to have a system that can support that kind of application process. The funding flow is very different: it is a response to the results; it goes to the winner of the competition. As a result, it may be possible to reach a much wider range of people. In that coding space, for example, there are really extraordinary people working in their homes as freelance coders who would find it very difficult to access the classical UKRI system and most of the other funders that currently exist.
I very much hope that we would be able to tap into some of the talent right across the UK that is not in the more established places. That would be one really exciting outcome of that prize model. Where you have a really clear objective—so it is really clear who has won the money, so to speak—it is possible to do that in a way that does not automatically engage the kind of financial management systems that we have to use. For example, are we sure that this money is being spent on what the applicant said it would be spent on? If you are giving somebody the money for having done the research or having delivered the outcome—the car that goes across the desert—you are in a very different situation.
I do think there is a very interesting possibility for ARIA to reach those people who are talented and can contribute in ways that it is much harder to with the standard systems. I hope that we would learn from that and be able to import some of that expertise into the standard system when it was established and really clear that it was providing good value for money in a robust way.
Q
Tris Dyson: Well, more money is better. I think this money needs to be deployed intelligently, so being quite clear on the missions and the focuses is really important. It is even more important when the sums, while still significant, are relatively smaller. Getting those areas right is really important. The examples that were just given about Kaggle and data-based approaches are potentially a useful avenue for some of this, because the R&D investments and sunk costs are relatively low, as opposed to building spaceships or something like that. That would be the sort of calculation you might need to make.
You can also use leverage. One of the areas in which the UK has been pioneering is regulatory sandboxes, for example through the Regulators’ Pioneer Fund, which is administered through UKRI. But some regulators, off their own bat, have also been setting up and developing sandboxes that allow innovators to play with datasets in an environment where the regulator is giving them a little bit more permission than they might have had otherwise. That in itself is an incentive, particularly when you are playing around with datasets.
You can think of examples where we have got significant strengths. One of the things we have talked about a lot during the pandemic—more recently, at least—is the UK’s strengths in genomics research. That means we have got an enormous range of data that could be made available to people through the likes of Genomics England, which in itself is an inducement or an encouragement above and beyond the financial. So being clever—boxing clever—with the money is important.
In terms of ruthlessness, part of this comes to the culture. The ARIA team will have to establish a culture where they trial things out, set targets and objectives and have constant reviews where they get together and decide whether to kill things off. That is clearer when you have defined missions or objectives that you are working towards. It is much harder when you are fostering lots and lots of different things—it is hard to compare X with Y.
Professor Leyser: From my point of view, the question I would ask is not so much how much money ARIA should have but what proportion of the public sector R&D spend should go into this way-out-there, high-risk, transformative type of research and, of that, how much should be in ARIA. It is a proportionality question and, as Tris said at the beginning, it comes at a time when there is an aim to drive up UK investment in R&D to 2.4% of GDP—hopefully beyond that, because 2.4% is the OECD average and I think we should aim to be considerably better than average—which is quite a stretch target for us. We do incredibly well—the quality and amount of research and innovation in this country is extraordinary—given that we currently invest only 1.7% of our GDP. So I think the opportunities to build that really high-quality, inclusive knowledge economy, given how well we perform with such a small proportion of GDP going into R&D, are incredibly high.
On that rising trajectory, with us aiming for that 2.4% and beyond, I think spending a small proportion of that on this edge-of-the-edge research capacity and capability is the right thing to do. I would look at the budget in that context, as a percentage of the overall R&D spend. People have been comparing the current ARIA budget with the budget of organisations such as DARPA, but if you look at it as a percentage, you get a very different number because, obviously, the US spends a much higher proportion of its, in any case, bigger budget on R&D than we do. That is the important question from that point of view.
How will we know that it has succeeded, and what would one expect the percentage failure to be? I agree with Tris that it is incredibly difficult to predict. There is also serendipity and other things to factor in. If you set yourself a fantastic target of solving a particular problem or producing a particular new product and you fail to do that, none the less, along the way you might discover something extraordinary that you can apply in another field.
That high-risk appetite feeds into the question, again, of how much money or what proportion of the overall R&D portfolio should be invested in that way. One has to think about risk in R&D in that portfolio way. It is generally considered in investment markets that really high-performing investment strategies are built as a portfolio. You invest in stuff that you know will deliver in an incremental sort of way, and then you invest in the really high-risk, crash-or-multiply parts of the system. That is very much how one has to think about ARIA.
In that domain, where you have a very high probability of failure—that is what high risk means—but also an extraordinary probability of amazing levels of transformative success, it is a dice roll. The total number of projects will be relatively small, so it is very hard to predict an absolute number or proportion that one would expect, and one should not need to—that is what high risk, high reward means.
Q
Professor Bond: I would probably have a board and another structure. Certainly one of the super-important things that works in the US ARPA is that the programme managers are challenged in a sort of dragons’ den. It is a friendly dragons’ den, but they have to convince very capable, technical people that they can do what they do. That is one structure that would need to be slotted into place.
As for the board, I think you could have a slightly unusual board. I do not think it needs to be big; it could be very small. It could be less than 10 people, for sure, but you could also expand it a little bit with something that is a bit like a non-executive director, or NED—somebody from a different area with a rather different take on things. The balance will be important. You want a balance of people; I think you want some very radical thinkers in there, some people who know how things work in industry and some people who know how things work in academia, and so on.
As for the autonomy, I am personally a big believer in giving the chair and the director enormous amounts of autonomy. You pick people you are willing to bet on and then hand them a lot of trust. In fact, if you want to define the ARPA model at some level, it is this: it is a different model of trust. Bureaucracies occur because although we like to trust people, we have to throw up lots of rules and regulations to make sure that things work the way we feel they should work. What you are doing in creating this kind of model is handing trust to people. You want people with high integrity who are brilliant, and then you let them get on with it, and you trust that they will do something that reflects their character.
I do not think the board needs to be big; I think it needs to be very good. There should be a small number of outstandingly good people, who can tap into a broader network and bring in people to give a different vision and view from that which you will only ever get with a small number of people.
The Chair: Before I go to Stephen Flynn, can I just have an indication of who wants to ask questions? I have got Sarah, Daniel, Aaron, Jane—okay. Thanks very much indeed.
Professor Mazzucato: Can I make one super-quick point on what Philip just mentioned?