Covid-19: Forecasting and Modelling Debate
Steve Baker (Conservative, Wycombe)
(2 years, 11 months ago)
Westminster HallWestminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.
Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.
This information is provided by Parallel Parliament and does not form part of the official record.
I beg to move,
That this House has considered forecasting and modelling during covid-19.
It is a pleasure to speak under your chairmanship, Sir Edward. I speak not to bury science, but to praise it. During the covid pandemic there has been some remarkable, wonderful science; I just question to what extent that includes the modelling and the forecasts that have come from it. Thanks to some questionable modelling that was poorly presented and often misrepresented, never before has so much harm been done to so many by so few, based on so little—on questionable and potentially flawed data.
I believe that the use of modelling is becoming a national scandal. That is not just the fault of the modellers; it is how their work was interpreted by public health officials and the media—and yes, by politicians, including the Government, sadly. Modelling and forecasts were the ammunition that drove lockdown and created a climate of manipulated fear. I believe that that creation of fear was pretty despicable and unforgivable. I do not doubt that modelling is important or that there has been some good modelling, but too often it has been drowned out by hysterical forecasts. I am not, as Professor Ferguson implied, one of those with an “axe to grind”. I do, however, care about truth and believe that if someone influences policy, as the modellers and Imperial College London have done, they should be questioned. Frankly, they have not been questioned enough.
Above all, I want to understand why Government, parts of the media and the public health establishment became addicted to these doomsday scenarios, and then normalised them in our country with such depressing and upsetting consequences for many. I do not pretend to be an expert; I am not. I defended my own PhD at the end of last year, but it is not in epidemiology and I do not pretend to be particularly knowledgeable about that. But depending on time—I know others want to come in as well—I will quote from 13 academic papers and 22 articles authored by a total of approximately 100 academics.
This is a story of three scandals, and the first one took place 21 years ago. In 2001, we faced the foot and mouth emergency. We reacted drastically by slaughtering and burning millions of animals, and farmer suicides and bankruptcies followed. That policy was allegedly heavily influenced by Imperial College modelling and Professor Ferguson. Since foot and mouth, two peer-reviewed studies examined the method behind that particular madness. I quote from them now to show there are practical and ethical questions over modelling going back two decades.
In a 2006 paper titled “Use and abuse of mathematical models: an illustration from the 2001 foot and mouth disease epidemic in the United Kingdom”—I apologise for these wordy, long titles; they are not that catchy—the authors confirmed that Ferguson’s model
“probably had the most influence on early policy decisions”
and
“specifically, the introduction of the pre-emptive contiguous culling policy”.
That is the mass slaughter of animals near infected farms. The authors said that the consequences were “severe” and
“the models were not fit for the purpose of predicting the course of the epidemic”
—not a good start. They remain “unvalidated”. Their use was “imprudent” and amounted to
“the abuse of predictive models”.
Devastatingly, the authors wrote
“The UK experience provides a salutary warning of how models can be abused in the interests of scientific opportunism.”
It is difficult to find a more damning criticism of one group of scientists by another.
A 2011 paper, “Destructive tension: mathematics versus experience—the progress and control of the 2001 foot and mouth disease epidemic in Great Britain”—bit of a mouthful—by four academics said the models that supported the culling policy were “severely flawed” and based on flawed data with “highly improbable biological assumptions”. The models were
“at best, crude estimations that could not differentiate risk”.
That is not a very good “at best”. At worst, they were “inaccurate representations”.
Sadly, the paper said, impatience for results
“provided the opportunity for self-styled ‘experts’, including some veterinarians, biologists and mathematicians, to publicise unproven novel options.”
Some of the scientific work—some of it modelling, some of it not, with some modelling by Ferguson and some not—was cited as “unvalidated” and “severely flawed”, with inaccurate data on “highly improbable assumptions” leading to “scientific opportunism”. Is anybody reminded of anything more recent that would suggest the same?
I scroll forward 20 years. As with foot and mouth, with covid we had a nervous Government presented with doomsday scenarios by Imperial—the 500,000 dead prediction—that panicked them into a course of profound action with shocking outcomes. After the lockdown had gone ahead, Imperial publicised on 8 June a study by, I think, seven of them arguing the justification for lockdown. It claimed that non-pharmaceutical interventions saved more than 3 million lives in Europe. Effectively, Imperial marked its own homework and gave itself a big slap on the back.
That work is now being challenged. Because of time, I will quote only a small selection. In a paper entitled, “The effect of interventions on COVID-19”, 13 Swedish academics—Ferguson ain’t popular in Sweden, I can tell Members that much—said that the conclusions of the Imperial study were not justified and went beyond the data. Regensburg and Leibniz university academics directly refuted Imperial College in a paper entitled “The illusory effects of non-pharmaceutical interventions on COVID-19 in Europe”, which said that the authors of the Imperial study
“allege that non-pharmaceutical interventions imposed by 11 European countries saved millions of lives. We show that their methods involve circular reasoning. The purported effects are pure artefacts, which contradict the data. Moreover, we demonstrate that the United Kingdom’s lockdown was both superfluous and ineffective.”
I am not saying that this stuff is right; I am just saying that there is a growing body of work that is, frankly, taking apart Imperial’s. Remember, we spent £370 billion on lockdown that we will never get back. I could continue with other quotes, but I think Members get the flavour.
Moreover, a substantial number of other papers now question not Imperial per se but the worth generally of lockdowns. A pre-print article by four authors, “Effects of non-pharmaceutical interventions on COVID-19: A Tale of Three Models”, said:
“Claimed benefits of lockdown appear grossly exaggerated.”
In another paper, three authors found no clear, significant benefit of lockdowns on case growth in any country. Other papers continue that theme. I will quote one more, on adults living with kids. Remember: we shut schools because we were scared that kids would come home and infect older people, who would then die. This paper, in The BMJ, found
“no evidence of an increased risk of severe COVID-19 outcomes.”
We shut down society and schools just in case, doing extraordinary harm to people’s lives, especially young people. I am not a lockdown sceptic, as Ferguson casually describes some of his critics, but I am becoming so. Do you know why, Sir Edward? Because I read the evidence, and there is a growing body of it. In fact, there is one quote that I did not read out. There was a study of lots of countries that had lockdowns and lots that did not, and the data was inconclusive.
The third element of the scandal is the recent modelling. Swedish epidemiologists looked at Imperial’s work and compared it with their own experience. Chief epidemiologist Anders Tegnell said of Imperial’s work that
“the variables…were quite extreme…We were always quite doubtful”.
Former chief epidemiologist Johan Giesecke said Ferguson’s model was “almost hysterical”. In the House of Lords, Viscount Ridley talked of a huge discrepancy and flaws in the model and the modelling. John Ioannidis from Stanford University said that the “assumptions and estimates” seemed “substantially inflated”.
There was a second example last summer. In July 2021, the good Professor Ferguson predicted that hitting 100,000 cases was “almost inevitable”. He told the BBC that the real question was whether we got to double that or even higher. That is where the crystal ball starts to fail: we got nowhere near 200,000, and we got nowhere near 100,000. There was nothing inevitable about Professor Ferguson’s inevitability, and his crystal ball must have gone missing from the start. In The Times, he blamed the Euros for messing up his modelling because—shock horror—people went to pubs a lot to watch the games during the competition. When the tournament finished—shock horror—they did not. That seems to be the fundamental problem: where reality comes up against models, reality steamrollers them because models cannot cope with the complexity of real life. To pretend that they can and even that they are useful, when so many of them have proved not to be, is concerning.
Ferguson is only one of many people—especially in Independent SAGE, but also in SAGE—who did not cover themselves in glory. Raghib Ali—a friend of my hon. Friend the Member for Wycombe (Mr Baker), who I am delighted is present—is one of the heroes of covid. He noted that many left-wing SAGE members
“repeatedly made inaccurate forecasts overestimating infections”.
Very often, they were falsely described on the BBC.
I am grateful to my hon. Friend for mentioning my friend and constituent Raghib Ali, who has indeed been one of the absolute heroes of this pandemic—not only in his advice to us all, including online, but through his service in hospitals. I hope my hon. Friend will not mind my saying that I do not think any of us can speak for Raghib about his opinion of modelling, and I know my hon. Friend is not trying to.
I quite agree, and I thank my hon. Friend for that, but I am deeply grateful to Raghib and other people for speaking out. Just for the record, the communist Susan Michie, who is quoted quite often by the BBC, is not a medical doctor, a virologist or an epidemiologist. She is a health psychologist, so why on earth is she talking about epidemiology?
The third scandal took place this winter. Imperial, the London School of Hygiene and Tropical Medicine and others—I think they included Warwick—predicted 5,000 daily covid deaths, with 3,000 daily deaths as the best-case scenario. They were hopelessly inaccurate, and at this point the tide really begins to turn. Dr Clive Dix, a former vaccine taskforce head, said:
“It’s bad science, and I think they’re being irresponsible. They have a duty to reflect the true risks, but this is just headline grabbing.”
As I say, the tide is turning. Oncology professor Angus Dalgleish describes Ferguson’s modelling as “lurid predictions” and “spectacularly wrong”. The great Carl Heneghan, another scientist who has emerged with great credit for his honesty and fairness of comment, says:
“it’s becoming clearer all that ministers see is the worst-case scenario”.
Professor Brendan Wren says:
“Dodgy data and flawed forecasts have become the hallmarks of much of the scientific establishment”—
what a damning quote!—
“which has traded almost exclusively in worst-case scenarios...this must stop now.”
I agree.
I will wind up in the next two to three minutes—I will speak for no longer than 15 minutes because other people wish to get in, and I am always mindful of that. What is the result of all this? The result, as UCL’s Professor Francois Balloux notes, is a
“loss of trust in government and public institutions for crying wolf.”
That is just it. We have had hysterical forecasts, models taken out of context, and worst-case scenarios normalised.
In the Army, there is something called the most dangerous course of action, and there is something called the most likely course of action. To sum up in one sentence how we got this wrong, we have effectively taken the most dangerous course of action and collectively—the politicians, media, scientists and health professionals—presented that as the most likely course of action, but it was not. Why did politicians say, “Follow the science” as a way of shutting down debate, when we know that science is complex and that our outputs are dependent on our inputs? It was down to public-health types, whose defensive decision making would only ever cost other people’s jobs, other people’s health, other people’s sanity, other people’s education and other people’s life chances.
We know that the Opposition supported lockdown from the word go, but a little more opposing might have been helpful. The BBC and the Guardian have been salivating at state control and doomsday scenarios. Against this tsunami of hysteria and fear, thank God for The Spectator, The Telegraph and, yes, the Daily Mail for keeping alive freedom of speech and putting forward an alternative, which is now being increasingly scientifically vindicated. I accept that lockdown was understandable at first—I get that—but I believe the continuation of lockdown after that first summer was an increasingly flawed decision.
In wrapping up, I have some questions. To Professor Ferguson and the doomsday modellers: why are so many of your fellow academics disputing your work and your findings? To the BBC, as our state broadcaster: why did you so rarely challenge Ferguson, SAGE or Independent SAGE? Why did you misrepresent experts, and why did you allow yourself to become the propaganda arm of the lockdown state? To the Government: how could we have been so blinkered that we thought that following the science meant shutting down scientific debate? Why did we never put other datasets in context for the British people, or even in the contexts in which these profound and enormous decisions were made? Why did we think that it was in our nation’s interests to create a grotesque sense of fear to manipulate behaviour? SAGE and Independent SAGE kept on getting it wrong. To the public health types, I quote Professor Angus Dalgleish again:
“Flailing around, wallowing in hysteria, adopting impractical policies and subverting democracy, the Chief Medical Officer is out of his depth. He has to go if we are ever to escape this nightmare.”
He is not a journalist; he is an oncologist—a senior oncologist.
Twice in 20 years, we have made some pretty profound and significant errors of judgment, using modelling as a sort of glorified guesswork. I suggest respectfully to the Government that, after foot and mouth and covid, never again should they rely on dubious modelling, regardless of the source and regardless of the best intent. I am sure that Imperial and all these other people do the best that they can, and am very happy to state that publicly. However, why has so much of their work been described—and I will use the words of other academics—as “unvalidated”, “flawed”, “not fit for purpose”, “improbable”, “almost hysterical”, “overconfident”, “lurid”, “inflated”, “pessimistic”, “spectacularly wrong”, “fraudulent” and as “scientific opportunism”?
Thank you very much, Sir Edward. I begin by referring to the declarations that I have made in connection to the Covid Recovery Group.
I am a professional aerospace and software engineer—at least I was in my former life. I have an MSc in computer science, and am very interested in models. However, there is an old joke among engineers, which derives from a “Dilbert” cartoon, that the career goal of every engineer is not to be blamed for a major catastrophe. I wonder whether that spirit infuses not only expert advice but modelling in particular. We are all indebted to The Spectator for its data hub, which shows how data has worked out against models. As anyone can see by going to data.spectator.co.uk, it is the same story again and again: enormous great mountains of death projections, and underneath them the reality of much lower lines. I will leave it to people to look for themselves at the data, rather than trying to characterise the curves for Hansard.
There is a great deal to be done in terms of institutional reform of the way in which modelling is done and informs public policy. That is a very old problem; I found a great article in Foreign Affairs that goes back a long time, to the post-war era, about this problem. It is time we did something about it, through institutional reform. The situation is now perfectly plain: under the Public Health (Control of Disease) Act 1984, even our most basic liberties can be taken away with a stroke of a pen if a Minister has been shown sufficiently persuasive modelling—not even data—that tells them that there is trouble ahead.
I have put this on the record before, and I hope that my right hon. Friend the Prime Minister will not mind. Before we went into the 2020 lockdown, he called me; I was amazed to be at home and to have the Prime Minister of the UK call me. “Steve, I have been shown death projections—4,500 people a day and the hospitals overwhelmed.” I gave him two pieces of advice: “First, if you really believe that we are going to have 4,500 people a day die, you’d better do whatever it takes to prevent that from happening,” which is not advice that anyone would have expected me to give, but that is what I said, and, “Secondly, for goodness’ sake, go and challenge the advice—the data.”
That is why Carl Heneghan, Raghib Ali, Tim Spector and I, whether in person or virtually, were seen in Downing Street, and were there to challenge the data. By Monday, Carl Heneghan had taken the wheels off those death projections, by which the Prime Minister had, disgracefully, been bounced, using a leak, into the lockdown. That is absolutely no way to conduct public policy. However, the reason someone—we will not speculate who—bounced the Prime Minister is that they had been shown those terrifying death projections, which could not possibly be tolerated. Those projections were wrong.
It is monstrous that millions of people have been locked down—effectively under house arrest—have had their businesses destroyed and have had their children prevented from getting an education. Any of us who visit nursery schools meet children, two-year-olds, who have barely socialised. We cannot even begin to understand the effects on the rest of their lives. It is not the modellers’ fault, and I do not wish to condemn modellers. They are technical people, doing a job they are asked to do. We have to ask them to do a different and better job—one which does not leave them, like the old joke about engineers, afraid of being responsible for a major catastrophe.
As my friend Professor Roger Koppl said in his book “Expert Failure”, experts have all the incentives to be pessimistic because if they are pessimistic and events turn out better, they are not blamed. I am sorry: I am not blaming them personally, but I am blaming the whole system for allowing this to arise. The extraordinarily pessimistic models plus the bouncing of a Prime Minister did so much harm.
We need to conduct institutional reform. In relation to models, Mike Hearn, a very senior software engineer, has published a paper available on my website. It is a summary of methodological issues in epidemiology. There are about seven points—an extraordinary set of arguments: things such as poor characterisation, statistical uncertainty and so on, which I have no time to get into. The fundamental point is that we must now have an office of research integrity. The job of that office would be to demand—to insist—that the assumptions going into models and the nature of the models themselves were of a far higher quality.
Finally, to go back to an area of my own expertise, I encourage any software engineer to look at the model code that was released.
I think it should be in the Cabinet Office, because we see that scientific advice applies right across Government.
The code quality of the model that was released was really not fit for a hobbyist. The irony is that the universities that do modelling will overwhelmingly have computer science departments. For goodness’ sake, I say to modellers, go and talk to software engineers and produce good-quality code. For goodness’ sake, stop using C++. People are using, as they so often do, the fastest computer programming language, but also the most sophisticated and dangerous. As a professional software engineer, the first thing I would say is, “Don’t use C++ if you don’t have to. Models don’t need to; they can run a bit slower. Use something in which you can’t make the desperately poor coding errors that were in that released model code.” That is really inexcusable, and it fulfils all the prejudices of software engineers against scientists hacking out poor-quality code. As I think people can tell, I feel quite strongly about that, precisely because these poor modellers have had unacceptable burdens placed on them. All the incentives for them to be pessimistic can now be seen in the data. This all has to be changed with an office of research integrity.
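The repeatability being asked for here can be sketched in a few lines. This is a hypothetical toy outbreak simulation of my own, not the released Imperial code: the point is simply that when every source of randomness flows from one explicit, seeded generator, any run can be reproduced exactly from its seed.

```python
import random

def simulate_outbreak(seed, generations=10, max_secondary=5):
    """Toy branching-process epidemic: new cases per generation."""
    rng = random.Random(seed)  # one explicit RNG; no hidden global state
    cases, history = 1, []
    for _ in range(generations):
        # each current case infects between 0 and max_secondary others
        new_cases = sum(rng.randint(0, max_secondary) for _ in range(cases))
        cases = min(new_cases, 10_000)  # cap to keep the toy bounded
        history.append(cases)
    return history

# Identical seeds give identical runs, bit for bit:
assert simulate_outbreak(42) == simulate_outbreak(42)
```

With multithreaded C++ sharing mutable state, thread scheduling becomes an extra, unseeded source of randomness, which is exactly what can make runs unrepeatable.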
I congratulate my hon. Friend the Member for Isle of Wight (Bob Seely) on securing this very important debate and making an excellent speech. I have no wish to repeat the brilliant research that he recited, but he did highlight the repeated failures of modelling throughout the pandemic, not just the modelling but how it is being used. The models have not been out by just a few per cent, as he said, but often by orders of magnitude. The way that the models have been used has had life-changing impacts on people across the country.
Before I was a politician, I was a science teacher. One of the joys of teaching science to teenagers is conducting practical experiments in the lab. Once the teacher has ensured that they are not going to burn down the lab, it is important to teach them how to conduct an experiment properly and write it up. The first thing is to create a hypothesis. They must write a statement of what they think will happen and why, using the scientific knowledge they have and some assumptions, then carry out the experiment, write up the research and, crucially, evaluate. They must look at the hypothesis and at what they have observed, and decide whether they match. If they do match, they go back to their assumptions and see why they were correct. If they do not match, if what has happened in the lab and been recorded does not match the hypothesis, they need to ask why—“What assumptions did I make that did not bear out in real life, that did not happen in the lab?”
It seems to me that those are the questions that have not been asked throughout this crisis. Perhaps we can understand why assumptions had to be made quickly the first time, for the first lockdown—assumptions that turned out not to be true. My hon. Friend said that perhaps we are repeating the history of 20 years ago, so there is not even that excuse. However, during subsequent waves and restrictions, why were those assumptions not questioned? There were assumptions about how likely the different scenarios were, about people’s behaviour and about fatality rates.
Even in December, when plan B was voted through, some of the assumptions could have been declared wrong in real time—the assumption that omicron was as severe as delta, and that the disease would escape the vaccine. Some of the figures were almost plucked out of the air and given no likelihood. Those assumptions should have been challenged earlier and we need to ask why.
I picked up on one assumption following an interview with Dr Pieter Streicher, a South African doctor. He suggested that SAGE models have always assumed that infection rates do not reach a peak until about 70% of the population have had the disease, whereas the real-world data suggest that the infection rates start to slow at around 30% of the population. That makes more sense from a social science point of view, because we know that people are not equally sociable.
Studies by writers such as Malcolm Gladwell, author of the best-selling “The Tipping Point”, describe the law of the few, whereby a very few people are extremely sociable and pass on a virus, an idea or whatever to many people. Many more people do not socialise as much and are not as good at transmitting. Perhaps we should have looked a lot more at social science—at behaviour and people’s interactions—rather than pure virology and what might happen in a lab. Of course, we do not exist in labs and cannot model the interactions of human beings that easily.
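The heterogeneity point made here and in Dr Streicher's interview can be made concrete with a deliberately simple sketch—a two-group SIR model of my own construction, not his or SAGE's, with every parameter (R0 of 3, recovery rate 0.2, a 20% minority four times as socially active as everyone else) assumed purely for illustration. Holding R0 fixed, the epidemic driven by a sociable minority turns over after a markedly smaller share of the population has been infected than the homogeneous version predicts.

```python
def peak_fraction(activities, weights, r0=3.0, gamma=0.2, dt=0.01, t_max=300.0):
    """Fraction of the population ever infected when daily incidence peaks."""
    mean_a = sum(a * w for a, w in zip(activities, weights))
    mean_a2 = sum(a * a * w for a, w in zip(activities, weights))
    # Under proportionate mixing, R0 = (beta/gamma) * <a^2>/<a>,
    # so beta is chosen here to hold R0 fixed across scenarios.
    beta = r0 * gamma * mean_a / mean_a2
    s = [0.9999 * w for w in weights]   # susceptible fraction per group
    i = [0.0001 * w for w in weights]   # infectious fraction per group
    best, frac_at_peak = 0.0, 0.0
    for _ in range(int(t_max / dt)):
        prevalence = sum(a * x for a, x in zip(activities, i)) / mean_a
        new_inf = [beta * a * sv * prevalence for a, sv in zip(activities, s)]
        if sum(new_inf) > best:
            best = sum(new_inf)
            frac_at_peak = 1.0 - sum(s) / sum(weights)
        s = [sv - dt * ni for sv, ni in zip(s, new_inf)]
        i = [iv + dt * (ni - gamma * iv) for iv, ni in zip(i, new_inf)]
    return frac_at_peak

homogeneous = peak_fraction([1.0], [1.0])              # everyone identical
heterogeneous = peak_fraction([4.0, 1.0], [0.2, 0.8])  # sociable minority
```

In the heterogeneous run the most active people are infected first and depleted early, so total incidence peaks well before the homogeneous threshold of roughly 1 − 1/R0 of the population is reached—the shape of the effect described above, whatever the exact percentages.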
The tragedy is that this was not a paper exercise. This is not an experiment that happened in a lab where one can go back and repeat until valid results are achieved. These models, and particularly the weight they have been given, have caused serious destruction of lives and livelihoods. Who was modelling the outcomes for education, child abuse and poverty? Who was modelling the impact on loneliness, despair and fear? We have to ask why those assumptions were not interrogated.
My hon. Friend the Member for Wycombe (Mr Baker) has made some excellent points about the need for institutional reform. I completely agree with him, but we also need to look at the impact on free speech. At the beginning of this crisis, the mainstream media took on the idea that lockdown was the only strategy.
My hon. Friend spoke earlier about the repeatability of scientific experiments with hypotheses. One of the reasons I talked about C++ is that by using multithreading, it is possible to end up with code that does not produce repeatable outputs. Does she agree that it is very important that when models are run, they produce consistent and coherent outputs that can be repeated?
Thank you for calling me, Sir Edward. My first thought is, thank goodness that health is devolved. It will surprise no one to learn that I will not be joining the libertarian pile-on against scientists led by people who, even in these circumstances in a Chamber this small, still do not use face coverings.
No, I will not. The libertarian right have had enough of a kick at the ball in this debate. [Interruption.] No, I will not give way. At least half of those who have spoken today are not wearing face coverings.
I know that it is customary at this point to thank the Member who secured the debate but, in a break from tradition, I will start by thanking the scientists––the analysts, the medical professionals, the health experts, the clinicians and everyone else who stopped what they were doing two years ago and dedicated their lives to trying to work out and predict where the global pandemic might go and the impact that it could have on us. Two years ago, when tasked with working out this brand-new virus, every step that they took was a step into the unknown. There was no textbook to chart the route of this pandemic and every decision that they took was a new decision. They knew that every piece of advice they gave could have serious consequences for the population. The pressure of doing real-time covid-19 analysis must have been enormous. I, for one, really appreciate that scientists erred on the side of caution in the midst of a global pandemic in which tens of thousands of people were dying when there were no vaccines or booster protection. To all the SAGE officials, scientists, medical staff and public health experts who have done a remarkable job in keeping us safe, I say a huge and unequivocal thank you.
We know and can accept that forecasting and modelling during a pandemic are not an exact science but based on the best available evidence and a series of scenarios, presented from the best to the worst case. As Professor Adam Kucharski of the London School of Hygiene and Tropical Medicine said,
“a model is a structured way of thinking about the dynamics of an epidemic. It allows us to take the knowledge we have, make some plausible assumptions based on that knowledge, then look at the logical implications of those assumptions.”
As the much-maligned Professor Ferguson told the Science and Technology Committee,
“Models can only be as reliable as the data that is feeding into them.”
Of course such models have their limitations. They are not forecasts but mathematical projections based on the data available to modellers. If tests are not being done, or tests are not being registered as positive, for example, the modelling and forecasting can be affected. It is important to remember, however, that while the hon. Member for Isle of Wight (Bob Seely) was telling anyone who would listen that modelling predictions were a national scandal, Professor Chris Whitty was telling the Science and Technology Committee that
“a lot of the advice that I have given is not based on significant forward modelling. It is based on what has happened and what is observable.”
Advice on lockdown and other public health measures was given by SAGE and others on the basis of observable data, not on forecasting modelling alone. I put it to the hon. Member for Isle of Wight that he was quite wrong when he told GB News that
“So much of what’s happened since with…inhuman conditions that many of us struggled with”
was
“built on some really questionable science.”
Professor Whitty said clearly that he did not base his advice on that; rather, he based it on what he could see around him.
The primary purpose of modelling is simply to offer a sense of the impact of different restrictions. A study published in the journal Nature found that the first lockdown saved up to 3 million lives in Europe, including 470,000 in the UK. The success of disease modelling lay in predicting how many deaths there would have been if lockdown had not happened. SAGE officials, scientists and medical staff have done a remarkable job to keep us all safe, and many people across these islands owe their lives to them. I believe that the work that those people have done under enormous pressure should be applauded and appreciated, not undermined by the far-right libertarian Tories we have heard today.
Thank you, Sir Edward; it is a pleasure to serve under your chairship. I congratulate—I think—the hon. Member for Isle of Wight (Bob Seely) on securing the debate, because I welcome impartial and honest interrogation of the science, as well as decisions made over the last two years that have been important for our country. I also welcome extreme scepticism about some of the decisions made by the Government. This debate has not been an honest and independent inquiry into the science, however. It clearly comes with an ideological bent, so it has to be taken in that light.
I also begin by paying tribute to our public servants and Government scientists.
The hon. Gentleman has not even heard what I have to say yet.
The hon. Lady said that we have made points that require an ideological bent. I invite her to look at what I said and identify at least three points that required any kind of ideological justification. Contrary to the point made by the hon. Member for Argyll and Bute (Brendan O'Hara), nothing that I said required libertarian political philosophy.
That was another speech. I have never been in a room with so many software engineers who are also MPs. I begin by paying tribute to our public servants, our Government scientists, epidemiologists, and the scientific community who have worked tirelessly and put everything on the line to keep the public safe. That is what they have been trying to do over the past two years: keep people safe and save lives—and they have. They have shouldered the fear, anguish and hope of an entire nation that was experiencing deep trauma. They have, magnificently, been prepared to put their head on the block, if needs be. I hope the Minister will agree with me that it is very disappointing to hear them come under attack today from certain colleagues, despite everything that they have done.
I would remind those who seek to attack SAGE and our Government scientists that, while they were looking forward, planning and working hard on the evidence of what the virus might throw at us next, it was freshers week in Downing Street. They are not the enemy here. In fact, had a bit more attention been paid to their models, had there been more modelling before the start of the pandemic and had more action been taken in February and March 2020, thousands of lives could have been saved. It is not modelling that is the intrinsic problem here—it is decision making.
Modelling is a hugely important tool for managing epidemics; it is tried and tested, with constant efforts to improve it. I agree with earlier comments that there should be more models: models of the impact on mental health, education and poverty, and models to learn from other countries in order to inform our decisions. As Graham Medley, a member of SAGE, explains very clearly, models are not predictions and are not meant to be seen as such; they are the “what ifs” that can be used by Governments to inform decisions and guide them as to what they might need to prepare for, which should include the worst-case scenarios—that is a crucial distinction. Accurate predictions cannot be made with such an unpredictable virus, when individual behaviour is also unpredictable, so models and scenarios are the best tools to give us the parameters for the decisions that will be made. As Graham Medley said, SPI-M—the Scientific Pandemic Influenza Group on Modelling—the sub-committee of SAGE that he chairs, produced
“scenarios to inform government decisions, not to inform public understanding directly. They are not meant to be predictions of what will actually happen and a range of scenarios are presented to policymakers with numerous caveats and uncertainties emphasised.”
Who would want it any other way?
My question to the sceptical Members present here today is: what is the alternative? We need to have those parameters. The alternative is guessing without parameters and knowledge.
I am going to move on. I do not want another speech from the hon. Member, given the time constraints. I am waiting for the Minister to answer my questions.
The Public Administration and Constitutional Affairs Committee also had problems with the communication of the modelling. It is there that I might have some common ground with the hon. Members who have spoken earlier. The Committee said in its report last March that communication had not always been transparent enough, and accountabilities had been unclear. I agree with this. If the time is not taken carefully to explain to the public and media what modelling actually is, and room is instead allowed for scenarios to be interpreted as predictions, inevitably the practice of modelling and forecasting will be rubbished and scoffed at, and Government scientists blamed as doom-mongers. Not communicating the data and models properly creates more uncertainty and misery for small businesses, which have been asked to bear enough as it is, as we saw over the Christmas period.
I am conscious that I need to leave time at the end, but I will endeavour to get through my speech and take interventions.
It is not, however, and never can be, a crystal ball, regardless of who is doing the modelling. Models cannot perfectly predict the future, and modellers would not claim they do so. Contrary to how they may be presented in the media, modelling outputs are not forecasts, nor do they focus only on the most pessimistic outcomes. Model advice to Government is not simply a single line on a graph.
There is always uncertainty when looking into the future: uncertainty from potential policy changes, the emergence of new variants, and changes in people’s behaviour and mixing. Central to modelling advice is an assessment of this uncertainty, what factors drive it, and how the results might change if the model’s inputs and assumptions change as new evidence emerges. As such, the modellers look at a wide range of possibilities and assumptions in order to advise policy makers on principles, not to attempt to say exactly what will happen.
I am grateful to the Minister for giving way. She heard what I said about my conversation with the Prime Minister—it is, of course, a true account of what happened. The reality is that the Prime Minister was shown a terrifying model that subsequently proved to be wildly incorrect, but he took away freedoms from tens of millions of people on that basis. The Minister must surely agree that that does not accord with the very sensible words that she is saying. That is not what actually happened. The Prime Minister was bounced on the basis of profoundly wrong models.