Covid-19: Forecasting and Modelling Debate

Department: Department of Health and Social Care


Tuesday 18th January 2022


Westminster Hall

Westminster Hall is an alternative Chamber for MPs to hold debates, named after the adjoining Westminster Hall.

Each debate is chaired by an MP from the Panel of Chairs, rather than the Speaker or Deputy Speaker. A Government Minister will give the final speech, and no votes may be called on the debate topic.

This information is provided by Parallel Parliament and does not comprise part of the official record

Bob Seely (Isle of Wight) (Con)

I beg to move,

That this House has considered forecasting and modelling during covid-19.

It is a pleasure to speak under your chairmanship, Sir Edward. I speak not to bury science, but to praise it. During the covid pandemic, there has been some remarkable, wonderful science; I just question to what extent that includes the modelling and forecasts that have come from it. Thanks to some questionable modelling that was poorly presented and often misrepresented, never before has so much harm been done to so many by so few based on so little questionable and potentially flawed data.

I believe that the use of modelling is pretty much getting to be a national scandal. That is not just the fault of the modellers; it is how their work was interpreted by public health officials and the media—and yes, by politicians, including the Government, sadly. Modelling and forecasts were the ammunition that drove lockdown and created a climate of manipulated fear. I believe that that creation of fear was pretty despicable and unforgivable. I do not doubt that modelling is important or that there has been some good modelling, but too often it has been drowned out by hysterical forecasts. I am not, as Professor Ferguson implied, one of those with an “axe to grind”. I do, however, care about truth and believe that if someone influences policy, as the modellers and Imperial College London have done, they should be questioned. Frankly, they have not been questioned enough.

Above all, I want to understand why Government, parts of the media and the public health establishment became addicted to these doomsday scenarios, and then normalised them in our country with such depressing and upsetting consequences for many. I do not pretend to be an expert; I am not. I defended my own PhD at the end of last year, but it is not in epidemiology and I do not pretend to be particularly knowledgeable about that. But depending on time—I know others want to come in as well—I will quote from 13 academic papers and 22 articles authored by a total of approximately 100 academics.

This is a story of three scandals, and the first one took place 21 years ago. In 2001, we faced the foot and mouth emergency. We reacted drastically by slaughtering and burning millions of animals, and farmer suicides and bankruptcies followed. That policy was allegedly heavily influenced by Imperial College modelling and Professor Ferguson. Since foot and mouth, two peer-reviewed studies examined the method behind that particular madness. I quote from them now to show there are practical and ethical questions over modelling going back two decades.

In a 2006 paper, and I apologise for these wordy, long titles, titled “Use and abuse of mathematical models: an illustration from the 2001 foot and mouth disease epidemic in the United Kingdom”—they are not that catchy—the authors confirmed that Ferguson’s model

“probably had the most influence on early policy decisions”

and

“specifically, the introduction of the pre-emptive contiguous culling policy”.

That is the mass slaughter of animals near infected farms. The authors said that the consequences were “severe” and

“the models were not fit for the purpose of predicting the course of the epidemic”

—not a good start. They remain “unvalidated”. Their use was “imprudent” and amounted to

“the abuse of predictive models”.

Devastatingly, the authors wrote

“The UK experience provides a salutary warning of how models can be abused in the interests of scientific opportunism.”

It is difficult to find a more damning criticism of one group of scientists by another.

A 2011 paper, “Destructive tension: mathematics versus experience—the progress and control of the 2001 foot and mouth disease epidemic in Great Britain”—bit of a mouthful—by four academics said the models that supported the culling policy were “severely flawed” and based on flawed data with “highly improbable biological assumptions”. The models were

“at best, crude estimations that could not differentiate risk”.

That is not a very good “at best”. At worst, they were “inaccurate representations”.

Sadly, the paper said, impatience for results

“provided the opportunity for self-styled ‘experts’, including some veterinarians, biologists and mathematicians, to publicise unproven novel options.”

Some of the scientific work—some of it modelling, some of it not, with some modelling by Ferguson and some not—was cited as “unvalidated” and “severely flawed”, with inaccurate data on “highly improbable assumptions” leading to “scientific opportunism”. Is anybody reminded of anything more recent that would suggest the same?

I scroll forward 20 years. As with foot and mouth, with covid we had a nervous Government presented with doomsday scenarios by Imperial—the 500,000 dead prediction—that panicked them into a course of profound action with shocking outcomes. After the lockdown had gone ahead, Imperial publicised on 8 June a study by, I think, seven of them arguing the justification for lockdown. It claimed that non-pharmaceutical interventions saved more than 3 million lives in Europe. Effectively, Imperial marked its own homework and gave itself a big slap on the back.

That work is now being challenged. Because of time, I will quote only a small selection. In a paper entitled, “The effect of interventions on COVID-19”, 13 Swedish academics—Ferguson ain’t popular in Sweden, I can tell Members that much—said that the conclusions of the Imperial study were not justified and went beyond the data. Regensburg and Leibniz university academics directly refuted Imperial College in a paper entitled “The illusory effects of non-pharmaceutical interventions on COVID-19 in Europe”, which said that the authors of the Imperial study

“allege that non-pharmaceutical interventions imposed by 11 European countries saved millions of lives. We show that their methods involve circular reasoning. The purported effects are pure artefacts, which contradict the data. Moreover, we demonstrate that the United Kingdom’s lockdown was both superfluous and ineffective.”

I am not saying that this stuff is right; I am just saying that there is a growing body of work that is, frankly, taking apart Imperial's work. Remember, we spent £370 billion on lockdown that we will never get back. I could continue with other quotes, but I think Members get the flavour.

Moreover, a substantial number of other papers now question not Imperial per se but the worth generally of lockdowns. A pre-print article by four authors, “Effects of non-pharmaceutical interventions on COVID-19: A Tale of Three Models”, said:

“Claimed benefits of lockdown appear grossly exaggerated.”

In another paper, three authors found no clear, significant benefit of lockdowns on case growth in any country. Other papers continue that theme. I will quote one more, on adults living with kids. Remember: we shut schools because we were scared that kids would come home and infect older people, who would then die. This paper, in The BMJ, found

“no evidence of an increased risk of severe COVID-19 outcomes.”

We shut down society and schools just in case, doing extraordinary harm to people’s lives, especially young people. I am not a lockdown sceptic, as Ferguson casually describes some of his critics, but I am becoming so. Do you know why, Sir Edward? Because I read the evidence, and there is a growing body of it. In fact, there is one quote that I did not read out. There was a study of lots of countries that had lockdowns and lots that did not, and the data was inconclusive.

The third element of the scandal is the recent modelling. Swedish epidemiologists looked at Imperial’s work and compared it with their own experience. Chief epidemiologist Anders Tegnell said of Imperial’s work that

“the variables…were quite extreme…We were always quite doubtful”.

Former chief epidemiologist Johan Giesecke said Ferguson’s model was “almost hysterical”. In the House of Lords, Viscount Ridley talked of a huge discrepancy and flaws in the model and the modelling. John Ioannidis from Stanford University said that the “assumptions and estimates” seemed “substantially inflated”.

There was a second example last summer. In July 2021, the good Professor Ferguson predicted that hitting 100,000 cases was “almost inevitable”. He told the BBC that the real question was whether we got to double that or even higher. That is where the crystal ball starts to fail: we got nowhere near 200,000, and we got nowhere near 100,000. There was nothing inevitable about Professor Ferguson’s inevitability, and his crystal ball must have gone missing from the start. In The Times, he blamed the Euros for messing up his modelling because—shock horror—people went to pubs a lot to watch the games during the competition. When the tournament finished—shock horror—they did not. That seems to be the fundamental problem: where reality comes up against models, reality steamrollers them because models cannot cope with the complexity of real life. To pretend that they can and even that they are useful, when so many of them have proved not to be, is concerning.

Ferguson is only one of many people in Independent SAGE especially, but also SAGE, who did not cover themselves in glory. Raghib Ali—a friend of my hon. Friend the Member for Wycombe (Mr Baker), who I am delighted is present—is one of the heroes of covid. He noted that many left-wing SAGE members

“repeatedly made inaccurate forecasts overestimating infections”.

Very often, they were falsely described on the BBC.

Mr Steve Baker (Wycombe) (Con)

I am grateful to my hon. Friend for mentioning my friend and constituent Raghib Ali, who has indeed been one of the absolute heroes of this pandemic—not only in his advice to us all, including online, but through his service in hospitals. I hope my hon. Friend will not mind my saying that I do not think any of us can speak for Raghib about his opinion of modelling, and I know my hon. Friend is not trying to.

Bob Seely

I quite agree, and I thank my hon. Friend for that, but I am deeply grateful to Raghib and other people for speaking out. Just for the record, the communist Susan Michie, who is quoted quite often by the BBC, is not a medical doctor, a virologist or an epidemiologist. She is a health psychologist, so why on earth is she talking about epidemiology?

The third scandal took place this winter. Imperial, the London School of Hygiene and Tropical Medicine and others—I think they included Warwick—predicted 5,000 daily covid deaths, with 3,000 daily deaths as the best-case scenario. They were hopelessly inaccurate, and at this point the tide really begins to turn. Dr Clive Dix, a former vaccine taskforce head, said:

“It’s bad science, and I think they’re being irresponsible. They have a duty to reflect the true risks, but this is just headline grabbing.”

As I say, the tide is turning. Oncology professor Angus Dalgleish describes Ferguson’s modelling as “lurid predictions” and “spectacularly wrong”. The great Carl Heneghan, another scientist who has emerged with great credit for his honesty and fairness of comment, says:

“it’s becoming clearer all that ministers see is the worst-case scenario”.

Professor Brendan Wren says:

“Dodgy data and flawed forecasts have become the hallmarks of much of the scientific establishment”—

what a damning quote!—

“which has traded almost exclusively in worst-case scenarios...this must stop now.”

I agree.

I will wind up in the next two to three minutes—I will speak for no longer than 15 minutes because other people wish to get in, and I am always mindful of that. What is the result of all this? The result, as UCL’s Professor Francois Balloux notes, is a

“loss of trust in government and public institutions for crying wolf.”

That is just it. We have had hysterical forecasts, models taken out of context, and worst-case scenarios normalised.

In the Army, there is something called the most dangerous course of action, and there is something called the most likely course of action. To sum up in one sentence how we got this wrong, we have effectively taken the most dangerous course of action and collectively—the politicians, media, scientists and health professionals—presented that as the most likely course of action, but it was not. Why did politicians say, “Follow the science” as a way of shutting down debate, when we know that science is complex and that our outputs are dependent on our inputs? It was down to public-health types, whose defensive decision making would only ever cost other people’s jobs, other people’s health, other people’s sanity, other people’s education and other people’s life chances.

We know that the Opposition supported lockdown from the word go, but a little more opposing might have been helpful. The BBC and the Guardian have been salivating at state control and doomsday scenarios. Against this tsunami of hysteria and fear, thank God for The Spectator, The Telegraph and, yes, the Daily Mail for keeping alive freedom of speech and putting forward an alternative, which is now being increasingly scientifically vindicated. I accept that lockdown was understandable at first—I get that—but I believe the continuation of lockdown after that first summer was an increasingly flawed decision.

In wrapping up, I have some questions. To Professor Ferguson and the doomsday modellers: why are so many of your fellow academics disputing your work and your findings? To the BBC, as our state broadcaster: why did you so rarely challenge Ferguson, SAGE or Independent SAGE? Why did we misrepresent experts, and why did the BBC allow itself to become the propaganda arm of the lockdown state? To the Government: how could we have been so blinkered that we thought that following the science meant shutting down scientific debate? Why did we never use other datasets in contexts with the British people, or even in contexts in which these profound and enormous decisions were made? Why did we think that it was in our nation’s interests to create a grotesque sense of fear to manipulate behaviour? SAGE and Independent SAGE kept on getting it wrong. To the public health types, I quote from Professor Angus Dalgleish again:

“Flailing around, wallowing in hysteria, adopting impractical policies and subverting democracy, the Chief Medical Officer is out of his depth. He has to go if we are ever to escape this nightmare.”

He is not a journalist; he is an oncologist—a senior oncologist.

Twice in 20 years, we have made some pretty profound and significant errors of judgment, using modelling as a sort of glorified guesswork. I suggest respectfully to the Government that, after foot and mouth and covid, never again should they rely on dubious modelling, regardless of the source and regardless of the best intent. I am sure that Imperial and all these other people do the best that they can, and am very happy to state that publicly. However, why has so much of their work been described—and I will use the words of other academics—as “unvalidated”, “flawed”, “not fit for purpose”, “improbable”, “almost hysterical”, “overconfident”, “lurid”, “inflated”, “pessimistic”, “spectacularly wrong”, “fraudulent” and as “scientific opportunism”?

Several hon. Members rose—

--- Later in debate ---
Mr Steve Baker (Wycombe) (Con)

Thank you very much, Sir Edward. I begin by referring to the declarations that I have made in connection with the Covid Recovery Group.

I am a professional aerospace and software engineer—at least I was in my former life. I have an MSc in computer science, and am very interested in models. However, there is an old joke among engineers, which derives from a “Dilbert” cartoon, that the career goal of every engineer is not to be blamed for a major catastrophe. I wonder whether that spirit infuses not only expert advice but modelling in particular. We are all indebted to The Spectator for its data hub, which shows how data has worked out against models. As anyone can see by going to data.spectator.co.uk, it is the same story again and again: enormous great molehills of death projections, and underneath them the reality of much lower lines. I will leave it to people to look for themselves at the data, rather than trying to characterise the curves for Hansard.

There is a great deal to be done in terms of institutional reform of the way in which modelling is done and informs public policy. That is a very old problem; I found a great article in Foreign Affairs that goes back a long time, to the post-war era, about this problem. It is time we did something about it, through institutional reform. The situation is now perfectly plain: under the Public Health (Control of Disease) Act 1984, even our most basic liberties can be taken away with a stroke of a pen if a Minister has been shown sufficiently persuasive modelling—not even data—that tells them that there is trouble ahead.

I have put this on the record before, and I hope that my right hon. Friend the Prime Minister will not mind. Before we went into the 2020 lockdown, he called me; I was amazed to be at home and to have the Prime Minister of the UK call me. “Steve, I have been shown death projections—4,500 people a day and the hospitals overwhelmed.” I gave him two pieces of advice: “First, if you really believe that we are going to have 4,500 people a day die, you’d better do whatever it takes to prevent that from happening,” which is not advice that anyone would have expected me to give, but that is what I said, and, “Secondly, for goodness’ sake, go and challenge the advice—the data.”

That is why Carl Heneghan, Raghib Ali, Tim Spector and I, whether in person or virtually, were seen in Downing Street, and were there to challenge the data. By Monday, Carl Heneghan had taken the wheels off those death projections, by which the Prime Minister had, disgracefully, been bounced, using a leak, into the lockdown. That is absolutely no way to conduct public policy. However, the reason someone—we will not speculate who—bounced the Prime Minister is that they had been shown those terrifying death projections, which could not possibly be tolerated. Those projections were wrong.

It is monstrous that millions of people have been locked down—effectively under house arrest—have had their businesses destroyed and have had their children prevented from getting an education. Any of us who visit nursery schools meet children, two-year-olds, who have barely socialised. We cannot even begin to understand the effects on the rest of their lives. It is not the modellers’ fault, and I do not wish to condemn modellers. They are technical people, doing a job they are asked to do. We have to ask them to do a different and better job—one which does not leave them, like the old joke about engineers, afraid of being responsible for a major catastrophe.

As my friend Professor Roger Koppl said in his book “Expert Failure”, experts have all the incentives to be pessimistic because if they are pessimistic and events turn out better, they are not blamed. I am sorry: I am not blaming them personally, but I am blaming the whole system for allowing this to arise. The extraordinarily pessimistic models plus the bouncing of a Prime Minister did so much harm.

We need to conduct institutional reform. In relation to models, Mike Hearn, a very senior software engineer, has published a paper available on my website. It is a summary of methodological issues in epidemiology. There are about seven points—an extraordinary set of arguments: things such as poor characterisation, statistical uncertainty and so on, which I have no time to get into. The fundamental point is that we must now have an office of research integrity. The job of that office would be to demand—to insist—that the assumptions going into models and the nature of the models themselves were of a far higher quality.

Finally, to go back to an area of my own expertise, I encourage any software engineer to look at the model code that was released.

Bob Seely

Where does my hon. Friend believe that body should sit: within BEIS or a separate scientific institution?

Mr Baker

I think it should be in the Cabinet Office, because we see that scientific advice applies right across Government.

The code quality of the model that was released was really not fit for a hobbyist. The irony is that the universities that do modelling will overwhelmingly have computer science departments. For goodness’ sake, I say to modellers, go and talk to software engineers and produce good quality code. For goodness’ sake, stop using C++. People are using, as they so often do, the fastest computer programming language, but also the most sophisticated and dangerous. As a professional software engineer, the first thing I would say is, “Don’t use C++ if you don’t have to. Models don’t need to; they can run a bit slower. Use something where you can’t make the desperately poor quality coding errors that were in that released model code”. That is really inexcusable and fulfils all the prejudices of software engineers against scientists hacking out poor quality code not fit for hobbyists. As I think people can tell, I feel quite strongly about that, precisely because these poor modellers have had unacceptable burdens placed on them. All the incentives for them to be pessimistic can now be seen in the data. This all has to be changed with an office of research integrity.

--- Later in debate ---
Aaron Bell (Newcastle-under-Lyme) (Con)

It is a pleasure to see you in the Chair, Sir Edward, and to follow all my hon. Friends, who I note have usually been in a different Lobby from me on most coronavirus measures. I am sure the Minister will be grateful to have somebody speaking from the Government Benches who has been supporting the Government on coronavirus throughout.

However, I too have issues with modelling, which is why I chose to speak in today’s debate. I have more sympathy with modelling, and I will be offering some sort of partial defence and explanation of it in my remarks, because before I was an MP, I was a modeller myself—a software engineer. I wrote in Visual Basic.NET, which is nice and simple: engineers can see what the code does. I worked for bet365, and I used to write models that worked out the chance of somebody winning a tennis match, a team winning a baseball game, or whatever. I had some advantages that Neil Ferguson and these models do not have, in that there are many tennis matches, and I could repeat the model again and again and calibrate it. If I got my model wrong, there were people out there who would tell me that it was wrong by beating me and winning money off me, so my models got better and better.

The problem we have with covid is that we cannot repeat that exercise—there is no counterfactual. We have heard the phrase “marking your own homework”.

Bob Seely

I am deeply impressed by all this stuff— I do not quite understand what my hon. Friends are talking about, but it sounds fantastic. However, there is a counterfactual. The counterfactual is when people say, “We are not going to follow the lockdown,” and hey presto! we do not get 3,000 or 5,000 deaths a day and all the people who predicted that are proved wrong. There is a counterfactual called real life.

Aaron Bell

I thank my hon. Friend for his point, and I accept it, but the problem is that none of these models model changes in human behaviour. We discussed this issue during our debate on the measures that we brought in before Christmas, and as I said at the time, the reality was that people were not going to the pub, the supermarket or anything because they were changing their behaviour in the face of the virus. If the models do not take that into account, they cannot know where the peak will be. The models show what would happen if nobody changed their behaviour at all, but of course, the reality is that people do. We have not got good enough at modelling that, because we do not know exactly how people change their behaviour.

As a tangential point, behavioural science has had a really bad pandemic. We were told that people would not stand for lockdowns, but—to the chagrin, I am sure, of many of my hon. Friends—people did stand for them. Looking at the polling, they were incredibly popular: they were incredibly damaging, as colleagues have said, but people were prepared to live with lockdowns for longer than the scientists thought they would. There was initially an attempt to time the lockdown, because people would not last for that long. In reality, that is not what happened, so behavioural science also has a lot to answer for as a result of the pandemic.

I think that models still have value. My biggest concern arising from the experience of the pandemic is the bad parameters that have gone into those models at times—I will refer to two particular examples.

The time when I was nearest to following my colleagues into the Lobby was the extension to freedom day in June, because on that day we had a session of the Science and Technology Committee, which has taken excellent evidence throughout; it has a session on reproducibility in science tomorrow, where we will also look at this sort of thing. On the day of that vote, I was questioning Susan Hopkins and we were considering vaccine effectiveness. Public Health England had just produced figures showing that the actual effectiveness against hospitalisation of the Pfizer vaccine was 96%, yet the model that we were being asked to rely on for the vote that day said it was 89%. Now, 89 to 96 may not sound like a huge difference, but it is the difference between 4% of people going to hospital and 11%, which is nearly three times higher. It was ludicrous that that data was available on that day but had not yet been plugged into the models. As I said to my hon. Friend the Member for Penistone and Stocksbridge (Miriam Cates), that was one of the reasons that I said in the Chamber that the case was getting weaker and weaker, and that if the Government tried to push it back any further, I would join my colleagues in the Lobby on the next occasion.

The other case is with omicron. Just before Christmas, we had these models that basically assumed that omicron was as severe as delta. We already had some evidence from South Africa that it was not, and since then we have discovered that it was even better than we thought. That feeds into what my hon. Friend was saying about the total number of people who are susceptible. The fact that omicron has peaked early is not because people have changed their behaviour but because the susceptible population was not as big as we thought: more people had been exposed, more people have had asymptomatic disease. There are all those sorts of problems there.

More philosophically, my models when I worked for a bookmaker were about probabilities. Too often we focus on a single line and too often that has been the so-called worst-case scenario. Well, the worst-case scenario is very black indeed at all times, but Governments cannot work purely on a worst-case scenario; they have to come up with a reasonable percentile to work with, whether it is 95% or 90%. Obviously, it must be tempered by how bad the scenario would be for the country. The precautionary principle is important and we should take measures to protect against scenarios that have only a 5% chance of happening or indeed a 2% chance, but we should do that only if the insurance price that we pay––the premium for doing that––is worth paying. That comes down to the fact that not many economic models have been plugged in, as my hon. Friend the Member for Wycombe (Mr Baker) has repeatedly said in the Chamber and elsewhere throughout.

Any Government must try to predict the course of a pandemic to make sensible plans and I believe that the best tool for that is still modelling, but we must learn the lessons of this pandemic. We must learn from shortcomings such as the failure to understand human behaviour properly, the failure to make code open source so that other people can interrogate a model and change the parameters, and the failure to enter the right parameters and update the model at the moment politicians are being asked to vote on it. For all those reasons, I am grateful for today’s debate and look forward to hearing the Opposition spokespeople and the Minister. I thank my hon. Friend the Member for Wycombe for today’s debate.

--- Later in debate ---
Brendan O'Hara

No, I will not. The libertarian right have had enough of a kick at the ball in this debate. [Interruption.] No, I will not give way. At least half of those who have spoken today are not wearing face coverings.

I know that it is customary at this point to thank the Member who secured the debate but, in a break from tradition, I will start by thanking the scientists––the analysts, the medical professionals, the health experts, the clinicians and everyone else who stopped what they were doing two years ago and dedicated their lives to trying to work out and predict where the global pandemic might go and the impact that it could have on us. Two years ago, when tasked with working out this brand-new virus, every step that they took was a step into the unknown. There was no textbook to chart the route of this pandemic and every decision that they took was a new decision. They knew that every piece of advice they gave could have serious consequences for the population. The pressure of doing real-time covid-19 analysis must have been enormous. I, for one, really appreciate that scientists erred on the side of caution in the midst of a global pandemic in which tens of thousands of people were dying when there were no vaccines or booster protection. To all the SAGE officials, scientists, medical staff and public health experts who have done a remarkable job in keeping us safe, I say a huge and unequivocal thank you.

We know and can accept that forecasting and modelling during a pandemic are not an exact science but based on the best available evidence and a series of scenarios, presented from the best to the worst case. As Professor Adam Kucharski of the London School of Hygiene and Tropical Medicine said,

“a model is a structured way of thinking about the dynamics of an epidemic. It allows us to take the knowledge we have, make some plausible assumptions based on that knowledge, then look at the logical implications of those assumptions.”

As the much-maligned Professor Ferguson told the Science and Technology Committee,

“Models can only be as reliable as the data that is feeding into them.”

Of course such models have their limitations. They are not forecasting modelling but mathematical projections based on the data available to modellers. If the tests are not being done, or tests are not being registered as positive, for example, the data modelling and forecasting can be affected. It is important to remember, however, that while the hon. Member for Isle of Wight (Bob Seely) was telling anyone who would listen that modelling predictions were a national scandal, Professor Chris Whitty was telling the Science and Technology Committee that

“a lot of the advice that I have given is not based on significant forward modelling. It is based on what has happened and what is observable.”

Advice on lockdown and other public health measures was given by SAGE and others on the basis of observable data, not on forecasting modelling alone. I put it to the hon. Member for Isle of Wight that he was quite wrong when he told GB News that

“So much of what’s happened since with…inhuman conditions that many of us struggled with”

was

“built on some really questionable science.”

Professor Whitty said clearly that he did not base his advice on that; rather, he based it on what he could see around him.

The primary purpose of modelling is simply to offer a sense of the impact of different restrictions. A study published in the journal Nature found that the first lockdown saved up to 3 million lives in Europe, including 470,000 in the UK. The success of disease modelling lay in predicting how many deaths there would have been if lockdown had not happened. SAGE officials, scientists and medical staff have done a remarkable job to keep us all safe, and many people across these islands owe their lives to them. I believe that the work that those people have done under enormous pressure should be applauded and appreciated, not undermined by the far-right libertarian Tories we have today.

Bob Seely

Shame!

Brendan O'Hara

Oh, I am glad that you shout, “Shame!”

Bob Seely

rose—

Sir Edward Leigh (in the Chair)

Order. Bob, will you calm down, please? Will everybody calm down?

Bob Seely

I do not appreciate being called “far right”.

--- Later in debate ---
Fleur Anderson

No. The PACAC report makes it clear that no one in Government has taken responsibility for communicating the data. The report states:

“Ministerial accountability for ensuring decisions are underpinned by data has not been clear. Ministers have passed responsibility between the Cabinet Office and Department of Health and Social Care”.

That is why, as a member of the shadow Cabinet, I am responding to this debate. There are questions about the use and communication of the data.

I want to come to why we needed to rely so heavily on modelling and forecasting. The problem lies in the significant mistakes made throughout the last 10 years of Conservative government. We could have been much better informed if there had been better pandemic and emergency preparedness.

Bob Seely

Will the hon. Member give way?

Greg Smith

Will the hon. Member give way?

--- Later in debate ---
Bob Seely

I think that, with one exception, that was a very good debate. We all agree that we need good science, and we all agree that scientists have power, like politicians. We have the right, in the public interest, to question these people. It was fascinating listening to some of my hon. Friends; I am not quite sure what they were saying, but it sounded amazing. I am also delighted to agree with the hon. Member for Putney (Fleur Anderson) that, as part of the inquiry, we need to look into the use of modelling, so that if mistakes have been made, with great respect to those who say otherwise, we can learn from that experience, not make those mistakes again, and ensure that the modelling works for the public good, as all good science and all good policy should do.