Monday 11 May 2020

The limits of modelling


The British government has made it clear throughout the Covid-19 crisis that it has been “following the science.” But at this relatively early stage of our understanding of the disease there is no single body of knowledge to draw on. There is a lot that epidemiologists agree on, but there are also areas where they do not. Moreover, the science upon which the UK lockdown is based derives from a paper published almost two months ago, when our understanding of Covid was rather different from what we know now. I was thus fascinated by this BBC report by medical editor Deborah Cohen, who posed questions about the current strategy and interviewed experts in the field who expressed some reservations about how the facts are reported. Whilst the report gave an interesting insight into epidemiology, it also reminded me of the criticism directed at economic forecasting.

One of the most interesting issues to arise from the discussion was the use of models to track the progression of the disease. The epidemiologists quoted were unanimous in their view that models were only useful if backed up by data. As Dame Deirdre Hine, the author of a report on the 2009 H1N1 pandemic, pointed out, models are not always useful in the early stages of a pandemic given the lack of data upon which they are based. She further noted that “politicians and the public are often dazzled by the possibilities that modelling affords” and that models often “overstate the possibilities of deaths in the early stages” of a pandemic. As Hine pointed out, epidemiological models only start to become useful once we implement a thorough programme of tracing and tracking people’s contacts, for only then can we get a decent handle on the spread of any disease.
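
To see why a lack of early data matters so much, consider the toy simulation below. It is my own minimal sketch of a textbook SIR model, not the Imperial model or anything actually used for policy, and every parameter value in it is hypothetical. Small changes to the two inputs that a tracing programme helps to pin down, the reproduction number and the infection fatality rate, produce wildly different projected death tolls.

```python
# A minimal discrete-time SIR sketch (illustrative only -- not any model
# actually used for policy). It shows how the projected death toll swings
# with two inputs that are poorly pinned down early in a pandemic: the
# reproduction number R0 and the infection fatality rate (IFR). All
# parameter values here are hypothetical.

def projected_deaths(r0, ifr, population=66_000_000, days=365,
                     infectious_days=7.0, seed_cases=100):
    """Cumulative deaths implied by one run of a basic SIR model."""
    gamma = 1.0 / infectious_days   # daily rate of leaving the infectious pool
    beta = r0 * gamma               # daily transmission rate implied by R0
    s, i = population - seed_cases, float(seed_cases)
    for _ in range(days):
        new_infections = beta * s * i / population
        s -= new_infections
        i += new_infections - gamma * i
    return ifr * (population - s)   # deaths among everyone ever infected

# Plausible-looking early-stage guesses give very different answers.
for r0 in (2.0, 3.0):
    for ifr in (0.005, 0.015):
        print(f"R0={r0}, IFR={ifr:.1%}: ~{projected_deaths(r0, ifr):,.0f} deaths")
```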

This approach has clear parallels with empirical macroeconomics, where many of the mainstream models used for analytical purposes are not necessarily congruent with the data. Former Bank of England Monetary Policy Committee member Danny Blanchflower gave a speech on precisely this topic back in 2007 with the striking title The Economics of Walking About. The objective of Blanchflower’s speech was to encourage policymakers to look at what is going on around them rather than uncritically accept the outcomes derived from a predetermined set of ideas, and to put “the data before the theory where this seems warranted.”

I have always thought this to be very sensible advice, particularly where DSGE models are used for forecasting purposes. These models are theoretical constructs based on a particular economic structure which rest on a number of assumptions whose existence in the real world is subject to question (Calvo pricing and rational expectations, to name but two). Just as in epidemiology, models which are not consistent with the data do not have a good forecasting record. In fact, economic models do not have a great track record, full stop. But we are still forced to rely on them because the alternative is either not to provide a forecast at all, or simply to make a guess. As the statistician George Box famously said, “all models are wrong, but some are useful.”
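
To make the Calvo assumption concrete: each period a randomly chosen fraction of firms gets to reset its price, and the rest are stuck with last period’s price, whatever has happened in the meantime. The toy simulation below (my own sketch with made-up parameter values, not code from any actual DSGE model) shows how this mechanical assumption produces the gradual, “sticky” adjustment of the aggregate price level that these models rely on.

```python
import random

# A toy illustration of Calvo pricing (my own sketch, not taken from any
# actual DSGE implementation). Each period every firm independently gets
# to reset its price with probability 1 - THETA; otherwise it is stuck
# with last period's price. The aggregate price level therefore adjusts
# to a shock only gradually -- stickiness built in by assumption.

random.seed(1)

THETA = 0.75           # probability a firm is stuck with its old price
N_FIRMS = 10_000
prices = [1.0] * N_FIRMS
optimal_price = 1.10   # suppose a shock raises every firm's optimal price 10%

for period in range(1, 9):
    for j in range(N_FIRMS):
        if random.random() > THETA:   # this firm drew a "reset" this period
            prices[j] = optimal_price
    average = sum(prices) / N_FIRMS
    print(f"period {period}: average price {average:.3f}")
```

Whether firms actually behave like this is, of course, precisely the kind of assumption whose real-world existence is open to question.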

Epidemiologists make the point that models can be a blunt instrument which gives a false sense of security. The researchers at Imperial College whose paper formed the basis of the government’s strategy might well come up with different estimates if, instead of basing their analysis on data derived from China and Italy, they updated their results on the basis of the latest UK data. They may indeed have already done so (though I have not seen it), but this does not change the fact that the government appears to have accepted the original paper at face value. Of course, we cannot blame the researchers for the way in which the government interpreted the results. But having experienced the uncritical media acceptance of economic forecasts produced by the likes of the IMF, it is important to be aware of the limitations of model-driven results.
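
Updating in light of new data need not be exotic. As a stylised illustration (every figure below is invented), a modeller who starts from a prior based on overseas data can fold in later UK observations with a textbook Bayesian update and watch the estimate shift:

```python
# A stylised Bayesian update of an infection fatality rate (every number
# here is invented for illustration). A Beta prior, loosely standing in
# for what was learned from China and Italy, is combined with hypothetical
# UK observations; the estimate moves as the new data arrive.

alpha, beta = 10, 990   # Beta prior: mean 10 / (10 + 990) = 1.0%
print(f"prior IFR estimate:   {alpha / (alpha + beta):.2%}")

# Hypothetical later UK data: 60 deaths among 9,000 resolved infections.
uk_deaths, uk_survivors = 60, 8_940

# Beta-binomial conjugate update: add successes and failures to the prior.
alpha += uk_deaths
beta += uk_survivors
print(f"updated IFR estimate: {alpha / (alpha + beta):.2%}")
```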

Another related issue raised by the epidemiologists is the way in which results are communicated. For example, the government’s strategy is based on the modelled worst-case outcomes for Covid-19, but this has been criticised as misleading because it implies an event which is unlikely rather than one which is close to the centre of the distribution. The implication is that the government based its strategy on a worst-case outcome rather than on a more likely one, with the result that the damage to the economy is far greater than it needed to be. That is a highly contentious suggestion and not one I would necessarily buy into. After all, a government has a duty of care to all its citizens, and if the lives of more vulnerable members of society are saved by imposing a lockdown then it may be a price worth paying.

But it nonetheless raises questions about the way in which potential outcomes are reported. I have made the point (here) in an economics context that whilst we need to focus on the most likely outcomes (e.g. for GDP growth projections), there is a wide range of possibilities around the central case which we also need to account for. Institutions that prepare forecast fan charts recognise that there are alternatives around the central case to which we can ascribe a lower probability. Whilst the likes of the Bank of England have in the past expressed frustration that too much emphasis is placed on the central case, they would be far more concerned if the worst-case outcomes grabbed all the attention. The role of the media in reporting economic or financial outcomes does not always help. How often do we see headlines reporting that markets could fall 20% (to pick an arbitrary figure) without any discussion of the conditions necessary to produce such an outcome? The lesson is that we need to be aware of the whole range of outcomes but apply the appropriate probability weights when reporting them.
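
The fan-chart logic is straightforward to make concrete. The sketch below (with purely illustrative numbers of my own) simulates a distribution of GDP growth outcomes and reports percentile bands: the median is the headline number, while the 5th-percentile outcome is real but deserves its roughly one-in-twenty weight.

```python
import random
import statistics

# Illustrative fan-chart arithmetic (all figures invented). Simulate a
# distribution of GDP growth outcomes, then report percentile bands so
# that tail scenarios are visible but carry their proper weight.

random.seed(42)
outcomes = sorted(random.gauss(1.5, 2.0) for _ in range(100_000))

def percentile(sorted_data, p):
    """Value below which roughly p% of the simulated outcomes fall."""
    return sorted_data[int(p / 100 * (len(sorted_data) - 1))]

print(f"central case (median): {statistics.median(outcomes):+.1f}% growth")
for p in (5, 25, 75, 95):
    print(f"{p:>2}th percentile:      {percentile(outcomes, p):+.1f}%")
```

Reporting only the 5th percentile would be the fan-chart equivalent of the “markets could fall 20%” headline.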

None of this is to criticise the efforts of epidemiologists to model the spread of Covid-19. Nor is it necessarily to criticise the government’s interpretation of that work. But it does highlight the difficulties inherent in forecasting outcomes based on models using incomplete information. As Niels Bohr reputedly once said, “forecasting is hard, especially when it’s about the future.” He might have added, “but it’s impossible without accurate inputs.”

2 comments:

  1. Several thoughts. First of all, I’m broadly sympathetic, so the rest of this is nit-picking.

    I don’t agree that “models are not always useful in the early stages of a pandemic given the lack of data upon which they are based.” All actions are based on “models.” Some are better than others. If “models often overstate the possibilities of deaths in the early stages,” they are just bad models.

    The Imperial College model was a conditional forecast, “What happens if we do nothing/x, y, z?” I think that the uncertainties around the outcomes were not well communicated/understood.

    One valid criticism of the model on its own terms, I think, is that the model did not include endogenous changes to its parameters; when people see infection rates rising, they start taking precautions against infecting others and becoming infected. It’s true that these reactions are also unknown, but not to include them is to assume they are zero.
    This is another aspect of the model that was not well communicated/understood.

    I have heard the criticism, though I am certainly not competent to evaluate it, that the structure of the model does not make it possible/easy to change its structural parameters (which initially have to be “best guesses”) when new data become available and to indicate which data would be most important to update. Another criticism is that they did not release the model that the famous paper was based on.

    I think the problem with DSGE models is not necessarily Calvo pricing or rational expectations but poorly specified or understood policy conditions. I therefore very much agree with “How often do we see headlines reporting that markets could fall 20% (to pick an arbitrary figure) without any discussion of the conditions necessary to produce such an outcome?” [My pet peeve right now is “forecasts” in the US about what will happen with opening up or not that do not take account of endogenous behavior (see above) and Fed policy.]

    Wasn’t it Yogi Berra, not Niels Bohr, who said that? 😊

    Replies
    1. Thanks for taking the time to comment. I think maybe the epidemiologists were a bit harsh on the Imperial model, but the criticism, which I did not bring out enough, was that it was not the only available model, yet it formed the basis of the government's policy. So maybe there was a bit of professional jealousy involved. However, it is fair, I believe, to level the criticism that it has not been continually updated in light of new data, as any decent Bayesian would have done.

      And although I mangled the quote slightly, it was supposed to be Bohr who said it first: https://www.brainyquote.com/quotes/niels_bohr_130288 (though who knows whether he did)
