Sunday 15 April 2018

Fair weather forecasting

The economics profession has had to endure some bad press over the last decade, first in the wake of the global financial crisis, which we failed to foresee, and, in the UK's case, after the dire predictions in the aftermath of the Brexit referendum were not borne out. But to cite these two examples is to miss the point entirely. Economics is not a predictive discipline – as I have noted countless times before – so criticising economists for failing to predict macroeconomic outcomes is a bit like criticising doctors for failing to predict when people will fall ill.

Another narrative which has become commonplace in recent years is the notion that economists predict with certainty. This fallacy was repeated recently in a Bloomberg article by Mark Buchanan entitled Economists Should Stop Being So Certain. In fact, nothing could be further from the truth. The only thing most self-respecting economic forecasters know for certain is that their base case is more likely to be wrong than right. The Bank of England has for many years presented its economic growth and inflation forecasts in the form of a fan chart in which the bands get wider over time, reflecting the fact that the further ahead we forecast, the greater the forecast uncertainty (chart). Many other institutions now follow a similar approach, treating forecasts as probabilistic outcomes rather than a single point estimate.
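
For readers who have not met one, the mechanics of a fan chart are easy to sketch. The Python snippet below is a minimal illustration only: it simulates an assumed AR(1) growth process (an arbitrary toy, not the Bank of England's actual model) and plots percentile bands that widen with the forecast horizon:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Illustrative AR(1) process for GDP growth -- a stand-in chosen for
# simplicity, NOT the Bank of England's model or parameters.
n_paths, horizon = 10_000, 12        # quarters ahead
phi, mu, sigma = 0.8, 2.0, 0.5       # persistence, mean growth, shock s.d.

paths = np.empty((n_paths, horizon))
g = np.full(n_paths, 2.0)            # assume growth currently at 2%
for t in range(horizon):
    g = mu + phi * (g - mu) + rng.normal(0.0, sigma, n_paths)
    paths[:, t] = g

# Percentile bands widen with the horizon: that widening is the "fan".
x = np.arange(1, horizon + 1)
for lo, hi in [(5, 95), (20, 80), (35, 65)]:
    plt.fill_between(x,
                     np.percentile(paths, lo, axis=0),
                     np.percentile(paths, hi, axis=0), alpha=0.3)
plt.plot(x, np.median(paths, axis=0), color="k")
plt.xlabel("quarters ahead"); plt.ylabel("GDP growth, %")
plt.show()
```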

Indeed, if there is a problem with certainty in economic forecasting, it is that media outlets tend to ascribe it to economic projections. It is, after all, a difficult story to sell to their readers that economists assign a 65% probability to a GDP growth outcome of 2%. As a consequence, the default option is to reference the central case alone.

One of the interesting aspects of Buchanan’s article, however, was the reference to the way in which the science of meteorology has tackled the problem of forecast uncertainty. This was based on a fascinating paper by Tim Palmer, a meteorologist, looking back at 25 years of ensemble modelling. The thrust of Palmer’s paper (here) is that uncertainty is an inherent part of forecasting, and that an ensemble approach, which runs the model many times from slightly different sets of initial conditions, has been shown to improve the accuracy of weather forecasts. In essence, inherent uncertainty is treated as a feature that can be used to improve forecast accuracy rather than as something to be avoided.
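
The ensemble idea itself is simple to demonstrate. The sketch below uses the Lorenz-63 equations, a standard toy model of chaotic convection, as a stand-in for a real weather model (the perturbation size and member count are arbitrary choices of mine). Running the model from fifty near-identical starting points shows the ensemble spread growing with the horizon, and it is that spread which becomes the measure of forecast uncertainty:

```python
import numpy as np

def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # One Euler step of the Lorenz-63 system -- a classic toy model of
    # chaotic dynamics, not an actual weather model.
    x, y, z = state
    dx = s * (y - x)
    dy = x * (r - z) - y
    dz = x * y - b * z
    return state + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])

# Ensemble: the same model run from slightly perturbed initial conditions.
n_members, n_steps = 50, 1500
ensemble = base + rng.normal(0.0, 1e-3, size=(n_members, 3))

spread = []
for t in range(n_steps):
    ensemble = np.array([lorenz_step(m) for m in ensemble])
    spread.append(ensemble.std(axis=0).mean())

# A tight ensemble implies a confident forecast; a divergent one
# signals low predictability.
print(f"spread after 1 step:     {spread[0]:.2e}")
print(f"spread after {n_steps} steps: {spread[-1]:.2e}")
```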

In fairness, economics has already made some progress on this front in recent years. We can think of forecast error as deriving from two main sources: parameter uncertainty and model uncertainty. Parameter uncertainty arises because, even if we are using the correct model, its parameters may be imprecisely estimated or we may have conditioned it on the wrong assumptions. We can try to account for this using stochastic simulation methods[1] which subject the model to a series of shocks and give us a range of possible outcomes that can be represented in the form of a fan chart. Model uncertainty raises the possibility that our forecast model may not be the right one to use in a given situation and that a different one may be more appropriate. The academic literature in recent years has therefore focused on the question of combining forecasts from different models and weighting the outcomes in a way which provides useful information[2], although this has not yet found its way into the mainstream.
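
To make the combination idea concrete, here is a deliberately crude sketch in the spirit of Bayesian Model Averaging: three AR models of different order are fitted to an invented growth series, and their one-step forecasts are weighted by exp(-BIC/2), a common rough proxy for posterior model probability. The data, model set and weighting scheme are all illustrative assumptions rather than anyone's actual methodology:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented quarterly growth series -- illustrative data, not real GDP.
y = 2.0 + 0.7 * np.sin(np.arange(80) / 4.0) + rng.normal(0.0, 0.3, 80)

def fit_ar(y, p):
    # Least-squares AR(p) fit; returns the one-step forecast and the BIC.
    Y = y[p:]
    lags = [y[p - k - 1: len(y) - k - 1] for k in range(p)]  # lags 1..p
    X = np.column_stack([np.ones(len(Y))] + lags)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    sigma2 = np.mean((Y - X @ beta) ** 2)
    bic = len(Y) * np.log(sigma2) + (p + 1) * np.log(len(Y))
    x_new = np.concatenate([[1.0], y[-p:][::-1]])  # [1, y_t, y_{t-1}, ...]
    return float(x_new @ beta), bic

# Weight each model's forecast by exp(-BIC/2), a rough stand-in for its
# posterior probability, then average.
forecasts, bics = zip(*(fit_ar(y, p) for p in (1, 2, 4)))
w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()
print("model weights:", np.round(w, 3))
print("combined one-step forecast:", round(float(np.array(forecasts) @ w), 3))
```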

Therefore, in response to Buchanan’s conclusion that “an emphasis on uncertainty could help economists regain the public’s trust” I can only say that we are working on it. But as Palmer pointed out, “probabilistic forecasts are only going to be useful for decision making if the forecast probabilities are reliable – that is to say, if forecast probability is well calibrated with observed frequency.” Unfortunately we will need a lot more data before we can determine whether changes to economic forecasting methodology have improved forecast accuracy, and so far the jury is still out. Weather systems at least obey physical laws; the macroeconomy does not. But the two share the characteristic that they are complex processes which can be highly sensitive to conditioning assumptions. Even if we cannot use exactly the same techniques, there is certainly something to learn from the methodological approach adopted in meteorology.
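
Palmer’s calibration criterion can at least be stated precisely: of all the occasions on which we said an event had, say, a 30% chance, it should have occurred roughly 30% of the time. A minimal sketch of such a reliability check, on invented forecasts and outcomes (the event and all the numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented data: 500 probabilistic forecasts of a binary event
# (e.g. "recession within a year") and the outcomes that followed.
# Constructed here to be perfectly calibrated, purely for illustration.
p_forecast = rng.uniform(0.0, 1.0, 500)
outcome = rng.uniform(0.0, 1.0, 500) < p_forecast

# Reliability check: within each probability bin, does the event occur
# about as often as the forecasts said it would?
bins = np.linspace(0.0, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (p_forecast >= lo) & (p_forecast < hi)
    print(f"forecast {lo:.1f}-{hi:.1f}: "
          f"mean stated prob {p_forecast[mask].mean():.2f}, "
          f"observed freq {outcome[mask].mean():.2f}")
```

Real forecast archives are short relative to the business cycle, which is precisely why, as noted above, the jury is still out.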

Economics suffers from the further disadvantage that much of its analysis cuts into the political sphere, and there are many high-profile politicians who use forecast failures to dismiss analysis that does not accord with their prior views. One such is the MP Steve Baker, a prominent Eurosceptic, who earlier this year said in parliament that economic forecasts are “always wrong.” It is worth once again quoting Palmer, who noted that if predictions turn out to be false, “then at best it means that there are aspects of our science we do not understand and at worst it provides ammunition to those that would see [economics] as empirical, inexact and unscientific. But we wouldn’t say to a high-energy physicist that her subject was not an exact science because the fundamental law describing the evolution of quantum fields, the Schrödinger equation, was probabilistic and therefore not exact.”

As Carveth Read, the philosopher and logician, noted, “It is better to be vaguely right than exactly wrong.” That is a pretty good goal towards which economic forecasting should strive.

[2] Bayesian Model Averaging is one of the favoured methods. See this paper by Mark Steel of Warwick University for an overview.
