Saturday 3 February 2018

In defence of economic forecasts

It is becoming rather tiresome to hear the constant carping about the value of economic forecasts, particularly when the critics are responding to forecasts that do not accord with their preconceived views. Ed Conway weighed in on the debate in The Times yesterday (here if you can get past the paywall), with his claim that “the job of a city economist is not to make accurate forecasts; they’re basically there to market their firms.” As a city economist who has spent a lot of time working with various models to generate forecasts to meet client demand, I can say with total confidence that Conway is dead wrong. It’s a bit like saying that we should ignore the views of most economic columnists, whose raison d’ĂȘtre is to offer clickbait for the masses.

But the most annoying comments of the week came from Eurosceptic MP Steve Baker, who proceeded to denigrate the Treasury’s analysis of the negative economic consequences of Brexit. He told the House of Commons: “I’m not able to name an accurate forecast, and I think they are always wrong.” The second most annoying comments, and probably the more serious set of allegations, came from Jacob Rees-Mogg, who accused Charles Grant, the director of the Centre for European Reform, of suggesting that Treasury officials “had deliberately developed a model to show that all options other than staying in the customs union were bad, and that officials intended to use this to influence policy” (here). Grant denied any such implication, and Baker, who had endorsed the claim in the Commons, was forced to apologise for lending support to what was an outrageous slur on the impartiality of the civil service.

What all this does illustrate, however, is that factual analysis is being drowned out by an agenda in which ideology trumps evidence. With regard to Baker’s claim that forecasts are “always wrong”, it is worth digging a little deeper. No economic forecaster will be 100% right 100% of the time – we are trying to predict the unknowable – but there are acceptable margins of error. HM Treasury surveys a large number of forecasters in its monthly comparison of economic projections, which is a pretty good place to gather some evidence. Our starting point is the one-year-ahead forecast for UK GDP growth, using the January estimate for the year in question (at which point we do not yet have the full numbers for the previous year).

I took the data over the past five years, for which 34 institutions have generated forecasts in each year. The average error over the full five-year period, using the current GDP vintage as a benchmark, is 0.63 percentage points. This is not fantastic, though if we strip out 2013, the figure falls to 0.51 pp. For the record – and probably more by luck than judgement – the errors in my own forecasts were 0.48 pp over the full five-year sample and 0.33 pp over 2014-17, so slightly better than the average. But there is a major caveat. GDP data tend to be heavily revised, owing to changes in methodology and the addition of data which were not available initially. Thus, the data vintage on which the forecasts were prepared turns out to be rather different to the latest version. Accordingly, if we measure the GDP projection against the initial growth estimate, the margins of error are smaller (0.5 pp over the period 2013-17 and 0.4 pp over 2014-17).
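For anyone who wants to run this sort of check themselves, the calculation is nothing more exotic than a mean absolute error. The sketch below (in Python, using made-up illustrative figures rather than the actual Treasury comparison data) scores the same set of one-year-ahead forecasts against both the latest GDP vintage and the initial estimate, which is all that lies behind the distinction drawn above.

```python
# Mean absolute error of one-year-ahead GDP growth forecasts, scored against
# two different outturn vintages. All figures are illustrative placeholders,
# not the HM Treasury comparison data.

forecasts = {2013: 1.0, 2014: 2.4, 2015: 2.5, 2016: 2.2, 2017: 1.4}         # January forecasts, % growth
initial_estimate = {2013: 1.9, 2014: 2.6, 2015: 2.2, 2016: 2.0, 2017: 1.8}  # first published outturn
latest_vintage = {2013: 2.1, 2014: 3.1, 2015: 2.3, 2016: 1.9, 2017: 1.7}    # outturn after revisions

def mean_abs_error(forecast, outturn):
    """Average absolute gap between forecast and outturn, in percentage points."""
    return sum(abs(forecast[y] - outturn[y]) for y in forecast) / len(forecast)

print("MAE vs latest vintage:   %.2f pp" % mean_abs_error(forecasts, latest_vintage))
print("MAE vs initial estimate: %.2f pp" % mean_abs_error(forecasts, initial_estimate))
```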

Without wishing to blow my own trumpet too loudly (but what the hell, no-one else will), my own GDP forecasts proved to be the most accurate over the last five years when measured against the initial growth estimate, with an average error of just 0.16 pp over the past four years. More seriously, perhaps, the major international bodies such as the IMF and the European Commission tend to score relatively poorly, lying in the bottom third of the rankings. These are the very institutions which tend to grab the headlines whenever they release new forecasts. A bit more discernment on the part of the financial journalist community might not go amiss when it comes to assessing forecasting records.
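The ranking exercise itself is equally straightforward in principle: score each institution’s forecasts against the chosen outturn series and sort by average absolute error. A minimal sketch along those lines, again with hypothetical institutions and numbers rather than the Treasury comparison itself:

```python
# Rank forecasters by mean absolute error against a chosen outturn series.
# Institutions and figures are hypothetical, for illustration only.

outturn = {2014: 2.6, 2015: 2.2, 2016: 2.0, 2017: 1.8}  # e.g. initial growth estimates

forecasts_by_institution = {
    "Institution A": {2014: 2.4, 2015: 2.3, 2016: 2.1, 2017: 1.6},
    "Institution B": {2014: 3.0, 2015: 2.8, 2016: 2.5, 2017: 1.0},
    "Institution C": {2014: 2.7, 2015: 2.4, 2016: 1.8, 2017: 1.5},
}

def mae(forecast, outturn):
    """Mean absolute error, in percentage points, over the outturn years."""
    return sum(abs(forecast[y] - outturn[y]) for y in outturn) / len(outturn)

# Sort institutions from smallest to largest average error.
ranking = sorted(forecasts_by_institution.items(), key=lambda kv: mae(kv[1], outturn))

for rank, (name, forecast) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {mae(forecast, outturn):.2f} pp")
```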

All forecasters know that they are taking a leap into the dark when making economic projections, and I have always subscribed to the view that the only thing we know with certainty is that any given economic forecast is likely to be wrong. But suppose we took the Baker view that forecasts are a waste of time because they are always wrong. The logical conclusion is that we simply should not bother. What, then, would be the basis for planning, whether for governments or companies looking to set budgets for the year ahead? There would, after all, be no consensus benchmark against which to make an assessment. Quite clearly, some such basis is needed, so if economic forecasts did not exist, it is almost certain that a market would spring up to provide them.

As for the Treasury forecasts regarding the impact of Brexit on the UK (here), it may indeed turn out that the economy grows more slowly than in the years preceding the referendum, in which case the view will be vindicated. There is, of course, a chance they will be wrong. But right now, we do not know for sure (although the UK did underperform in 2017). Accordingly, the likes of Baker and Rees-Mogg have no basis for suggesting that the forecasts are wrong, still less that the Treasury is fiddling the figures. The fact that they have been forced to come out swinging strikes me as a sign that they are worried the forecasts are likely to prove correct.
