Digging into the details, Smith’s methodology is based on an
assessment of five indicators – GDP growth, Q4 CPI inflation, current account,
unemployment rate and Bank Rate. Using the projections submitted by each
forecasting group to HM Treasury's compendium of economic forecasts the previous January, he applies a scoring system to rank how each group fared.
fared. The first caveat is that we do not yet have a full year of data for any
of the items except Bank Rate so the rankings may be subject to change once the
data for the remainder of the year are released. But I have always had a bigger
issue with the somewhat subjective way in which points are allocated (see
footnote of Table 1 for details). Moreover, there is a bias towards the growth
and inflation forecasts, each yielding a possible maximum of three points whereas
only one point is awarded to each of the unemployment rate, current account and
interest rate forecasts. And there is always a bonus question designed to
ensure that the theoretical maximum number of points sums to 10.
In my view, the trouble with this ranking is that it does
not sufficiently penalise those who get one of the forecast components badly
wrong – the worst that can happen is that they get zero points. Moreover, since
the competition is designed to look at all five components, my system imposes a
bigger handicap on those who do not provide a forecast for all of the
components (which may be a little harsh, as I discuss below). My ranking system
thus uses the same raw data and assumes the same outturn as Smith but measures
the results differently and (I hope) reduces the degree of arbitrariness in
allocating points. For growth, inflation and the unemployment rate, I measure
the absolute difference of each forecast from the outturn (in percentage
points). Assuming the same outturns as Smith, GDP growth in 2019 came in at
1.3%, the Q4 inflation rate at 1.5% and the unemployment rate at 3.8%. Thus a
GDP growth forecast of 1.5% is assigned an error value of 0.2 (=> ABS(1.3-1.5));
an inflation forecast of 2% results in a value of 0.5 (=> ABS(1.5-2)); and an
unemployment forecast of 4% produces a value of 0.2 (=> ABS(3.8-4)).
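For these three components, the metric can be sketched in a few lines of Python (the function name is mine; the figures are those quoted above):

```python
def abs_error(forecast, outturn):
    """Error for growth, inflation and unemployment forecasts: the
    absolute difference from the outturn, in percentage points
    (rounded to suppress floating-point noise)."""
    return round(abs(outturn - forecast), 2)

# Outturns assumed in the text: GDP growth 1.3%, Q4 CPI 1.5%, unemployment 3.8%
growth_err = abs_error(1.5, 1.3)        # GDP forecast of 1.5% -> 0.2 points
inflation_err = abs_error(2.0, 1.5)     # inflation forecast of 2% -> 0.5 points
unemployment_err = abs_error(4.0, 3.8)  # unemployment forecast of 4% -> 0.2 points
```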
For quantities such as Bank Rate and the current account, I
apply different criteria. Interest rates normally change in steps of 25 basis
points so the forecast error is measured as the error in the number of interest
rate moves (again the sign of the error is irrelevant). For example, if the
forecast in January was for Bank Rate to rise to 1% but it in fact remained at
0.75%, the error value is one (=> ABS(0.75-1)/0.25). With regard to the current
account deficit, measured in billions of pounds, I assign one error point for
each £10bn of absolute error (i.e. independent of sign). The outturn is assumed
to be minus £90bn, so a group whose January forecast looked for a deficit of
£85bn is assigned a value of 0.5 (=> ABS(-85+90)/10).
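Sketched in the same spirit (function names are mine; the 25 basis point step and the £10bn divisor are as described above):

```python
def rate_error(forecast, outturn, step=0.25):
    """Bank Rate error: how many 25bp moves the forecast was out by,
    regardless of direction."""
    return abs(outturn - forecast) / step

def current_account_error(forecast_bn, outturn_bn=-90.0):
    """Current account error: one point per £10bn of absolute error
    (outturn assumed to be -£90bn, as in the text)."""
    return abs(outturn_bn - forecast_bn) / 10.0

# Forecast of Bank Rate rising to 1% versus an unchanged 0.75%: one move wrong
rate_err = rate_error(1.0, 0.75)       # 1.0
# Forecast deficit of £85bn versus a £90bn outturn: half a point
ca_err = current_account_error(-85.0)  # 0.5
```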
Having summed all the error points, I then subtract the total from 10 to
derive a value in the range 0 to 10 (the figure can technically go negative,
in which case I floor it at zero). However, the astute
amongst you may already have spotted that the units involved in the current
account forecast are big, so that failure to provide a forecast will become a
problem. Indeed, those not providing a forecast are assumed to have input a
value of zero, giving them an error value of 9 points (=> ABS(0+90)/10). This
becomes a problem for the likes of HSBC, who finish second in Smith's
rankings but drop way down on the basis of the methodology outlined here, as
do Daiwa and Bank of America. This is unduly harsh, and we need a better way
to take account of zero forecast entries whilst still putting them at a
disadvantage compared to those groups who provided an input. One option is
simply to assign an error of two points for each missing forecast, on the basis
that a group which provides no input for any of the five categories
would score zero (10 - 5 x 2).
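Putting the pieces together, the scoring just described might look like this (a sketch under my own naming; `None` marks a missing forecast and the flat two-point charge is the option suggested above):

```python
PENALTY_PER_MISSING = 2.0  # assumed flat charge for each missing forecast

def score(errors):
    """Sum the five error values, charging the flat penalty where a
    forecast is missing (None), subtract from 10 and floor at zero."""
    total = sum(PENALTY_PER_MISSING if e is None else e for e in errors)
    return max(0.0, 10.0 - total)

# Five modest errors: score = 10 - (0.2 + 0.5 + 0.2 + 1.0 + 0.5) = 7.6
good_group = score([0.2, 0.5, 0.2, 1.0, 0.5])
# No forecasts at all: score = 10 - 5 x 2 = 0
empty_group = score([None] * 5)
```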
Having made this correction, we are in a position to look at
our revised rankings. Whereas in Smith’s original article, the Santander team
came out on top, my rankings give the accolade to Barclays Capital largely
because Santander’s current account forecast cost them two points whereas the
Barclays team lost only 0.75 points. Honourable mentions also go to Oxford
Economics and the EY ITEM team. Who lost out? One of the big losers is Schroders
Investment Management, who appear near the top of Smith's rankings but whose
prediction of four interest rate hikes in a year when there were no changes cost
them four points. This strikes me as fair: interest rate projections are a key
component of any macro forecast, so it is only right that teams are penalised
for bigger errors. HSBC slip from second to eighth because they did not provide
a forecast for the current account, which is unfortunate, but if the current
account is included in the assessment criteria we have to take account of it. Had the
current account been excluded, Santander and HSBC would have held onto their top
two places. The revised rankings also put the pro-Brexit Liverpool Macro
Research group at the bottom after a poor performance last year.
As for my own performance, I must confess that I did rather
better than in Smith’s original ranking, rising to fourth (last year, this
methodology would have put me second). I am not going to claim that there is no
element of self-justification in the rankings but I have always thought that
there was a better way of using the data to derive an ordinal ranking scale
over the interval 0 to 10. But perhaps a more important lesson to come out of the
analysis is that for all the criticisms of economic forecasting, those involved
in making projections are to be congratulated for putting themselves on the line
and being prepared to show their errors in public (equity and FX strategists take
note).
Moreover, despite criticism from the likes of Eurosceptic MP Steve
Baker, who once said in the House of Commons that “I’m not able to name an accurate forecast. They are always wrong”, we have done pretty well in the UK over the past 2-3 years. And whilst, like
football managers, forecasters are only as good as their last projection, I
will wager that growth in the UK next year will continue to underperform
relative to pre-referendum rates, whether or not Brexit is “done”.