One of the questions most frequently posed to me as an economist is “what will happen to …”, where the object in question is the currency, interest rates, house prices or any other variable you might like to nominate. My stock answer is that if I possessed such clairvoyant knowledge, I would use it to become rich. But I don’t, and I’m not. So how have we become the business world’s equivalent of Cassandra?
Modern economic forecasting originates in the pioneering work of Lawrence Klein and others in the late 1940s and 1950s, whose macro modelling and forecasting appeared to have successfully cracked the problem of how to predict swings in the economic cycle. As computing power improved, the
models became more complex, and the technocratic approach to planning which was
increasingly adopted from the 1960s onwards resulted in a huge increase in
demand amongst government and business for detailed analysis of future
prospects. The fact that such models suffered spectacular forecasting failures
during the 1970s and 1980s did not stop economists from pontificating on the future. Even
today, organisations like the OECD, IMF and European Commission devote considerable
resources to their forecasting units; numerous private sector forecasting outfits continue to make a comfortable living by selling projections to their clients; and every self-respecting financial institution provides its own economic forecast.
The pot is often further stirred by a media with a fascination for pinning down economists for their views on a diverse range of subjects, most of which we cannot possibly have had time to analyse properly, so we end up instead offering a sound bite which can often backfire spectacularly.
One of my own favourite quotes, which I use to demonstrate the limits of economic forecasting, comes from the great American economist Irving Fisher who, days before the crash of 1929, opined in the New York Times that “stock prices have reached what looks like a permanently high plateau.” He is, of
course, not alone. None of us has perfect foresight and every economist who has
ever made a forecast knows that reality can bite hard.
I was first awakened to the idea of probabilistic
forecasting some 25 years ago after reading Stephen Hawking’s classic book A Brief History of Time. Hawking
explained how “quantum mechanics does not predict a single definite result for
an observation. Instead it predicts a number of possible outcomes and tells us
how likely each of these is.” It sounded like an ideal way to present economic
forecasts, and the Bank of England was one of the first institutions to formalise such analysis in the form of a fan chart, which shows the probability that a variable will fall within a particular range.
The chart below shows the range of outcomes which the BoE assigned to its May 2016 inflation forecast, with the cone shape depicting the range of outcomes expected to occur with 90% probability. The darkest shaded area around the centre of the chart represents the range of outcomes to which the BoE assigns a 30% probability, and the lighter shaded areas represent wider ranges which together carry a higher likelihood of occurring. But the higher
the probability you attach to an inflation outcome, the less precise you can be
about what the inflation rate will be. In effect this is akin to Heisenberg’s uncertainty
principle which states that “the greater the degree of precision assigned to
the position of a particle, the less precisely its momentum can be known (and vice
versa).” To say that in three years’ time inflation has a 90% probability of lying between 0% and 5% may not be the kind of analysis for which people will pay good money, but it is a far more accurate representation of our ability to see into the future.
Chart: The BoE’s inflation forecast fan chart, May 2016 (Source: Bank of England)
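For readers who want to see the mechanics, the snippet below is a minimal sketch of how such probability bands can be computed. It assumes, purely for illustration, that the forecast at each horizon is summarised by a normal distribution whose standard deviation widens as the horizon lengthens; the central path and uncertainty numbers are invented, and the BoE’s own fan charts are in fact built from a skewed (two-piece normal) distribution rather than the symmetric one used here.

# Minimal sketch of fan-chart bands, with invented numbers.
# Assumes a symmetric normal forecast distribution at each horizon.
import numpy as np
from scipy.stats import norm

central_path = np.array([0.5, 0.9, 1.4, 1.8, 2.0, 2.1, 2.0, 2.0])  # hypothetical inflation path, % per year
sigma = 0.4 + 0.25 * np.arange(len(central_path))                  # uncertainty widens with the horizon

for coverage in (0.30, 0.60, 0.90):
    z = norm.ppf(0.5 + coverage / 2)      # half-width of the central interval, in standard deviations
    lower = central_path - z * sigma
    upper = central_path + z * sigma
    print(f"{coverage:.0%} band in the final quarter: {lower[-1]:.1f}% to {upper[-1]:.1f}%")

The widening of the bands with both the horizon and the chosen coverage level is precisely the trade-off described above: the more probability you want a band to contain, the less precise the statement about where inflation will actually be.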
The events of recent weeks should by now have awakened us
all to the limits of our forecasting prowess. Most rational analysts believed
that Brexit was the less likely outcome, but now that the referendum result is known, we are all trying to figure out what it means for the economy. Consensus forecasts
suggest that UK GDP growth in 2016 and 2017 will come in around 1.6% and 0.7% respectively,
versus 2.0% and 2.2% before the referendum. This accords with pre-referendum
analysis indicating that the economy would suffer an output loss of around 2% in the event of Brexit, relative to what would otherwise have occurred. But as we learned in the wake of the Lehman crisis, economic forecasts made in the immediate aftermath of big shocks can turn out to be highly inaccurate.
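As a rough cross-check of that roughly 2% figure, compounding the consensus growth rates quoted above shows how the downgrade translates into the level of GDP by the end of 2017. This is a back-of-the-envelope sketch, not anyone’s official estimate:

# Back-of-the-envelope check: how much lower is the level of GDP by end-2017
# under the post-referendum consensus than under the pre-referendum one?
pre_referendum  = (1 + 0.020) * (1 + 0.022)   # 2.0% growth in 2016, 2.2% in 2017
post_referendum = (1 + 0.016) * (1 + 0.007)   # 1.6% growth in 2016, 0.7% in 2017

shortfall = 1 - post_referendum / pre_referendum
print(f"GDP level shortfall by end-2017: {shortfall:.1%}")   # roughly 1.9%

That shortfall of just under 2% is consistent with the output loss suggested by the pre-referendum studies.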
In truth, we do not really know how the economy will fare
over the next two years. If we are honest, we do not know whether Brexit will
actually take place at all, particularly in the wake of Theresa May’s recent suggestion
that she “won’t be triggering Article 50 until I think that we have a UK
approach and objectives for negotiations.” Brexit is not a certainty: on the basis of current evidence, I continue to put the probability that it goes ahead at around 60%, and that view will change as more information becomes available.
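To make that point concrete, here is a hedged sketch, not a description of any formal model I use, of how a subjective 60% probability can be revised via Bayes’ rule as new information arrives. The signal and its likelihoods are entirely hypothetical.

# Hypothetical illustration of updating a subjective probability with Bayes' rule.
def bayes_update(prior, p_signal_if_true, p_signal_if_false):
    """Posterior probability of the event after observing the signal."""
    numerator = p_signal_if_true * prior
    return numerator / (numerator + p_signal_if_false * (1 - prior))

prior = 0.60   # current view that Brexit goes ahead
# Suppose a signal (say, a firm commitment to trigger Article 50) is judged three
# times as likely to be observed if Brexit is going to happen as if it is not.
posterior = bayes_update(prior, p_signal_if_true=0.9, p_signal_if_false=0.3)
print(f"Updated probability: {posterior:.0%}")   # about 82%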
Those who ask economists to make black-and-white predictions regarding complex events such as these should be met with the response “I don’t know”, or at least with the rider “… with any degree of certainty” attached to whatever answer we give. That, of course, is not what we get paid for. So either we adhere to the Mark Twain view that it is better to stay quiet and be thought a fool than to open our mouths and prove it beyond doubt, or we adopt the Heisenberg doctrine that “an expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.” Perhaps we can avoid some of them by reminding our interlocutors that, if our answers are wrong, it may have as much to do with the nature of their question as with our answer.