
Monday 11 May 2020

The limits of modelling


The British government has made it clear throughout the Covid-19 crisis that it has been “following the science.” But at this relatively early stage of our understanding of the disease there is no single body of knowledge to draw on. There is a lot that epidemiologists agree on, but there are also areas where they do not. Moreover, the science upon which the UK lockdown is based derives from a paper published almost two months ago, when our understanding of Covid-19 was rather different from what we know now. I was thus fascinated by this BBC report by medical editor Deborah Cohen, who posed questions about the current strategy and interviewed experts in the field who expressed some reservations about how the facts are reported. Whilst the report gave an interesting insight into epidemiology, it also reminded me of the criticism directed at economic forecasting.

One of the most interesting issues to arise out of the discussion was the use of models to track the progression of disease. The epidemiologists quoted were unanimous in their view that models were only useful if backed up by data. As Dame Deirdre Hine, the author of a report on the 2009 H1N1 pandemic pointed out, models are not always useful in the early stages of a pandemic given the lack of data upon which they are based. She further noted that “politicians and the public are often dazzled by the possibilities that modelling affords” and that models often “overstate the possibilities of deaths in the early stages” of a pandemic due to a lack of data. As Hine pointed out, epidemiological models only start to become useful once we implement a thorough programme of tracing and tracking people’s contacts, for only then can we start to get a decent handle on the spread of any disease.
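The sensitivity of early-stage models to poorly measured inputs can be illustrated with the simplest textbook epidemic model. The sketch below is purely illustrative (it is not the Imperial College model, and all parameter values are invented): a basic SIR simulation in which two values of the reproduction number R0, both plausible early in an outbreak, imply very different epidemic sizes.

```python
# Minimal SIR model sketch (illustrative only; not the model used by
# government advisers). It shows how sensitive projections are to a
# parameter, here R0, that is poorly pinned down when data are scarce.

def sir_epidemic_size(r0, gamma=0.1, days=600, dt=1.0):
    """Integrate a basic SIR model with a simple Euler scheme and
    return the final attack rate (share of the population ever infected)."""
    beta = r0 * gamma            # transmission rate implied by R0
    s, i, r = 0.999, 0.001, 0.0  # susceptible, infected, recovered shares
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r

# Two R0 estimates that would both have been defensible early in a
# pandemic give very different projected epidemic sizes.
for r0 in (1.5, 2.5):
    print(f"R0={r0}: final attack rate = {sir_epidemic_size(r0):.0%}")
```

With so little data to discriminate between the two R0 values early on, the projected share of the population ever infected differs by tens of percentage points, which is precisely why the epidemiologists insist that models only become useful once tracing data pin the parameters down.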

This approach has great parallels with empirical macroeconomics where many of the mainstream models used for analytical purposes are not necessarily congruent with the data. Former member of the Bank of England Monetary Policy Committee Danny Blanchflower gave a speech on precisely this topic back in 2007 with the striking title The Economics of Walking About. The objective of Blanchflower’s speech was to encourage policymakers to look at what is going on around them rather than uncritically accept the outcomes derived from a predetermined set of ideas, and to put “the data before the theory where this seems warranted.”

I have always thought this to be very sensible advice, particularly where DSGE models are used for forecasting purposes. These models are theoretical constructs based on a particular economic structure which rest on a number of assumptions whose existence in the real world is open to question (Calvo pricing and rational expectations, to name but two). Just as in epidemiology, models which are not consistent with the data do not have a good forecasting record. In fact, economic models do not have a great track record, full stop. But we are still forced to rely on them because the alternative is either not to provide a forecast at all, or simply to make a guess. As the statistician George Box famously said, “all models are wrong, but some are useful.”

Epidemiologists make the point that models can be a blunt instrument which give a false sense of security. The researchers at Imperial College whose paper formed the basis of the government’s strategy might well come up with different estimates if, instead of basing their analysis on data derived from China and Italy, they updated their results on the basis of latest UK data. They may indeed have already done so (though I have not seen it) but this does not change the fact that the government appears to have accepted the original paper at face value. Of course, we cannot blame the researchers for the way in which the government interpreted the results. But having experienced the uncritical media acceptance of economic forecasts produced by the likes of the IMF, it is important to be aware of the limitations of model-driven results.

Another related issue pointed out by the epidemiologists is the way in which the results are communicated. For example, the government’s strategy is based on the modelled worst-case outcomes for Covid-19, but this has been criticised as misleading because it implies an event which is unlikely rather than one which is close to the centre of the distribution. The implication is that the government based its strategy on a worst-case outcome rather than on a more likely one, with the result that the damage to the economy is far greater than it needed to be. That is a highly contentious suggestion and not one I would necessarily buy into. After all, a government has a duty of care to all its citizens, and if the lives of more vulnerable members of society are saved by imposing a lockdown then it may be a price worth paying.

But it nonetheless raises a question of the way in which potential outcomes are reported. I have made the point (here) in an economics context that whilst we need to focus on the most likely outcomes (e.g. for GDP growth projections), there are a wide range of possibilities around the central case which we also need to account for. Institutions that prepare forecast fan charts recognise that there are alternatives around the central case to which we can ascribe a lower probability. Whilst the likes of the Bank of England have in the past expressed frustration that too much emphasis is placed on the central case, they would be far more concerned if the worst case outcomes grabbed all the attention. The role of the media in reporting economic or financial outcomes does not always help. How often do we see headlines reporting that markets could fall 20% (to pick an arbitrary figure) without any discussion of the conditions necessary to produce such an outcome? The lesson is that we need to be aware of the whole range of outcomes but apply the appropriate weighting structure when reporting possible outcomes.

None of this is to criticise the efforts of epidemiologists to model the spread of Covid-19. Nor is it necessarily to criticise the government’s interpretation of their work. But it does highlight the difficulties inherent in forecasting outcomes from models built on incomplete information. As Niels Bohr reputedly once said, “forecasting is hard, especially when it’s about the future.” He might have added, “but it’s impossible without accurate inputs.”

Monday 13 November 2017

It's very quiet out there

As UK political uncertainty mounts, it is striking that sterling-denominated assets have held up reasonably well of late. Sterling has traded in a relatively narrow range over the past year, with the trade-weighted index registering a high of 79 in May and a low of 74 in August. Surprisingly, net speculative positions in sterling, which were heavily negative early this year, have now turned flat to slightly positive. This suggests that FX investors are not currently expecting a significant sterling collapse, although the timing of the move does appear to be correlated with changes in the market’s position on BoE rate hikes. Meanwhile, although the FTSE100 has trailed indices such as the Euro Stoxx 50 year-to-date, the two have moved broadly in line since May and the FTSE has managed a year-to-date return of 3.8% – not great when set against other markets but nonetheless positive. Moreover, the weakness of sterling tends to be a positive factor for UK equities given how much revenue is booked in foreign currencies (around 70%).

Thus, political uncertainty appears to be conspicuous by its absence so far as markets are concerned, which reflects the fact that investors are looking through all the rhetoric and concluding that the likelihood of a cliff-edge Brexit is low. Since we are still more than 16 months away from the expiry of the Article 50 negotiation phase, markets take the view that there is no sense in panicking now – there will be plenty of time for that later. Nonetheless, the closer we get to the deadline without agreement, the greater the likelihood that assets will come under pressure, but that is probably a story for next year.

To get a sense of how markets and economic agents assess uncertainty in the UK at present, I constructed an uncertainty index based upon eight variables: (i) FTSE100 equity volatility; (ii) EUR/GBP FX volatility; (iii) GBP/USD FX volatility; (iv) the Baker, Bloom and Davis policy uncertainty indicator; (v) GfK survey data for expected consumer finances; (vi) expected unemployment; and (vii) the expected economic situation. The final component is (viii) the CBI’s estimate of uncertainty as a factor limiting capex. If we strip out the equity and currency volatility measures, we are left with a five-variable index of domestic uncertainty.
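The post does not spell out how the components are combined, but a common construction, assumed in the sketch below, converts each series to a z-score against its own history and averages them period by period. The data are invented for illustration:

```python
# Hedged sketch of one way to build a composite uncertainty index:
# standardise each component against its own history, then average the
# z-scores. All numbers below are made up; only three of the eight
# components are shown.

from statistics import mean, pstdev

def zscores(series):
    """Standardise a series against its own mean and standard deviation."""
    mu, sigma = mean(series), pstdev(series)
    return [(x - mu) / sigma for x in series]

def composite_index(components):
    """Average the z-scores of each component, period by period."""
    standardised = [zscores(c) for c in components]
    return [mean(vals) for vals in zip(*standardised)]

# Toy monthly readings for three illustrative components.
ftse_vol   = [12, 14, 25, 18, 15]      # equity volatility
gbp_vol    = [7, 8, 15, 11, 9]         # FX volatility
policy_unc = [90, 110, 250, 180, 140]  # policy uncertainty index

index = composite_index([ftse_vol, gbp_vol, policy_unc])
print([round(v, 2) for v in index])  # peaks in the third period
```

A virtue of the z-score approach is that components on very different scales, such as survey balances and volatility readings, contribute equally to the aggregate.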

The chart suggests that the aggregate uncertainty index has dipped back close to its long-term average (2000-2015). Whilst the domestic indicator has not fallen quite as sharply, it is well below its summer 2016 highs with only the Baker et al policy uncertainty index showing any extended deviation. The interesting thing is that this policy uncertainty index is based on an online trawl of newspaper websites looking for various keywords which express uncertainty. To the extent that much of the concern expressed about Brexit has indeed come via the media (not to mention the blogosphere, so I am as guilty as anyone), it highlights the noise inherent in the debate without necessarily shedding much light on how the economy is performing. Indeed, many of the other indicators normalised very quickly, which suggests that most economic agents generally got on with life in the wake of the Brexit vote.

This does not mean to say that everything will remain so quiet. The GfK survey data point to a deterioration in expectations for the future economic situation with sentiment now back at levels last seen in spring 2013. Moreover, with inflation beginning to put the squeeze on consumers, we are starting to see some deterioration in expectations for consumer finances.

It is worth noting that the indicator is not a good predictor of longer-term trends. Even in the early months of 2008, when there were signs that trouble was brewing in the banking sector and the economy was losing some momentum, both the aggregate and domestic uncertainty indices remained at low levels. A lurch towards the cliff-edge of Brexit could change perceptions quite markedly. Perhaps UK consumers and corporates need to hurt even more before they realise the potential economic consequences of Brexit. This is why just looking at the current relative stability of the uncertainty index is not necessarily a good guide to future trends. In my view – and that of most of the economics profession – a number of senior British politicians do not seem to understand the risk they are taking with the wider economy. It is incumbent upon them to get it right or the electorate may be in a less forgiving mood than it has been of late.

Sunday 17 July 2016

When "I don't know" is the right answer


One of the questions most frequently posed of me as an economist is “what will happen to …” where the object in question is the currency, interest rates, house prices or any other variable you might like to nominate. My stock answer to this question is that if I possessed such clairvoyant knowledge, I would use it to become rich. But I don’t and I’m not. So how have we become the business world’s equivalent of Cassandra?

Modern economic forecasting originates from the pioneering work of Lawrence Klein and others in the late 1940s and 1950s, whose work in macro modelling and forecasting appeared to have cracked the problem of how to predict swings in the economic cycle. As computing power improved, the models became more complex, and the technocratic approach to planning which was increasingly adopted from the 1960s onwards resulted in a huge increase in demand amongst government and business for detailed analysis of future prospects. The fact that such models suffered spectacular forecasting failures during the 1970s and 1980s did not stop economists from pontificating on the future. Even today, organisations like the OECD, IMF and European Commission devote considerable resources to their forecasting units; numerous private sector forecast outfits continue to make a comfortable living by selling projections to their clients and every self-respecting financial institution provides an economic forecast.

The pot is often further stirred by a media which has a fascination for pinning down economists on a diverse range of subjects, most of which we cannot possibly have had time to analyse properly, and we end up instead with a sound bite which can often backfire spectacularly. One of my favourite quotes, which I use to demonstrate the limits of economic forecasting, comes from the great American economist Irving Fisher who, days before the crash of 1929, opined in the New York Times that “stock prices have reached what looks like a permanently high plateau.” He is, of course, not alone. None of us has perfect foresight, and every economist who has ever made a forecast knows that reality can bite hard.

I was first awakened to the idea of probabilistic forecasting some 25 years ago after reading Stephen Hawking’s classic book A Brief History of Time. Hawking explained how “quantum mechanics does not predict a single definite result for an observation. Instead it predicts a number of possible outcomes and tells us how likely each of these is.” It sounded like an ideal way to present economic forecasts, and the Bank of England was one of the first institutions to formalise such analysis in the form of a fan chart which assigns probabilities to the likelihood that variables will fall within a particular range.

The chart below shows the range of outcomes which the BoE assigned to its May 2016 inflation forecast, with the cone-shape depicting the range of outcomes that would be expected to occur with 90% probability. The darkest shaded area around the centre of the chart represents the outcome which the BoE would expect with a 30% probability and the lighter shaded areas represent a wider range of outcomes which encompass an even higher likelihood of occurring. But the higher the probability you attach to an inflation outcome, the less precise you can be about what the inflation rate will be. In effect this is akin to Heisenberg’s uncertainty principle which states that “the greater the degree of precision assigned to the position of a particle, the less precisely its momentum can be known (and vice versa).” To say that in three years’ time, you would assign a 90% probability to the likelihood that inflation will be between 0% and 5% may not be the kind of analysis which people will pay good money for, but it is a far more accurate representation of our ability to see into the future.

The BoE’s inflation forecast fan chart, May 2016

Source: Bank of England
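The mechanics behind a fan chart of this kind can be sketched by simulating many forecast paths and reading off empirical quantile bands at each horizon. The random-walk process and its parameters below are invented purely for illustration; the BoE's actual bands come from its own forecast distribution, not a simulation like this:

```python
# Illustrative sketch of the idea behind a fan chart: simulate many
# inflation paths and report a narrow central band and a wide band at
# a given horizon. Process and parameters are invented.

import random
random.seed(42)

def simulate_paths(start=0.5, drift=0.05, vol=0.3, horizon=12, n=5000):
    """Simulate n inflation paths over `horizon` quarters as a
    random walk with drift."""
    paths = []
    for _ in range(n):
        level, path = start, []
        for _ in range(horizon):
            level += drift + random.gauss(0, vol)
            path.append(level)
        paths.append(path)
    return paths

def band(paths, quarter, lo, hi):
    """Empirical quantile band for one forecast horizon."""
    values = sorted(p[quarter] for p in paths)
    return values[int(lo * len(values))], values[int(hi * len(values))]

paths = simulate_paths()
narrow = band(paths, 11, 0.35, 0.65)  # ~30% central band
wide = band(paths, 11, 0.05, 0.95)    # ~90% band
print(f"30% band: {narrow[0]:.1f} to {narrow[1]:.1f}%")
print(f"90% band: {wide[0]:.1f} to {wide[1]:.1f}%")
```

The trade-off described above falls straight out of the arithmetic: the 90% band is several times wider than the 30% band, so the more confident you wish to be, the less precise the statement you can make.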

The events of recent weeks should by now have awakened us all to the limits of our forecasting prowess. Most rational analysts believed that Brexit was the least likely outcome, but now that the referendum result is known we are all trying to figure out what it means for the economy. Consensus forecasts suggest that UK GDP growth in 2016 and 2017 will come in around 1.6% and 0.7% respectively, versus 2.0% and 2.2% before the referendum. This accords with pre-referendum analysis indicating that the economy would suffer an output loss of around 2% in the event of Brexit, relative to what would otherwise have occurred. But as we learned in the wake of the Lehman crisis, economic forecasts made right after big shocks can turn out to be highly inaccurate.

In truth, we do not really know how the economy will fare over the next two years. If we are honest, we do not know whether Brexit will actually take place at all, particularly in the wake of Theresa May’s recent suggestion that she “won’t be triggering Article 50 until I think that we have a UK approach and objectives for negotiations.” It is not a 100% probability event: on the basis of current evidence, I continue to put it at 60%, a view that will change as more information becomes available. Those who ask economists to make black-and-white predictions about complex events such as these should be met with the response “I don’t know,” or at least the rider “… with any degree of certainty.” That, of course, is not what we get paid for. So either we adhere to the Mark Twain view that it is better to stay quiet and be thought a fool than to open our mouths and prove it beyond doubt, or we adopt the Heisenberg doctrine that “an expert is someone who knows some of the worst mistakes that can be made in his subject, and how to avoid them.” Perhaps we should also remind our interlocutors that when our answers are wrong, it may have as much to do with the nature of their question as with our ability to answer it.
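The way a probability judgement of this kind evolves as information arrives can be made precise with Bayes' rule. The prior matches the 60% figure above, but the evidence and likelihood values below are invented for illustration:

```python
# Hedged sketch of updating a probability judgement with Bayes' rule.
# The 60% prior is from the text; the likelihoods are made up.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of an event after observing one piece of
    evidence, via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Start from a 60% prior that Brexit goes ahead, then observe a
# (hypothetical) political signal judged twice as likely if it does.
posterior = bayes_update(0.60, 0.8, 0.4)
print(f"Updated probability: {posterior:.0%}")  # 75%
```

Each new signal shifts the probability a little rather than flipping it to 0% or 100%, which is exactly why "I don't know, with any degree of certainty" is often the honest answer.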