
Monday 11 May 2020

The limits of modelling


The British government has made it clear throughout the Covid-19 crisis that it has been “following the science.” But at this relatively early stage of our understanding of the disease there is no single body of knowledge to draw on. There is a lot that epidemiologists agree on, but there are also areas where they do not. Moreover, the science upon which the UK lockdown is based derives from a paper published almost two months ago, when our understanding of Covid-19 was rather different from what it is now. I was thus fascinated by this BBC report by medical editor Deborah Cohen, who posed questions about the current strategy and interviewed experts in the field who expressed some reservations about how the facts are reported. Whilst the report gave an interesting insight into epidemiology, it also reminded me of the criticism directed at economic forecasting.

One of the most interesting issues to arise out of the discussion was the use of models to track the progression of the disease. The epidemiologists quoted were unanimous in their view that models are only useful if backed up by data. As Dame Deirdre Hine, the author of a report on the 2009 H1N1 pandemic, pointed out, models are of limited use in the early stages of a pandemic given the lack of data upon which to base them. She further noted that “politicians and the public are often dazzled by the possibilities that modelling affords” and that models often “overstate the possibilities of deaths in the early stages” of a pandemic. As Hine pointed out, epidemiological models only start to become useful once we implement a thorough programme of tracing and tracking people’s contacts, for only then can we start to get a decent handle on the spread of any disease.

This approach has great parallels with empirical macroeconomics where many of the mainstream models used for analytical purposes are not necessarily congruent with the data. Former member of the Bank of England Monetary Policy Committee Danny Blanchflower gave a speech on precisely this topic back in 2007 with the striking title The Economics of Walking About. The objective of Blanchflower’s speech was to encourage policymakers to look at what is going on around them rather than uncritically accept the outcomes derived from a predetermined set of ideas, and to put “the data before the theory where this seems warranted.”

I have always thought this to be very sensible advice, particularly where DSGE models are used for forecasting purposes. These models are theoretical constructs based on a particular economic structure which rely on a number of assumptions whose existence in the real world is subject to question (Calvo pricing and rational expectations, to name but two). Just as in epidemiology, models which are not consistent with the data do not have a good forecasting record. In fact, economic models do not have a great track record, full stop. But we are still forced to rely on them because the alternative is either not to provide a forecast at all or simply to guess. As the statistician George Box once famously said, “all models are wrong, but some are useful.”

Epidemiologists make the point that models can be a blunt instrument which gives a false sense of security. The researchers at Imperial College whose paper formed the basis of the government’s strategy might well come up with different estimates if, instead of basing their analysis on data derived from China and Italy, they updated their results on the basis of the latest UK data. They may indeed have already done so (though I have not seen it), but this does not change the fact that the government appears to have accepted the original paper at face value. Of course, we cannot blame the researchers for the way in which the government interpreted the results. But having experienced the uncritical media acceptance of economic forecasts produced by the likes of the IMF, it is important to be aware of the limitations of model-driven results.

Another related issue pointed out by the epidemiologists is the way in which the results are communicated. For example, the government’s strategy is based on the modelled worst case outcomes for Covid-19, but this has been criticised as misleading because it implies an event which is unlikely rather than one which is close to the centre of the distribution. The implication is that the government based its strategy on a worst case outcome rather than on a more likely one, with the result that the damage to the economy is far greater than it needed to be. That is a highly contentious suggestion and not one I would necessarily buy into. After all, a government has a duty of care to all its citizens, and if the lives of more vulnerable members of society are saved by imposing a lockdown then it may be a price worth paying.

But it nonetheless raises a question about the way in which potential outcomes are reported. I have made the point (here) in an economics context that whilst we need to focus on the most likely outcomes (e.g. for GDP growth projections), there is a wide range of possibilities around the central case which we also need to account for. Institutions that prepare forecast fan charts recognise that there are alternatives around the central case to which we can ascribe a lower probability. Whilst the likes of the Bank of England have in the past expressed frustration that too much emphasis is placed on the central case, they would be far more concerned if the worst case outcomes grabbed all the attention. The role of the media in reporting economic or financial outcomes does not always help. How often do we see headlines reporting that markets could fall 20% (to pick an arbitrary figure) without any discussion of the conditions necessary to produce such an outcome? The lesson is that we need to be aware of the whole range of outcomes but apply the appropriate weights when reporting them.
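
To make the point concrete, here is a minimal sketch of the logic behind a fan chart, using purely illustrative numbers rather than any official projection: simulate a skewed distribution of growth outcomes and report the central case alongside progressively less likely bands.

```python
import numpy as np

# Minimal sketch with illustrative numbers: why a worst-case outcome should
# not be reported as if it were the central projection. We simulate a
# downside-skewed distribution of GDP growth outcomes and compare the
# median with the tails.
rng = np.random.default_rng(42)

# Assume a central growth projection of about 1.5% with downside skew
# (a gamma-shaped left tail; none of this is taken from a real forecast).
outcomes = 1.5 + 2.0 - rng.gamma(shape=2.0, scale=1.0, size=100_000)

# Fan-chart style bands: the central case plus less likely outer bands.
for p in (5, 25, 50, 75, 95):
    print(f"{p:>2}th percentile: {np.percentile(outcomes, p):6.2f}%")

# A headline built on the 5th percentile ("growth could fall to X%") is a
# 1-in-20 outcome; reporting it without that weighting misleads readers.
```

The point of the exercise is simply that the outer bands carry explicit probabilities, which is precisely the weighting structure that a headline number discards.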

None of this is to criticise the efforts of epidemiologists to model the spread of Covid-19, nor necessarily to criticise the government’s interpretation of their work. But it does highlight the difficulties inherent in forecasting outcomes based on models using incomplete information. As Niels Bohr reputedly once said, “forecasting is hard, especially when it’s about the future.” He might have added, “but it’s impossible without accurate inputs.”

Thursday 16 April 2020

Whistling in the dark or shining a light?

The global picture

As the official bodies begin to put out their growth forecasts for 2020 and 2021, the magnitude of the hit facing the global economy following the Covid-19 shutdown is becoming increasingly clear. The IMF’s latest projections suggest that global GDP will contract this year by 3%, rebounding by 5.8% in 2021. We have not seen anything like it since the Great Depression 90 years ago, when world activity is estimated to have fallen by 10% between 1930 and 1932, with three successive annual declines of 3% or more. For the record, it took six years for output to regain its pre-crash highs. The IMF is suggesting that next year we will be able to put all of this behind us and push output back above pre-crash levels. I remain highly sceptical.
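
For concreteness, the arithmetic behind that claim (a quick check rather than anything profound):

```python
# IMF baseline: -3.0% in 2020 followed by +5.8% in 2021 leaves global
# output about 2.6% above its pre-crisis level by end-2021.
level_2019 = 100.0
level_2021 = level_2019 * (1 - 0.030) * (1 + 0.058)
print(round(level_2021, 1))  # 102.6
```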

The good news, as the IMF points out, is that we do not currently have the degree of protectionism and beggar-thy-neighbour policies of the early-1930s which made the downturn so much worse than it needed to be. But economic nationalism is clearly back in fashion, and Donald Trump’s decision to halt US funding to the World Health Organisation during the greatest public health threat in a century is indicative of the febrile sentiment currently at play (not to mention the fact that it is probably one of the dumbest of petty acts and says a lot about Trump’s way of doing business, but in the interests of politeness to my American friends I will leave it there). Interestingly, the IMF’s forecast makes it clear that whilst output in emerging markets will rebound quickly, the advanced economies will not recoup their output losses in 2021. Indeed, EM economies take a relatively small hit, with output projected to fall by only 1% this year and to surge by 6.6% next year. My concern with this is that many EMs are export-driven economies, and if the developed world is growing relatively slowly, the demand for EM exports may not recover sufficiently quickly to drive the expected global growth surge.

The big imponderable is how deep the scars left by the current shutdown will be. The cause of the economic collapse is simply that much economic activity has been prohibited as lockdowns came into force, with many people required to remain at home. Such impacts will ripple throughout the economy in as-yet unpredictable ways, and whilst fiscal and monetary policies have been turned up to the max, they can only mitigate and not totally offset the economic damage. For example, even though interest rates are at rock bottom levels everywhere, this is no guarantee that people will want to borrow when the worst of the crisis is past. Nor will lenders necessarily be willing to grant credit to those individuals and businesses struggling to stay afloat if they are perceived to be a bad credit risk. This puts banks in a difficult position. Whilst they were perceived as the bad guys a decade ago, they want to be seen to be making a positive contribution today. But they also have a duty to their shareholders, whose returns have taken a beating and who will not thank them for any big rise in loan-loss provisions.

So far, all of this has been predicated on the assumption that the Covid-19 crisis can be compressed into the second quarter of 2020. This is far from a certainty. Much will depend on what form of exit strategy governments adopt: how long will it take to reopen the economy if the process is staggered over several stages, even if the threat passes relatively quickly? Then there is the question of whether the viral threat will indeed pass so quickly. Scientific evidence suggests that social distancing measures may have to remain in place until 2022 and vigilance maintained until 2024, neither of which is conducive to a sudden pickup in activity. For the record, the IMF did conduct alternative scenarios. In one of the worst case outcomes, the assumption of a longer Covid-19 outbreak in 2020, together with a renewed outbreak in 2021, results in a level of GDP next year which is 8% below the baseline discussed above. This would imply an output loss of more than 5% over two years, which starts to look more like a 1930s outcome.
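
Extending the arithmetic above to the downside scenario confirms the scale of the risk (again, a quick check using the figures quoted in the text):

```python
# IMF downside scenario: a 2021 GDP level 8% below the baseline path.
baseline_2021 = 100.0 * (1 - 0.030) * (1 + 0.058)  # ~102.6
downside_2021 = baseline_2021 * (1 - 0.08)
print(round(downside_2021, 1))  # 94.4, i.e. ~5.6% below the 2019 level
```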

The local picture

Closer to home, the UK Office for Budget Responsibility came out with an illustrative scenario earlier this week which suggested UK GDP could collapse by 13% in 2020, with a 35% contraction in Q2 alone, followed by a rebound of 18% in 2021. To put that into context, this would be the largest annual contraction in GDP since 1709, when the Great Frost wiped out agricultural output. The projected rebound in 2021 would also be the largest since 1704 (apparently). Even allowing for the fact that the historical data are subject to a huge degree of uncertainty, the OBR figures suggest the most volatile swings in output for over 300 years. Like the IMF (whose predictions for UK growth in 2020 and 2021 are a more modest -6.5% and +4.0% respectively), the OBR figures effectively assume that there will be no economic scarring, although I doubt very much that if the OBR’s awful 2020 forecast is realised there will be much of a rebound next year.
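
Running the OBR and IMF numbers through the same simple arithmetic illustrates both the absence of scarring in the OBR scenario and just how far apart the two institutions are:

```python
# OBR scenario (-13%, +18%) versus IMF forecast (-6.5%, +4.0%) for the UK,
# both indexed to 2019 = 100.
obr_2021 = 100.0 * (1 - 0.130) * (1 + 0.180)
imf_2021 = 100.0 * (1 - 0.065) * (1 + 0.040)
print(round(obr_2021, 1), round(imf_2021, 1))  # 102.7 vs 97.2
```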

Predictably, the IMF and OBR projections were met with the usual scepticism from those who have nothing better to do than criticise the forecasting efforts of others. I am not going to jump on that bandwagon. After all, these forecasts are produced because there is a need for some basis for planning. What would the sceptics rather we do? Produce nothing and trust to luck by making it up as we go along? Just imagine the howls of rage if governments were not prepared for the worst case outcomes. But it does raise a question as to how such analysis should be treated at a time when predicting the future is little more than guesswork. The OBR made it clear that its analysis was a scenario, not a forecast, yet the media treated it as if it were a forecast. You may ask what the difference is. The answer is that a scenario is a conditional assessment based on a “what-if” approach, whereas a forecast is typically viewed as an unconditional statement of what will happen.

Obviously this is a fine distinction, but it is important. The OBR is not suggesting in its analysis that it believes the outcome will necessarily be realised; it is an attempt to highlight the economic risks. Arguably there are better ways to do it. It could, for example, have prepared a range of outcomes along the same lines as the IMF rather than discussing a single illustrative case, which runs the risk of being treated as an unconditional forecast. As former BoE insider Tony Yates pointed out on Twitter, the criticism levelled at the OBR is “the kind of thing that makes policy bodies nervous about being as transparent as they should be to help us hold them to account. The BoE was paralysed by this nervousness, and made themselves hard to scrutinise.”

The one thing we know is that all forecasts produced in the current uncertain environment will be wrong in some way. They should be viewed as an attempt to shine some light in the dark, however feeble. In truth, the ordinary voter does not care about GDP growth, but once you explain that it is a proxy for the path of employment and incomes, you are talking about something meaningful to them. As a final thought, when the IMF and OBR are so far apart in their views on the UK, it is an indication that the light cast by the forecasts is dim indeed.

Monday 30 December 2019

Forecasting the UK economy. How did we do in 2019?

David Smith is one of the UK’s top economic journalists and his pieces for The Times and Sunday Times are always worth a read. Since the 1990s Smith has devoted his final column of the year to an assessment of how those who provide forecasts for the UK economy have fared over the past year, and 2019 was no exception (here for the full article if you can get past the paywall; otherwise here). The good news is that most forecasters did pretty well, though if anything they were too optimistic on GDP growth and the extent to which the BoE would hike rates.

Digging into the details, Smith’s methodology is based on an assessment of five indicators – GDP growth, Q4 CPI inflation, the current account, the unemployment rate and Bank Rate. Using the projection provided by each forecasting group to HM Treasury’s compendium of economic forecasts in the previous January, he applies a scoring system to rank how each group fared. The first caveat is that we do not yet have a full year of data for any of the items except Bank Rate, so the rankings may be subject to change once the data for the remainder of the year are released. But I have always had a bigger issue with the somewhat subjective way in which points are allocated. Moreover, there is a bias towards the growth and inflation forecasts, each yielding a possible maximum of three points, whereas only one point is awarded to each of the unemployment rate, current account and interest rate forecasts. And there is always a bonus question designed to ensure that the theoretical maximum number of points sums to 10.

In my view, the trouble with this ranking is that it does not sufficiently penalise those who get one of the forecast components badly wrong – the worst that can happen is that they score zero points. Moreover, since the competition is designed to look at all five components, my system imposes a bigger handicap on those who do not provide a forecast for all of them (which may be a little harsh, as I discuss below). My ranking system thus uses the same raw data and assumes the same outturns as Smith but measures the results differently and (I hope) reduces the degree of arbitrariness in allocating points. For growth, inflation and the unemployment rate, I measure the absolute difference of each forecast from the outturn (in percentage points). Assuming the same outturns as Smith, GDP growth in 2019 came in at 1.3%, the Q4 inflation rate at 1.5% and the unemployment rate at 3.8%. Thus a GDP growth forecast of 1.5% is assigned an error value of 0.2 (=> ABS(1.3-1.5)); an inflation forecast of 2% results in a value of 0.5 (=> ABS(1.5-2)); and an unemployment forecast of 4% produces a value of 0.2 (=> ABS(3.8-4)).

For quantities such as Bank Rate and the current account, I apply different criteria. Interest rates normally change in steps of 25 basis points, so the forecast error is measured as the error in the number of interest rate moves (again, the sign of the error is irrelevant). For example, if the forecast in January was for Bank Rate to rise to 1% but it in fact remained at 0.75%, the error value is one (=> ABS(0.75-1)/0.25). With regard to the current account deficit, measured in billions of pounds, I assign one error point for each £10bn of absolute error (i.e. independent of sign). The outturn is assumed to be minus £90bn, so a group whose January forecast looked for a deficit of £85bn is assigned a value of 0.5 (=> ABS(-85+90)/10).

Having summed all the error points, I subtract the total from 10 to derive a value in the range 0 to 10 (the figure can technically go negative, in which case I floor it at zero). However, the astute amongst you may already have spotted that the units involved in the current account forecast are big, so failure to provide a forecast becomes a problem. Indeed, those not providing a forecast are assumed to have input a value of zero, giving them an error value of 9 points (=> ABS(0+90)/10). This becomes a problem for the likes of HSBC, who finish second in Smith’s rankings but drop way down on the basis of the methodology outlined here, but also Daiwa and Bank of America. This is unduly harsh, and we need a better way to take account of missing forecast entries whilst still putting them at a disadvantage compared to those groups who provided an input. One option is simply to assign an error of two points for each missing forecast, on the basis that a group which fails to provide any input for each of the five categories would score zero (10 - 5 x 2).
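
For those who want to replicate the calculation, here is a minimal sketch of the scoring scheme described above (the function and field names are mine, not Smith’s; the outturns are those assumed in the text):

```python
# Assumed outturns: 2019 GDP growth, Q4 CPI inflation and the unemployment
# rate (all in %), Bank Rate (%) and the current account (£bn).
OUTTURN = {"gdp": 1.3, "cpi": 1.5, "unemp": 3.8, "rate": 0.75, "ca": -90.0}

def error_points(forecast: dict) -> float:
    """Sum error points across the five components; None = no forecast."""
    points = 0.0
    for key, value in forecast.items():
        if value is None:
            points += 2.0                                  # flat penalty
        elif key == "rate":
            points += abs(value - OUTTURN["rate"]) / 0.25  # per 25bp move
        elif key == "ca":
            points += abs(value - OUTTURN["ca"]) / 10.0    # per £10bn
        else:
            points += abs(value - OUTTURN[key])            # in pct points
    return points

def score(forecast: dict) -> float:
    """Subtract error points from 10, flooring the result at zero."""
    return max(0.0, 10.0 - error_points(forecast))

# Hypothetical group: 1.5% growth, 2.0% inflation, 4.0% unemployment,
# Bank Rate at 1.0% and no current account forecast.
example = {"gdp": 1.5, "cpi": 2.0, "unemp": 4.0, "rate": 1.0, "ca": None}
print(round(score(example), 2))  # 10 - (0.2 + 0.5 + 0.2 + 1.0 + 2.0) = 6.1
```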

Having made this correction, we are in a position to look at our revised rankings. Whereas in Smith’s original article the Santander team came out on top, my rankings give the accolade to Barclays Capital, largely because Santander’s current account forecast cost them two points whereas the Barclays team lost only 0.75 points. Honourable mentions also go to Oxford Economics and the EY ITEM team. Who lost out? One of the big losers is Schroders Investment Management, who appear near the top of Smith’s rankings but predicted four interest rate hikes when there were no changes, which cost them four points. This strikes me as fair. Interest rate projections are a key component of any macro forecast, so it is only right that teams are penalised for bigger errors. HSBC slip from second to eighth on the basis that they did not provide a forecast for the current account, which is unfortunate, but if it is included in the assessment criteria we have to take account of this. Had the current account been excluded, Santander and HSBC would have held onto their top two places. The revised rankings also put the pro-Brexit Liverpool Macro Research group at the bottom after a poor performance last year.

As for my own performance, I must confess that I did rather better than in Smith’s original ranking, rising to fourth (last year, this methodology would have put me second). I am not going to claim that there is no element of self-justification in the rankings but I have always thought that there was a better way of using the data to derive an ordinal ranking scale over the interval 0 to 10. But perhaps a more important lesson to come out of the analysis is that for all the criticisms of economic forecasting, those involved in making projections are to be congratulated for putting themselves on the line and being prepared to show their errors in public (equity and FX strategists take note). 

Moreover, despite criticisms from the likes of Eurosceptic MP Steve Baker who once said in the House of Commons that “I’m not able to name an accurate forecast. They are always wrong”, we have done pretty well in the UK over the past 2-3 years. And whilst, like football managers, forecasters are only as good as their last projection, I will wager that growth in the UK next year will continue to underperform relative to pre-referendum rates, whether or not Brexit is “done”.  

Wednesday 24 April 2019

A retrospective on macro modelling

Anyone interested in the recent history of economics, and how it has developed over the years, could do worse than take a look at the work of Beatrice Cherrier (here). One of the papers I particularly enjoyed was a review of how the Fed-MIT-Penn (FMP) model came into being over the period 1964-74, in which she and her co-author, Roger Backhouse, explained the process of constructing one of the first large-scale macro models. It is fascinating to realise that whilst macroeconomic modelling is a relatively easy task these days, thanks to the revolution in computing, many of the solutions to the problems encountered 50-odd years ago were truly revolutionary.

I must admit to a certain nostalgia when reading through the paper because I started my career working in the field of macro modelling and forecasting, and some of the people who broke new ground in the 1960s were still around when I was starting out in the 1980s. Moreover, the kinds of models we used were direct descendants of the Fed-MIT-Penn model. Although they have fallen out of favour in academic circles, structural models of this type are in my view still the best way of assessing whether the way we think the economy should operate is congruent with the data. They provide a richness of detail that is often lacking in the models used for policy forecasting today and in the words of Backhouse and Cherrier, such models were the “big science” projects of their day.

Robert Lucas and Thomas Sargent, both of whom went on to win the Nobel Prize for economics, began in the 1970s to chip away at the intellectual reputation of structural models based on Keynesian national income accounting identities for their “failure to derive behavioral relationships from any consistently posed dynamic optimization problems.” Such models, it was argued, contained no meaningful forward-looking expectations formation processes (true), which accounted for their dismal failure to forecast the economic events of the 1970s and 1980s. In short, structural macro models were a messy compromise between theory and data, and their theoretical underpinnings were insufficiently rigorous for them to be considered useful representations of how the economy worked.

Whilst there is some truth in this criticism, Backhouse and Cherrier remind us that prior to the 1970s “there was no linear relationship running from economic theory to empirical models of specific economies: theory and application developed together.” Keynesian economics was the dominant paradigm, and such theory as there was appeared to be an attempt to build yet more rigour around Keynes’ work of the 1930s rather than take us in any new direction. Moreover, given the complexity of the economy and the fairly rudimentary data available at the time, the models could only ever be simplified versions of reality.

Another of Lucas’s big criticisms of structural models was the application of judgement to override the model’s output via the use of constant adjustments (or add factors). Whilst I accept that overriding the model output offends the purists, their critique presupposes that economic models will outperform human judgement. No such model has yet been constructed. Moreover, the use of add factors reflects a certain way of thinking about modelling the data. If we think of a model as representing a simplified version of reality, it will never capture all the variability inherent in the data (I will concede this point when we can estimate equations, all of which have an R-bar squared close to unity). Therefore, the best we can hope for is that the error averages zero over history – it will never be zero at all times.

Imagine a situation where the last historical period in our dataset shows a residual for a particular equation which is a long way from zero. This raises the question of whether the projected residual in the first period of our forecast should be zero. There is, of course, no correct answer. It all boils down to the methodology employed by the forecaster – their judgement – and the trick to using add factors is to project them into the future in a way that minimises the distortions to the model-generated forecast.
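
A minimal sketch of what that judgement might look like in practice, assuming (purely for illustration) that the forecaster decays the final residual geometrically rather than dropping it straight to zero:

```python
def add_factors(last_residual: float, horizon: int, decay: float = 0.5):
    """Carry the final in-sample residual forward, shrinking it each period.

    The 0.5 decay rate is an assumption for illustration; in practice the
    choice is exactly the judgement call discussed above.
    """
    return [last_residual * decay ** t for t in range(1, horizon + 1)]

# Suppose the final historical residual on, say, a consumption equation is
# +1.2 percentage points. The projected add factors fade towards zero:
print(add_factors(1.2, horizon=4))  # [0.6, 0.3, 0.15, 0.075]
```

Each forecast value then becomes the model output plus the add factor, so the forecaster’s judgement fades smoothly rather than producing a jump in the first forecast period.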

But to quote Backhouse and Cherrier, “the practice Lucas condemned so harshly, became a major reason why businessmen and other clients would pay to access the forecasts provided by the FMP and other macroeconometric models … the hundreds of fudge factors added to large-scale models were precisely what clients were paying for when buying forecasts from these companies.” And just to rub it in, the economist Ray Fair “later noted that analyses of the Wharton and Office of Business Economics (OBE) models showed that ex-ante forecasts from model builders (with fudge or add factors) were more accurate than the ex-post forecasts of the models (with actual data).”

Looking back, many of the criticisms made by Lucas et al. seem unfair. Nonetheless, they had a huge impact on the way in which academic economists thought about the structure of the economy and how they went about modelling it. Many academic economists today complain about the tyranny of microfoundations, under which it is virtually impossible to get a paper published in a leading journal without deriving models of the economy from them. In addition, the rational expectations hypothesis has come to dominate the field of macro modelling, despite there being little evidence that this is how expectations are in fact formed.

As macro modelling has developed over the years, it has raised more questions than answers. One of the more pervasive problems is that, like the models they superseded, modern DSGE models have struggled to explain bubbles and crashes. In addition, their treatment of inflation leaves a lot to be desired (the degree of price stickiness assumed in New Keynesian models is not evident in the real world). Moreover, many of the approaches to modelling adopted in recent years do not allow for a sufficiently flexible trade-off between data consistency and theoretical adequacy. Whilst recognising that there are considerable limitations associated with structural models using the approach pioneered by the FMP, I continue to endorse the view of Ray Fair, who wrote in 1994 that the use of structural models represents “the best way of trying to learn how the macroeconomy works.”