Showing posts with label economic forecasting.

Thursday 21 February 2019

Assessing recession probabilities

Over recent months I have spent far more time than is healthy looking at political issues and I will return to them again. But for a pleasant change I wanted to look at some economics, specifically some quantitative analysis of the outlook for the UK. Brexit considerations aside, the UK faces headwinds from the global economic cycle, which has lost significant momentum in the last couple of months – indeed the slowdown has come upon us more quickly than most had imagined. Within Europe, Italy fulfils the technical definition of recession, having posted two consecutive quarters of negative GDP growth in the second half of last year, whilst Germany has only just avoided the same fate, with flat growth in Q4 following a contraction in Q3 2018. Despite all the concerns regarding the gilets jaunes protests, France did not do badly, with quarterly growth of +0.3% in each of Q3 and Q4. The UK did even better, with a +0.6% rate in Q3 followed by a more modest 0.2% in Q4.

But that is all in the past. What about the outlook for the remainder of 2019? Obviously Brexit remains the elephant in the room as far as the UK is concerned, so it is impossible to be precise about what is likely to happen. In terms of what the evidence tells us so far, we know that business fixed investment fell in each of the four quarters of 2018, and in the second half the pace of decline was particularly rapid, with spending down 2.6% in real terms in Q4 relative to Q2. By end-2018 the volume of activity was thus the lowest since Q3 2015, and the second lowest quarterly reading in four years. It is possible that it will rebound in the absence of a no-deal Brexit, but the latest CBI survey evidence continues to suggest that companies are likely to cut capital investment over the next twelve months.

However, declining business investment is not necessarily a good indicator of what will happen to overall GDP. In 14 of the last 53 years business investment has actually fallen at the same time as GDP has expanded. For four consecutive years between 2001 and 2005, when GDP growth averaged 2.7% per annum, business investment declined at an average annual rate of 2.8%. It is thus instructive to look at the information content of other data to see what it tells us. In doing so, I have relied on the literature on qualitative choice models, which tries to find leading indicators to predict recession probabilities (see this New York Fed paper for insight into how such models have been applied in the context of the US).

In applying the analysis to the UK, the object of the exercise is to find indicators which have decent predictive power. I finally opted for the CBI’s business optimism index and the Conference Board’s leading indicator, which is in turn comprised of eight variables (order books, expected output, consumer confidence, bond prices, equity prices (All Shares), new orders, productivity and corporate profits). Although the leading indicator does contain financial variables, equities have a weight of less than 4% so I added a series for real equity prices (using the consumer spending deflator as the relevant price index) to specifically account for the fact that markets often spot downturns in the economic cycle more quickly than the published economic data.

As noted above this methodology uses models of qualitative choice. Such techniques are used to model outcomes where the dependent variable takes a binary 0,1 value depending on the contingent state. In this case the dependent variable takes the value 1 when the year-on-year rate of real GDP growth falls into negative territory and 0 otherwise. Thus what the model tries to do is assess the likelihood that annual growth will turn negative [1]. Based on data back to 1973, we have 184 quarters of data, and on 25 occasions GDP growth turned negative. Using a basic probit model, I estimated a relationship between lagged values of the three explanatory variables and the binary dependent variable to give an assessment of recession probabilities six months ahead.
As the chart shows, in each of the four recessionary episodes since 1973 in which growth has gone into negative territory, the model predicted this six months ahead of time with an accuracy rate of at least 90%. The model suggests that on the basis of current data there is a probability of around 33% that GDP growth will turn negative by mid-2019. For that to happen by Q3 would require a sluggish Q1 growth rate and modest declines in Q2 and Q3 (i.e. a technical recession). Absent a Brexit-related collapse, this would appear to be a stretch. Thus the model likelihood of around one-third appears reasonable – an unlikely, but not impossible, scenario.
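To make the mechanics concrete, here is a minimal sketch of how a fitted probit model converts indicator readings into a recession probability. The coefficients and input values below are purely illustrative (the estimated parameters are not reported here); only the functional form – the standard normal CDF applied to a linear combination of lagged indicators – reflects the approach described above.

```python
from math import erf, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF: the link function of a probit model."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def recession_probability(optimism: float, leading_ind: float, real_equities: float) -> float:
    """P(negative y/y GDP growth six months ahead) = Phi(b0 + b1*x1 + b2*x2 + b3*x3).

    The coefficients are invented for illustration, not the estimated ones.
    Inputs are assumed standardised (mean 0, std dev 1), so weaker readings
    (more negative values) push the probability up.
    """
    b0, b1, b2, b3 = -1.0, -0.8, -0.6, -0.4
    z = b0 + b1 * optimism + b2 * leading_ind + b3 * real_equities
    return norm_cdf(z)

# Soft but not collapsing indicators -> elevated, sub-50% probability
p = recession_probability(optimism=-0.5, leading_ind=-0.3, real_equities=-0.2)
print(round(p, 2))  # → 0.37
```

The probit link guarantees the output lies between 0 and 1, which is what makes it suitable for modelling a binary recession/no-recession state.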

In contrast to conventional forecasting techniques, we do not attempt to quantify the rate of GDP growth. But the probabilistic approach outlined here gives a sense of the risks surrounding the outlook and thus some steer on how much preparation may be required to offset the worst-case outcomes. Obviously, the analysis is based on the information content in current data and is subject to changes resulting from random shocks. Brexit could thus significantly change the picture, but that is a subject for another day.


[1] Strictly speaking we ought to be looking at quarterly GDP growth to define those periods where there are two consecutive quarterly declines, but there is such a lot of noise in the quarterly data that the equations do not fit the data particularly well.


Sunday 30 December 2018

Forecasting the UK: How did we do?

At this time of year journalists like to look back at how forecasters did over the past twelve months. One of the comparisons I follow most keenly is that compiled by David Smith in the Sunday Times, even though I have some reservations about the methodology (here for the full article if you can get past the paywall). Smith makes the point that those making UK forecasts in 2018 did pretty well. But what has struck me most in recent weeks is not so much that many forecasters did well in their assessment of UK growth in 2018, but how consistent the consensus has been for the past two years regarding the impact of Brexit on the economy.

Each month the Treasury invites contributors to make projections for a wide variety of indicators for the current and following years, with the forecast horizon rolled over in February of each year. The first projections for 2017 were thus published in February 2016. Prior to the EU referendum the consensus was that GDP growth last year would come in at 2.1%. After a brief wobble in the summer and autumn of 2016, when the consensus dipped to 0.7%, forecasters quickly adjusted their forecasts upwards when it became clear that the economy was not falling off a cliff. Nonetheless, growth last year came in at 1.7% - a good 0.4 percentage points below pre-referendum estimates.

The first projections for 2018 were published in February 2017. What is remarkable is that over the last 22 months the projection for 2018 growth has barely changed, and although we only have data for the first 10 months of the year we have enough to be confident that it will come out close to 1.3% for 2018 as a whole. As a rough guide to assess the extent to which forecasts have changed, I used the Treasury’s database of consensus forecasts to measure the consensus projection for growth in December of each year versus the initial estimate for the year in question, made in February of the previous year. For example, in February 2017 the consensus forecast for 2018 UK GDP growth was 1.4%, implying a 0.1 percentage point deviation from (expected) outturn.

We can run this analysis all the way back to 1988. The evidence indicates that the 22-month forecast error for 2018 was the second lowest in 31 years, beaten only by 2015, which turned out to be spot on. Of course, forecasts can bounce around throughout the forecast horizon as short-term indicators give additional information. Indeed, the consensus forecasts for 2015 rose during 2014 and early 2015, only to fall back again to where they first started. One measure of forecast stability is the standard deviation of monthly forecasts, and on this basis projections for 2018 have shown the lowest degree of variation in the 31 years of available data (chart).
 

If nothing else, this rather refutes the claim by Steve Baker MP, who noted earlier this year that he was unable “to name an accurate forecast, and I think they are always wrong.” Where forecasts do tend to look shaky is when the economy is subjected to an unanticipated shock. A good example of this is the recession of 2008-09, which was entirely unforeseen when the first 2009 forecasts were published in February 2008, with the result that the consensus was forced to downgrade sharply. A similar, although less sharp, correction was evident during the recession of 1990-91. Whilst the economy was not subject to a shock in 2018, the forecasts have to be seen against a backdrop of high post-referendum uncertainty – it has not been a “normal” year.

One of the great ironies of the 2018 comparisons, which are based on projections submitted to the Treasury in January 2018, is that Patrick Minford’s Liverpool Macroeconomic Research (LMR) outfit[1] finished at the bottom of Smith’s rankings. You may recall that Minford is one of the leading lights in the pro-Brexit group Economists for Free Trade, which produced its Alternative Brexit Analysis earlier this year, suggesting: “All those who attempt to forecast the UK’s long-term future must bear the burden of having their endeavours frequently proved wrong … This isn’t a matter of misforecasting the GDP or inflation figures by the odd percentage point or two. We all do that all the time. Rather, the record is making some serious errors of judgement about some of the key issues before the country.” There is a delicious irony in this statement: most economists believe that it is the pro-Brexit groups who are making serious errors of judgement about the economic impact of Brexit, and as a consequence their forecasts are way off target.

The level of uncertainty surrounding next year’s forecasts is even greater and I will deal with that another time. Suffice to say, however, that if there is a hard Brexit the consensus forecast for growth of 1.5% is likely to prove an over-estimate. For the record, LMR is forecasting growth of 1.9% next year. That is not impossible, but only if a cliff-edge Brexit is avoided.



[1] Mrs Rosemary Irene Minford has more than 75% of the company’s voting rights which of itself is not an issue. But Mrs Minford’s date of birth is given as May 1943 which happens to be Patrick’s d-o-b. Elsewhere in the filings Mrs Minford’s d-o-b is given as January 1946. I am not sure whether this reflects laxity on the part of Companies House or is another example of Minford playing fast and loose with facts. See this link for more detail https://beta.companieshouse.gov.uk/company/01645294

Sunday 19 August 2018

Dealing with forecast uncertainty

A few months back I produced a piece which looked at the economics of the World Cup. The fun part of the analysis was to look at the expected performance of each team based on a number of factors. Using a statistical model, based on the Poisson distribution which took account of the strength of each team and the quality of the opposition, I came up with a ranking that was pretty close to that of the bookmakers. The bit that everyone focused on, of course, was the tip for the tournament. As it happened, my statistical model made Germany favourites to win, but as we all now know Germany failed to qualify from the group stage.

Of course, the press gleefully highlighted the prediction error – as they did with all those who failed to correctly predict the winner. The only thing was, I didn’t really get it wrong. Although I made Germany the most likely team to win, I only assigned an 18% probability to their chances of tournament success, implying an 82% chance of not winning. In bookmakers’ parlance, I put the odds against Germany winning the tournament at roughly 4-1. Sure enough, Germany did not win the tournament – the most likely outcome predicted by the model.
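The underlying calculation is straightforward: if each team’s goals follow an independent Poisson distribution, the probability of a win is the sum over all score combinations in which one team outscores the other. A minimal sketch of a single match (the expected-goal rates below are made up for illustration, not the ones used in the original model):

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """P(exactly k goals) when goals ~ Poisson(lam)."""
    return lam ** k * exp(-lam) / factorial(k)

def win_probability(lam_a: float, lam_b: float, max_goals: int = 10) -> float:
    """P(team A scores more goals than team B), goals independent Poissons.

    Truncates at max_goals, which is harmless since the tail is negligible.
    """
    return sum(
        poisson_pmf(ga, lam_a) * poisson_pmf(gb, lam_b)
        for ga in range(max_goals + 1)
        for gb in range(ga)  # only score lines with gb < ga
    )

# Hypothetical expected-goal rates: the stronger side wins just over half the time
p_win = win_probability(1.8, 1.1)
```

The draw probability is the sum of the equal-score terms, so win, draw and loss probabilities sum to one (up to the truncation), which is how a full tournament simulation can be built up match by match.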

The idea that we apply probabilistic assessments to outcomes strikes me as a sensible way to think about an inherently unknowable future and it is a point I have made on numerous occasions previously (here, for example). At a time when macroeconomics has come in for considerable criticism for its failure to accurately forecast future events, understanding the process of how forecasts are made is worthy of further investigation.

Critical to understanding the nature of economic forecasts is that they are heavily conditional. In fact everything in economics depends on everything else, so if some of the conditioning factors change the forecast is likely to be blown off course. Consider the case of forecasting how a central bank might set interest rates on the basis that it follows an inflation targeting regime. We assume that inflation is a function of the amount of spare capacity in the economy – the less slack there is, the more competition for resources, which then bids up their price. The choice of model itself is a major conditioning factor. If central banks use different metrics in making their decisions, this raises the chance that the forecast will be wrong.

But let us pursue our assumption a bit further: In order to determine how much slack there is in the economy, we have to understand trends on both the demand and supply side which introduces additional conditioning factors. On the demand side we need to know what is the likely path of driving forces such as incomes, taxes (which influence disposable incomes and labour supply decisions) and wealth (which can be used to finance consumption and which also impacts on desired saving levels). On the supply side, we need to know something about changes in the capital stock, which requires assumptions for investment and the rate of capital depreciation; the size of the labour force and the path of multifactor productivity. It should be pretty obvious by now that in a short space of time, we have identified a whole chain of events which could impact at any point to change our assessment of the amount of spare capacity and thus the potential inflationary threat.

It is pretty unlikely that we are going to predict all the inputs correctly, with the result that there is a considerable margin of uncertainty associated with our projections. When the economy is subject to an exogenous shock, such as in the wake of the Lehman’s bust or Brexit, the degree of uncertainty is significantly raised. Consider the UK in the wake of the Brexit vote: There was no effective government following David Cameron’s resignation and it was totally unclear whether the UK would invoke Article 50 in June 2016, as some had advocated. In this vacuum of uncertainty, large forecasting errors were made in the immediate post-referendum environment.

But contrary to the statements made by a number of pro-Brexit politicians about how the doomsayers were wrong before the referendum, much of what the forecasting profession said has stood up to scrutiny – notably that the pound would collapse, inflation would rise and the economy would grow more slowly, thus suffering a loss of output relative to the case of no vote for Brexit. Obviously we don’t know what will happen from here because we do not yet know the nature of the UK’s future trading arrangements with the EU. One way to proceed is to outline a number of scenarios and assess what might happen to growth in each case. If we assign a probability to each scenario then our best guess for output growth is the probability-weighted average of the outcomes.
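As a sketch of the probability-weighted average, take three hypothetical scenarios (the probabilities and growth rates are invented purely for illustration):

```python
# Hypothetical Brexit scenarios: (probability, GDP growth in %)
scenarios = {
    "orderly deal": (0.50, 1.5),
    "no deal":      (0.30, -0.5),
    "remain/delay": (0.20, 2.0),
}

# Probabilities must sum to one for the weighting to make sense
assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9

# Best guess = probability-weighted average of the scenario outcomes
expected_growth = sum(p * g for p, g in scenarios.values())
print(round(expected_growth, 2))  # → 1.0
```

Note that the point estimate of 1.0% corresponds to none of the individual scenarios, which is one reason the single central number can be a poor summary of the outlook.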

But how useful is the single point estimate for annual growth over the next five years in the case of Brexit? The answer, I suspect, is not much. We are more interested in the cost of our forecast being wrong (i.e. whether we are too optimistic or pessimistic relative to the outturn). We thus should focus on the loss function, which measures the cost of being wrong. This is not something that gets the attention it perhaps deserves because it can be a costly and time-consuming exercise. Instead we generally define a range of forecast extremes which encompass a median (or modal) forecast. The extent to which this forecast lies in the upper or lower half of the range determines the extent to which forecast risks are asymmetric, giving us some idea of the costs of being too optimistic versus being overly pessimistic. The Bank of England has long been an advocate of this approach (see chart, which assesses the range of outcomes for the August 2018 inflation forecast).
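One way to operationalise an asymmetric loss function is the “linex” form, which penalises errors of one sign more heavily than the other. A small sketch, with error defined as forecast minus outturn and an illustrative asymmetry parameter:

```python
from math import exp

def linex_loss(error: float, a: float = 1.0) -> float:
    """Linex loss: exp(a*e) - a*e - 1.

    For a > 0, positive errors (an over-optimistic forecast) are penalised
    more heavily than negative ones of the same size; a < 0 flips this.
    """
    return exp(a * error) - a * error - 1.0

# The same 1pp miss carries very different costs depending on its sign:
print(linex_loss(1.0))   # forecast too high, ≈ 0.72
print(linex_loss(-1.0))  # forecast too low,  ≈ 0.37
```

Under such a loss function the optimal forecast is no longer the central estimate: it is shaded towards the side where being wrong is cheaper, which is precisely the logic behind assessing whether risks around a projection are skewed.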
  
My efforts at forecasting future economic events are guided by Niels Bohr’s quip that “prediction is very difficult, especially if it's about the future.” Experience has taught me that we should treat our central case economic predictions as the most likely of a range of possibilities, and nothing more. After all, there is no certainty. When Germany can underperform so spectacularly on the international football stage, even the most confident of forecasters should take note.

Sunday 15 April 2018

Fair weather forecasting

The economics profession has had to endure some bad press over the last decade in the wake of the global financial crisis, which we failed to foresee, and, in the UK case, the dire predictions in the aftermath of the Brexit referendum that were not borne out. But both examples rather miss the point. Economics is not a predictive discipline – as I have noted countless times before – so criticising economists for failing to predict macroeconomic outcomes is a bit like criticising doctors for failing to predict when people will fall ill.

Another of the narratives which has become commonplace in recent years is the notion that economists predict with certainty. This fallacy was repeated again recently in a Bloomberg article by Mark Buchanan entitled Economists Should Stop Being So Certain. In fact, nothing could be further from the truth. The only thing most self-respecting economic forecasters know for certain is that their base case is more likely to be wrong than right. The Bank of England has for many years presented its economic growth and inflation forecasts in the form of a fan chart in which the bands get wider over time, reflecting the fact that the further ahead we forecast, the greater the forecast uncertainty (chart). Many other institutions now follow a similar approach in which forecasts are seen as probabilistic outcomes rather than as a single number.
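The widening bands of a fan chart can be mimicked with a simple rule of thumb: if forecast errors at successive steps are independent, the standard deviation of the h-step-ahead forecast grows with the square root of h. This is a stylised sketch with invented numbers, not the Bank of England’s actual (two-piece normal) methodology.

```python
def fan_bands(central_path, sd_one_step, z=1.645):
    """90% fan-chart bands that widen with the forecast horizon.

    Assumes i.i.d. shocks, so the h-step forecast sd is sd_one_step * sqrt(h)
    -- an illustrative simplification.
    """
    bands = []
    for h, y in enumerate(central_path, start=1):
        sd = sd_one_step * h ** 0.5
        bands.append((y - z * sd, y, y + z * sd))
    return bands

# Hypothetical central growth path and one-step error sd (percentage points)
bands = fan_bands([1.5, 1.6, 1.7, 1.8], sd_one_step=0.4)
for lo, mid, hi in bands:
    print(f"{lo:.2f}  {mid:.2f}  {hi:.2f}")
```

Each successive band is wider than the last, which is exactly the visual message of the fan chart: the central path is only the spine of an expanding range of possibilities.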

Indeed, if there is a problem with certainty in economic forecasting, it is that media outlets tend to ascribe it to economic projections. It is, after all, a difficult story to sell to readers that economists assign a 65% probability to a GDP growth forecast of 2%. As a consequence the default option is to reference the central case.

One of the interesting aspects of Buchanan’s article, however, was the reference to the way in which the science of meteorology has tackled the problem of forecast uncertainty. This was based on a fascinating paper by Tim Palmer, a meteorologist, looking back at 25 years of ensemble modelling. The thrust of Palmer’s paper (here) is that uncertainty is an inherent part of forecasting, and that an ensemble approach that uses different sets of initial conditions in climatic modelling has been shown to reduce the inaccuracy of weather forecasts. In essence, inherent uncertainty is viewed as a feature that can be used to improve forecast accuracy and not as something to be avoided.

In fairness, economics has already made some progress on this front in recent years. We can think of forecast error as deriving from two main sources: parameter uncertainty and model uncertainty. Parameter uncertainty derives from the fact that although we may be using the correct model, it may be misspecified or we may have conditioned it on the wrong assumptions. We can try to account for this using stochastic simulation methods[1], which subject the model to a series of shocks and give us a range of possible outcomes which can be represented in the form of a fan chart. Model uncertainty raises the possibility that our forecast model may not be the right one to use in a given situation and that a different one may be more appropriate. Thus the academic literature in recent years has focused on the question of combining forecasts from different models and weighting the outcomes in a way which provides useful information[2], although this has not yet found its way into the mainstream.
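A crude illustration of forecast combination: weight each model’s forecast by the inverse of its past mean squared error, so better-performing models count for more. Proper Bayesian Model Averaging weights by posterior model probabilities; this is just a simple stand-in to show the idea, and all the numbers are invented.

```python
def combine_forecasts(forecasts, past_mse):
    """Combine model forecasts using inverse-MSE weights.

    A model with a smaller historical mean squared error receives a larger
    weight; weights are normalised to sum to one.
    """
    inv = [1.0 / e for e in past_mse]
    total = sum(inv)
    weights = [w / total for w in inv]
    combined = sum(w * f for w, f in zip(weights, forecasts))
    return combined, weights

# Two hypothetical models: the first has been three times more accurate
combined, w = combine_forecasts(forecasts=[1.2, 1.8], past_mse=[0.25, 0.75])
```

With these inputs the more accurate model gets a weight of 0.75, pulling the combined forecast towards its projection; the combination typically beats either model alone when neither is reliably correct.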

Therefore in response to Buchanan’s conclusion that “an emphasis on uncertainty could help economists regain the public’s trust” I can only say that we are working on it. But as Palmer pointed out, “probabilistic forecasts are only going to be useful for decision making if the forecast probabilities are reliable – that is to say, if forecast probability is well calibrated with observed frequency.” Unfortunately we will need a lot more data before we can determine whether changes to economic forecasting methodology have produced an improvement in forecast accuracy, and so far the jury is still out. Weather forecasting at least obeys physical laws; economics does not. But weather systems and the macroeconomy share the similarity that they are complex processes which can be sensitive to conditioning assumptions. Even if we cannot use the same techniques, there is certainly something to learn from the methodological approach adopted in meteorology.

Economics suffers from the further disadvantage that much of its analysis cuts into the political sphere and there are many high profile politicians who use forecast failures to dismiss outcomes that do not accord with their prior views. One such is the MP Steve Baker, a prominent Eurosceptic, who earlier this year said in parliament that economic forecasts are “always wrong.” It is worth once again quoting Palmer who noted that if predictions turn out to be false, “then at best it means that there are aspects of our science we do not understand and at worst it provides ammunition to those that would see [economics] as empirical, inexact and unscientific. But we wouldn’t say to a high-energy physicist that her subject was not an exact science because the fundamental law describing the evolution of quantum fields, the Schrödinger equation, was probabilistic and therefore not exact.”

As Carveth Read, the philosopher and logician noted, “It is better to be vaguely right than exactly wrong.” That is a pretty good goal towards which economic forecasting should strive.







[2] Bayesian Model Averaging is one of the favoured methods. See this paper by Mark Steel of Warwick University for an overview

Sunday 25 February 2018

Beyond the realms ...


The economic analysis that underpinned the Remain campaign ahead of the EU referendum was widely dismissed as the tool of Project Fear. It is not hard to see why. After all, the Treasury’s analysis of the short-term costs of Brexit sat rather uncomfortably, even at the time, and as events transpired it now looks hopelessly wrong. Indeed, the Treasury suggested that “the economy would fall into recession with four quarters of negative growth. After two years, GDP would be around 3.6% lower in the shock scenario …. the fall in the value of the pound would be around 12%, and unemployment would increase by around 500,000.” As we now know, the UK did not fall into recession and unemployment has fallen, rather than risen. But the Treasury was broadly right about the fall in sterling, and its prediction that “the exchange-rate-driven increase in the price of imports would lead to a material increase in prices, with the CPI inflation rate higher by 2.3 percentage points after a year.”

But the fact that large elements of the analysis used by Remainers were so far off the mark has allowed the Brexiteers to claim they were right all along and that leaving the EU will not be the economic disaster that is claimed. Indeed, the Economists for Free Trade Group (EfFT), led by Patrick Minford, uses the Treasury’s analysis as a counterpoint to suggest that there will in fact be significant benefits to leaving the EU, of between 2% and 4% of GDP relative to the baseline of remaining. Their report issued last year was dubious enough (see here for my take on it) but their latest report, released this month, appears to be an even more desperate effort.

It begins with the premise that consensus economic forecasts cannot be trusted, and argues that the economics profession has been wrong on many of the big issues over the years (the Thatcher reforms; leaving the ERM in 1992; the failure to join the euro). That is a pretty bad place from which to start, because it amounts to claiming that since the consensus was wrong on all the big issues, we should trust Minford and his pals instead. To quote the report, “Fortunately, a number of leading economists with considerable expertise and good forecasting track records suggest a very different outlook.” We’re so glad you could help out!

But it is a disingenuous claim. Even now there are many who would argue that the Thatcher reforms did not produce the improvements that Minford et al claimed – certainly the costs of those policies were high and their after-effects linger today. It is certainly wrong to suggest that the economics profession in 1992 “argued that if we left [the ERM] disaster would ensue: inflation would soar and this would necessitate higher interest rates that would lead to an even deeper recession.” (I know because I was there). Similarly, whilst there may have been some prominent commentators suggesting that not joining the single currency was a bad idea, the majority view was against joining. Nor do EfFT give the Treasury credit for the 2003 report which argued against euro membership.

One of the tactics of irritant groups like EfFT is to set up straw man arguments which they can easily dismiss, thus bolstering their case. “Being proved so wrong about the immediate impact of the vote to leave has not deterred the economics establishment from continuing to predict disaster in the long term.” The “disaster” in this case is the likelihood that incomes will grow more slowly in the absence of EU membership. Nobody is now seriously arguing that the UK economy will hit the rocks – merely that it will grow more slowly. Not what I would call a “disaster” – more a relative disadvantage.

Aside from the bluff and bluster, it is once we start digging into the details of the EfFT analysis that the weaknesses really emerge. As I mentioned in my previous post, EfFT assume no role whatsoever for gravity. Their claim that gravity effects have “been totally bypassed by the progress of technology” is simply not true. It is less important than it once was, admittedly, but to dismiss it because it does not fit with the story you are trying to tell is intellectually dishonest.

Even more significant is that the empirical work is based on a computable general equilibrium (CGE) model. Such models are great in theory but suffer from so many practical drawbacks that their results are often little better than guesswork. In brief, a CGE model is based on an input-output matrix and assigns a significant role to prices by assessing how much demand, supply and prices have to change following an economic shock in order to restore equilibrium. Amongst their many disadvantages is that it is difficult to determine the functional form needed to model at the disaggregated level required by a CGE model. Moreover, because they are not estimated using standard statistical methods, but are instead calibrated (i.e. the parameters of the model are assigned using judgement), we have little idea whether the structural form of the model is consistent with real-world data. This makes them a shaky basis for policy analysis (and, to be fair, the Treasury’s own regional estimates of Brexit costs, which are based on similar models, are subject to similar criticisms).

So the models are somewhat dodgy, but wait until you hear how Minford and his colleagues bend them to give the number they first thought of. EfFT start from the premise that the UK will benefit from unilateral tariff abolition. They cite the work by Ciuriak and Xiao, who point to a 0.8% gain in GDP on the basis of unilateral tariff abolition. But theirs is a relatively cautious piece of work which does not assume that the UK’s trading partners will necessarily reciprocate. EfFT blithely adopt reciprocation as a matter of fact. Furthermore, they argue that Ciuriak and Xiao’s analysis suggests that “the combination of tariffs and non-trade-barriers eliminated is just 4 per cent ...” but if we “eliminate non-tariff barriers set up by the EU against the world … Ciuriak’s and Xiao’s results can be multiplied five times.” In other words, a cautious technical result is magnified by a factor of five.

But think about what that statement means. EfFT assume that the EU will reduce its non-tariff barriers. But why should it? It is one thing to talk about tariffs but quite another to quantify the impact of non-tariff barriers, which are there to protect EU firms in their home market. This is a race-to-the-bottom assumption which appears to suggest that many of the standards we employ today will be swept away. And it is also worth heeding Ciuriak and Xiao’s conclusion that “in a long-term perspective, if the world (including the EU) moves to a similar free trade equilibrium, the first mover advantages to the UK of full liberalization against the rest of the world would eventually be eroded.” In other words, if everybody adopts the policy, the UK will be out-competed in a number of key markets.

We could go on but it might be wise just to draw a veil over the nonsense. Suffice to say, with the same model as that applied by EfFT I could come up with a different set of results. But the reason that the economics profession does not buy these results is simply because EfFT assumes that EU membership today is all about costs with no compensating benefits, and if we leave then the costs will simply fall away. Most of us do not see it that way. There are costs, but they are offset by the benefits and leaving the EU will reverse that situation. It is, of course, possible that the majority view will be wrong but I wouldn’t put money on it – even to hedge my bets.

Saturday 3 February 2018

In defence of economic forecasts

It is becoming rather tiresome to hear the constant carping about the value of economic forecasts, particularly when the critics are responding to forecasts that do not accord with their pre-conceived views. Ed Conway weighed into the debate in The Times yesterday (here if you can get past the paywall), with his claim that “the job of a city economist is not to make accurate forecasts; they’re basically there to market their firms.” As a city economist who has spent a lot of time working with various models generating forecasts to meet client demand, I can say with total confidence that Conway is dead wrong. It’s a bit like saying that we should ignore the views of most economic columnists whose raison d’être is to offer clickbait for the masses.

But the most annoying comments of the week came from Eurosceptic MP Steve Baker, who proceeded to denigrate the Treasury’s analysis of the negative economic consequences of Brexit. He noted in the House of Commons that “I’m not able to name an accurate forecast, and I think they are always wrong.” The second most annoying, and probably more serious, set of allegations came from Jacob Rees-Mogg, who accused Charles Grant, the director of the Centre for European Reform, of suggesting that Treasury officials “had deliberately developed a model to show that all options other than staying in the customs union were bad, and that officials intended to use this to influence policy” (here). Grant denied any such implication and Baker was forced to apologise for providing support to what is an outrageous slur on the impartiality of the civil service.

What all this does illustrate, however, is that factual analysis is being drowned out by an agenda in which ideology trumps evidence. With regard to Baker’s claim that forecasts are “always wrong”, it is worth digging a little deeper. No economic forecaster will be 100% right 100% of the time – we are trying to predict the unknowable – but there are acceptable margins of error. HM Treasury surveys a large number of forecasters in its monthly comparison of economic projections, which is a pretty good place to gather some evidence. Our starting point is the one-year-ahead forecast for UK GDP growth, using the January estimate for the year in question (at which point we do not have the full numbers for the previous year).

I took the data over the past five years, for which 34 institutions have generated forecasts in each year. The average error over the full five-year period, using the current GDP vintage as a benchmark, is 0.63 percentage points. This is not fantastic, though if we strip out 2013 the figure falls to 0.51 pp. For the record – and probably more by luck than judgement – the errors in my own forecasts were 0.48 pp over the full five-year sample and 0.33 pp over 2014-17, so slightly better than the average. But there is a major caveat. GDP data tend to be heavily revised, owing to changes in methodology and the addition of data which were not available initially. Thus, the data vintage on which the forecasts were prepared turns out to be rather different from the latest version. Accordingly, if we measure the GDP projections against the initial growth estimates, the margins of error are smaller (0.5 pp over the period 2013-17 and 0.4 pp over 2014-17).
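As a rough sketch of the error metric used above – the mean absolute deviation of one-year-ahead growth forecasts from outturns – the calculation might look like this. The figures below are invented purely for illustration; they are not the actual HM Treasury survey data:

```python
# Illustrative sketch of the forecast-error metric described above.
# All numbers are made up for demonstration purposes.

def mean_abs_error(forecasts, outturns):
    """Average absolute forecast error, in percentage points."""
    errs = [abs(f - o) for f, o in zip(forecasts, outturns)]
    return sum(errs) / len(errs)

# Hypothetical one-year-ahead GDP growth forecasts vs outturns (% y/y)
forecasts = {2013: 0.9, 2014: 2.4, 2015: 2.5, 2016: 2.2, 2017: 1.4}
outturns  = {2013: 2.0, 2014: 2.9, 2015: 2.3, 2016: 1.9, 2017: 1.8}

years = sorted(forecasts)
full = mean_abs_error([forecasts[y] for y in years],
                      [outturns[y] for y in years])
ex13 = mean_abs_error([forecasts[y] for y in years if y != 2013],
                      [outturns[y] for y in years if y != 2013])
print(f"MAE 2013-17: {full:.2f} pp, MAE 2014-17: {ex13:.2f} pp")
```

The same routine run against the initial growth estimates rather than the latest vintage would generally produce the smaller errors noted in the text.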


Without wishing to blow my own trumpet (but what the hell, no-one else will), my own GDP forecasts proved to be the most accurate over the last five years when measured against the initial growth estimate, with an average error of just 0.16 pp over the past four years. More seriously, perhaps, the major international bodies such as the IMF and European Commission tend to score relatively poorly, lying in the bottom third of the rankings. Yet these are the very institutions which tend to grab the headlines whenever they release new forecasts. A bit more discernment on the part of the financial journalist community might not go amiss when it comes to assessing forecasting records.

All forecasters know that they are taking a leap into the dark when making economic projections, and I have always subscribed to the view that the only thing we know with certainty is that any given economic forecast is likely to be wrong. But suppose we took the Baker view that forecasts are a waste of time because they are always wrong. The logical conclusion is that we simply should not bother. So what, then, is the basis for planning, whether it be governments or companies looking to set budgets for the year ahead? There would, after all, be no consensus benchmark against which to make an assessment. Quite clearly, there is a need for some basis for planning, so if economic forecasts did not exist it is almost certain that a market would be created to provide them.

As for the Treasury forecasts regarding the impact of Brexit on the UK (here), it may indeed turn out that the economy grows more slowly than in the years preceding the referendum, in which case the view will be vindicated. There is, of course, a chance they will be wrong. But right now, we do not know for sure (although the UK did underperform in 2017). Accordingly, the likes of Baker and Rees-Mogg have no basis for suggesting that the forecasts are wrong, still less that the Treasury is fiddling the figures. The fact that they have been forced to come out swinging suggests to me that they are worried the forecasts are likely to prove correct.

Sunday 26 November 2017

The dawning realities of Brexit

If nothing else, the exceptionally weak growth projections released by the OBR on Wednesday should make people realise the scale of the economic challenges the UK will face in the wake of Brexit. It really ought to be the sort of wake-up call that forces people to ask themselves whether leaving the EU is such a good idea after all.
In fairness, the huge downward revision to growth – real GDP is now expected to grow by an average of 1.4% per annum over the next five years – was triggered by a downward revision to productivity growth, which is not wholly related to Brexit. But the OBR did highlight that Brexit-related uncertainty was likely to weigh on business investment and assumed that the UK’s “share of EU market in global markets would also fall.” The OBR, wittingly or not, provided a scathing indictment of the government’s failure to plan for Brexit: “We asked the Government if it wished to provide any additional information on its current policies in respect of Brexit that would be relevant to our forecasts. It directed us to the Prime Minister’s Florence speech from September and a white paper on trade policy published in February. We were not provided with any information that is not in the public domain. As in our previous two forecasts, we have not therefore been able to forecast on the basis of fully specified Government policy in relation to the UK’s exit from the EU.”

In the course of this week I have become engaged in a couple of debates about the economic wisdom of Brexit. In the first instance, my interlocutor asked me why I hold the view that the UK needs a trade deal with the EU and why instead we should not simply forge trade deals with the US, China and India. I pointed out that the EU is the UK’s largest single trading partner and that countries located in close proximity tend to trade heavily, which is why the UK exports more to Ireland than to China and India combined. His follow-up question was why the rest of the EU would put up trade barriers against the UK. My response was that WTO rules will be the default position if no agreement is reached, and that because the member states which export heavily to the UK cannot strike bilateral deals of their own, we need a deal with the EU itself.

The second issue regarded the usefulness of economic forecasts with regard to Brexit, which was particularly appropriate in the wake of the OBR’s assessment. I pointed out that in broad terms the economy is now where most forecasters assumed it would be based on their pre-June 2016 Brexit scenarios. A-ha, said my correspondent, but didn’t most forecasters assume the end of the world in the wake of the referendum? I have to admit most of us were overly gloomy in our post-referendum forecasts. But as I have noted before, these were made against a backdrop of huge uncertainty: (i) no effective government; (ii) the assumption that Article 50 would be triggered immediately (as David Cameron had indicated); and (iii) some dire survey evidence in the immediate aftermath of the referendum. If you look at what most forecasters (including the Treasury’s longer-term analysis) indicated in their pre-June projections, it was generally assumed that the UK would grow by between 0.5% and 1% more slowly than otherwise – a view which is now being borne out.

Admittedly, the Treasury did not help matters with the dire warnings contained in its publication on the short-term implications of Brexit. But most economists dismissed these as being over the top, and in retrospect they should be viewed as part of the much-derided Project Fear. More generally, to take a medical analogy: if an overly cautious doctor quarantines a suspected infectious patient, do we then cease to listen to all medical advice? On the whole, I am comfortable with the consensus economic view that, at least over the next couple of years, the UK economy will underperform relative to its developed-world peers.

Regarding questions on Brexit, I do welcome the fact that people challenge my views, and even if we are unable to persuade each other of our respective cases, it is important to have a civil debate about the consequences. I am not sure of the extent to which buyer’s remorse has set in with regard to the Brexit vote. Some people may have changed their minds, though I suspect the numbers are small. But 17 months after the vote and with 16 months until the UK leaves the EU, the understanding of the implications and the degree of preparedness for what follows are shocking. Anger is currently directed at the media (even the pro-Brexit newspapers are focusing on foreign disinformation campaigns), whilst Leavers and Remainers point the finger at each other (those Leavers arguing that if we all got behind Brexit we could make a success of it really ought to engage their brains).

But the real culprits are in government. It was David Cameron who took the gamble and failed. It was the Conservative government which adopted a winner-take-all strategy in autumn 2016, and polarised the country rather than seeking to heal divisions. It was Theresa May herself who called an unnecessary election which weakened the government, allowing ministers to intensify their in-fighting. Not that the Labour Party are much better. A large chunk of those who voted for Corbyn in June as a protest against the pro-Brexit Conservatives seem not to understand that he has no intention of halting Brexit. More damningly, the government has no plan for the future, as the OBR told us last week. Unless the EU takes pity on the UK and agrees in December to start trade talks, the outlook is starting to look pretty dire.

Wednesday 8 November 2017

Breaking point

In the context of the debate on productivity, I was recently asked by a non-economist why economists tend to assume mean reversion when it is clear there has been a structural break in the series. It is a very good question and goes to the heart of a major problem in the business of forecasting. Most of our forecasting models are based on linear (or log-linearised) relationships whose forecast performance is conditioned by historical experience. Therefore, if a relationship has tended to mean-revert in the past, the model will assume it does so in future. As a consequence, we cannot easily deal with structural breaks in economic relationships. To put it another way, even if the data deviate from past performance for a prolonged period of time, it is dangerous to assume that there has been a permanent shift, since it is entirely possible that the series will soon snap back.
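To see why a linear model effectively “assumes” mean reversion, consider a stylised AR(1) process (the parameters below are purely illustrative, not estimated from any real series). Wherever the last observation sits, the model forecasts a geometric decay back towards its unconditional mean, so a permanent shift in that mean will be forecast away as a temporary blip:

```python
# Sketch: why a linear model assumes mean reversion. An AR(1),
#   y_t = c + phi * y_{t-1},  |phi| < 1,
# always forecasts a decay back to the unconditional mean c / (1 - phi),
# however far the last observation has drifted. Parameters are illustrative.

def ar1_forecast(last_value, c=0.5, phi=0.75, horizon=12):
    """Iterate the AR(1) forward from the last observed value."""
    path, y = [], last_value
    for _ in range(horizon):
        y = c + phi * y
        path.append(y)
    return path

mean = 0.5 / (1 - 0.75)              # unconditional mean = 2.0
path = ar1_forecast(last_value=0.0)  # start far below the mean

# Each step closes a fraction (1 - phi) of the remaining gap to the mean,
# so the forecast snaps back even if the true mean has permanently shifted.
print(path[0], path[-1], mean)
```

If the series has in fact undergone a structural break to a lower mean, every forecast from this model will be biased upwards until the model itself is re-specified.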

It takes a lot to convince economists that there has been a change in trends. In a classic 1961 paper[1], Lawrence Klein and Richard Kosobud noted that much of macroeconomics is based on five “great ratios” in which the relationships between variables can be assumed to be stable. These five ratios are: (i) the savings-income ratio; (ii) the capital-output ratio; (iii) labour’s share of income; (iv) the income velocity of circulation; and (v) the capital-labour ratio. But what might have been true almost six decades ago is no longer the case. Labour’s share of income has fallen sharply in the Anglo-Saxon economies over recent years, whilst recent experience has shown big shifts in the capital-output ratio, which have contributed to the productivity puzzle. However, these old ideas die hard, and whilst we try to ensure that we do not stick to outdated precepts, relationships often change so imperceptibly slowly that it is difficult to be sure whether we are witnessing a cyclical shift or a secular trend.

The problem is well known in economics and has been much studied. In my introductory econometrics classes, one of the first statistical tests to which I was introduced was the Chow test for structural breaks, which assesses whether the coefficients derived from regressions on two different parts of a dataset are equal. Although it is pretty simple stuff, a look at UK year-on-year employment and GDP growth over the period 1980 to 2016 nicely highlights the nature of the problem. A quick glance certainly suggests that there has been a change in the trend relationship (chart). But it is really not that simple.

Suppose that in 2013 a policymaker began to be concerned about the nature of the relationship and suspected there was a break in the historical trend. In early 2013, the intelligent policymaker has quarterly data through to end-2012 and decides to estimate a simple econometric equation using the EViews software over the period 1980 to 2012 (132 observations) in order to test this hypothesis. But conducting a Chow test for a structural break at the end of 2008, 2009 and 2010 does not yield any evidence of a break. The data-driven policymaker would thus conclude that this is a problem to keep under observation, but that it is not yet time to press the panic button. A year later, in early 2014, the policymaker runs the analysis again with an additional four quarters of data (136 observations) but still finds no evidence of a structural break in the relationship. By early 2015, running the analysis on data through end-2014 yields some evidence of a break at the end of 2009. And by the time we get to early 2016 – three years after they first suspected a problem – the statistical evidence is unambiguous: there has been a change in the relationship between GDP growth and employment.
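The mechanics of the test the policymaker is running can be sketched in a few lines (here in Python on simulated data, rather than the EViews setup and actual UK series used in the text). We regress on the full sample and on the two sub-samples either side of a candidate break date, then compare the residual sums of squares with an F-test:

```python
# Minimal Chow-test sketch on simulated data. The true relationship here
# is constructed with a slope change at observation 100, so the test
# should detect a break. All parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def rss(y, X):
    """Residual sum of squares from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

def chow_test(y, x, break_idx):
    """F-statistic and p-value for a single structural break at break_idx."""
    X = np.column_stack([np.ones_like(x), x])
    k = X.shape[1]                       # parameters per regime
    rss_pooled = rss(y, X)
    rss_split = (rss(y[:break_idx], X[:break_idx])
                 + rss(y[break_idx:], X[break_idx:]))
    n = len(y)
    F = ((rss_pooled - rss_split) / k) / (rss_split / (n - 2 * k))
    p = 1.0 - stats.f.cdf(F, k, n - 2 * k)
    return F, p

# Simulate a relationship whose slope halves at observation 100
x = rng.normal(2.0, 1.0, 160)
y = np.where(np.arange(160) < 100, 0.2 + 0.8 * x, 0.2 + 0.4 * x)
y = y + rng.normal(0.0, 0.1, 160)

F, p = chow_test(y, x, break_idx=100)
print(f"F = {F:.1f}, p = {p:.4f}")
```

With a break this large relative to the noise, the test rejects parameter stability easily; the policymaker’s problem in the text is precisely that real-world breaks are small relative to the noise, so the evidence accumulates only slowly.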

The first point to make clear here is that a cautious policymaker has to wait for evidence that there has been a productivity shift – even if they suspected that something was amiss, they cannot make inferences on a relatively small amount of data because they do not know whether the shift is temporary or permanent. But even when the statistical evidence is clear, we cannot simply jump to conclusions – the blip in the productivity figures could vanish just as suddenly as it appeared. For example, if labour shortages begin to manifest themselves, companies can only increase output by increasing capital investment rather than relying on labour. This will mean GDP grows more rapidly than employment in which case labour productivity figures would begin to look much stronger.

As noted above, this is a simplistic exercise and there are better statistical ways to assess whether there are breaks in a data series, but it does highlight an important point: In a world where we require empirical evidence before making policy changes, we sometimes have to wait a long time to build up sufficient information before we are even half sure that we are doing the right thing. Whilst the BoE and OBR are criticised for failing to foresee these trend shifts in economic performance, anyone who has ever tried modelling and forecasting when trends are changing will tell you that it is much harder than it looks.



[1] Klein, L.R. and R. F. Kosobud (1961) ‘Some Econometrics of Growth: Great Ratios of Economics,’ Quarterly Journal of Economics (75), 173-198

Sunday 15 January 2017

Crisis? What crisis?

David Miles, former Bank of England MPC member and now professor of economics at Imperial College, this week issued a clear rebuttal (here) of Andy Haldane’s charge that economics is in “crisis” (here). Miles makes the point very bluntly: “If existing economic theory told us that such events should be predictable, then maybe there is a crisis. But it is obvious that economics says no such thing … In fact, to the extent that economics says anything about the timing of such events it is that they are virtually impossible to predict; impossible to predict, but most definitely not impossible events.” 

He then goes on to point out that basic economics, in which organisations act in their own best interests, explained perfectly well why the financial crisis happened. In a world in which banks knew that they would face only a limited liability for the losses they created, and where the tax system favoured debt over equity, they had every incentive to increase their leverage. He also reminded us that there is a whole literature on market failure and that economists have won the highest academic honours for “exploring the ways in which free market outcomes can sometimes generate poor results.”

Indeed, when you think about it, the record of economists in predicting economic shocks is no worse than that of seismologists in predicting earthquakes. There are various warning indicators which signal that an earthquake may be imminent, but scientists cannot pinpoint accurately when one will happen, and certainly not months or years in advance. Or, as Miles put it, “any criticism of ‘economics’ that rests on its failure to predict the crisis is no more plausible than the idea that statistical theory needs to be rewritten because mathematicians have a poor record at predicting winning lottery ticket numbers.”

As I have noted on numerous previous occasions, economics is not a predictive discipline, so we do the best we can to meet the demand for predictions of future economic activity. And unfortunately, despite the best efforts of former UK Chancellor Gordon Brown to abolish boom and bust, we are faced with the problem of simultaneously trying to predict the amplitude and frequency of an economic cycle which is not regular. It can shift abruptly, which leads to structural breaks in our model-based forecasts. If there is a “crisis” in economics it is that too much mainstream policy analysis focuses on the central-case outcome, reducing assessment to a binary choice as to whether the forecast in question was attained. Is a forecast of 2% GDP growth in any given year “wrong” if the outturn is 2.2%? It is pointless to strive for that sort of precision, and it raises the question of how far away we are allowed to be from the central case before our prediction is deemed “wrong”.

In fairness, the likes of the BoE have long maintained that it is the distribution of risks around the central case which is important (and many others are now catching on). By defining the probability distribution around the central case we have some idea of the plausible range of outcomes. But we have to accept that economics cannot predict the point at which the steady state switches from one condition to another, in much the same way that quantum physicists cannot simultaneously determine certain pairs of a particle’s physical properties with arbitrary precision. In other words, we cannot forecast structural shifts.
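The idea of defining a probability distribution around the central case can be sketched with a simple normal fan. The central forecast and standard error below are illustrative assumptions (the BoE itself uses a two-piece normal to allow for skewed risks), but the principle of converting a point forecast plus past forecast errors into outcome probabilities is the same:

```python
# Sketch: turning a central-case forecast into a distribution of risks.
# Given a central GDP growth forecast and an assumed standard error
# (e.g. from past forecast errors), a normal approximation yields the
# probability of any outcome range. All figures are illustrative.
from statistics import NormalDist

def outcome_probability(central, stderr,
                        lower=float("-inf"), upper=float("inf")):
    """P(lower < outcome < upper) under a normal fan around the central case."""
    dist = NormalDist(mu=central, sigma=stderr)
    return dist.cdf(upper) - dist.cdf(lower)

central, stderr = 1.5, 1.0   # central forecast 1.5%, 1pp standard error

p_negative     = outcome_probability(central, stderr, upper=0.0)
p_near_central = outcome_probability(central, stderr, 1.0, 2.0)
print(f"P(growth < 0)      = {p_negative:.2f}")
print(f"P(1% < growth < 2%) = {p_near_central:.2f}")
```

Framed this way, a 2.2% outturn against a 2% central forecast is simply a draw well inside the fan, not a “wrong” forecast.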

But one of the things that economics can do is to figure out how behaviour will change once a structural shift has occurred. Forecasters may not have incorporated the crash of 2008 into their central case (I will expound on some of the reasons why on another occasion) but expectations adjusted quite quickly thereafter. It was treated as a structural break with profound consequences for near-term growth, and consensus GDP growth numbers were revised down sharply, as indeed were expectations for central bank policy rates. Seismologists may not always be able to predict when the earthquake will strike, but they know what the consequences will be when it does.

Sunday 8 January 2017

Something Fishy in the world of forecasting


The Bank of England’s chief economist, Andy Haldane, created something of a furore on Thursday evening when he admitted that “it’s a fair cop to say the profession is to some degree in crisis” following the EU referendum. Haldane suggested that this was a “Michael Fish moment[1]” for the economics community, whose models struggle to cope with irrationality. In a way this is heartening. Anyone who has read any of my posts over recent months will know that I am critical of much mainstream macroeconomic academic research, in which models are based on rational behaviour (here, for example). I am also highly wary of the output produced by economic models (including my own). What was less heartening was the way in which the press jumped all over the story and began to conflate some of the issues, generating a slew of below-the-line comments on many newspaper websites which demonstrated widespread ignorance of the issues at hand.

The article which most raised my hackles was by Simon Jenkins in The Guardian (here), who wrote that “the profession owes the public an inquest and an apology” and accused economics of grovelling “before its paymasters in government and commerce.” Predictably, the Pavlovian responses of many reader comments were along the lines of “economics is not a science” and “experts don’t know what they are talking about”, thus confirming that Michael Gove was right all along. One small snag: Jenkins’s article was confused and wrong. He conflates the problems in economics and the issue of Brexit as though they were one and the same thing. They’re not.

One issue Jenkins failed to mention was the extent to which many mainstream macroeconomists believe that their discipline has taken a wrong turning. He need only read the blogs of Paul Krugman to get a flavour of that. You certainly won’t find me defending the use of rational behaviour in economic models (for the record, all the models I have ever built use adaptive, rather than rational, expectations). His second mistake was to conflate economics as a discipline with the arcane practice of forecasting. I don’t know how many more times I have to explain to people that economics is not a predictive discipline. We can build models which capture the aggregate performance of the economy over the past, but their predictive ability depends on numerous exogenous factors which can throw the model-generated path off course.

It is true that the short-term forecasts in the wake of the referendum turned out to be wrong. But the forecasts were made against the backdrop of zero-to-partial information; a power vacuum at the heart of government; the expectation that Article 50 would be triggered sooner rather than later, as David Cameron had promised; and a hysterical media. These are the worst circumstances in which to make a forecast for the next two years. But as I have pointed out before, three of my four key pre-referendum predictions have been borne out: sterling collapsed; the BoE cut interest rates; and the stock market rallied after an initial dip. I was wrong on gilt yields, which fell rather than rose. And time will tell whether my view that post-Brexit growth will slow relative to what we experienced previously will also be borne out.

Jenkins was also wrong to conflate the political and economic aspects of Brexit. Brexit is purely a political issue, but it has potentially significant economic consequences. Anyone trying to come up with a forecast for the UK over the next five years or so has to take into account the fact that trading arrangements will be different, and they will almost certainly lead to a loss of economic welfare. However, it is worth making the point (again) that most analysis produced before the referendum reckoned that a vote to leave the EU would knock an average of around 0.3% off annual growth for a period of ten to fifteen years. We would likely feel it, but it would put the UK’s growth rate more into line with that of Germany since the turn of the century, rather than the outperformance of recent years. In other words, not a headline-grabbing disaster, but not a good outcome either.
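A back-of-the-envelope compounding exercise shows what that 0.3pp figure implies for the level of GDP. The 2.0% counterfactual growth rate below is an illustrative round number of my own, not taken from any of the pre-referendum studies:

```python
# Sketch of the level effect implied above: knocking roughly 0.3pp off
# annual growth compounds into a GDP level several percent below the
# counterfactual after 10-15 years. Growth rates are illustrative.

def gdp_level(growth_pct, years, base=100.0):
    """Compound a constant annual growth rate from a base of 100."""
    return base * (1 + growth_pct / 100) ** years

for years in (10, 15):
    baseline = gdp_level(2.0, years)   # counterfactual growth of 2.0% p.a.
    slower   = gdp_level(1.7, years)   # 0.3pp slower
    gap = 100 * (baseline - slower) / baseline
    print(f"After {years} years: GDP ~{gap:.1f}% below the counterfactual")
```

A gap of roughly 3-4.5% of GDP after ten to fifteen years is material, but spread over that horizon it is the slow grind described in the text rather than a one-off crash.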

Finally, it is worth repeating the point I have been making for some time (most recently last week, here) that point economic forecasts are more likely to be wrong than right, and that the probability distribution of risks around the central case is a more useful metric of accuracy. In a way, it was surprising that Haldane did not mention this, because the BoE has been a pioneer of probabilistic forecasting. And for all the criticism levelled at the BoE, its August forecast for Q3 GDP growth assigned a near-45% chance that it would exceed 2% – as indeed happened.

It can only be positive that economists hold their hands up and admit when they are wrong. But I am not sure that Haldane did the economics profession or the BoE many favours by giving the media a stick to beat us all with. Indeed, as Mark Harrison of Warwick University asked: “Is epidemiological science in crisis because public health officials did not predict the largest Ebola epidemic in history?”

[1] Michael Fish is a meteorologist who in 1987 famously predicted that rumours of an impending hurricane were unfounded (here). In the event, London was hit by its biggest storm since 1703 which caused major damage and disruption. Moreover, UK markets were effectively closed on the following day (Friday) at a time when Wall Street was tanking. When London markets reopened the following Monday, it prompted a far sharper London collapse, amplifying the negative global sentiment which contributed to the severity of Black Monday on 19 October 1987.