Wednesday, 24 April 2019

A retrospective on macro modelling

Anyone interested in the recent history of economics, and how it has developed over the years, could do worse than take a look at the work of Beatrice Cherrier (here). One of the papers I particularly enjoyed was a review of how the Fed-MIT-Penn (FMP) model came into being over the period 1964-74, in which she and her co-author, Roger Backhouse, explained the process of constructing one of the first large-scale macro models. It is fascinating to realise that, whilst macroeconomic modelling is a relatively easy task these days thanks to the revolution in computing, many of the solutions devised to the problems of 50-odd years ago were genuinely ground-breaking.

I must admit to a certain nostalgia when reading through the paper because I started my career working in the field of macro modelling and forecasting, and some of the people who broke new ground in the 1960s were still around when I was starting out in the 1980s. Moreover, the kinds of models we used were direct descendants of the Fed-MIT-Penn model. Although they have fallen out of favour in academic circles, structural models of this type are in my view still the best way of assessing whether the way we think the economy operates is congruent with the data. They provide a richness of detail that is often lacking in the models used for policy forecasting today and, in the words of Backhouse and Cherrier, such models were the “big science” projects of their day.

Robert Lucas and Thomas Sargent, both of whom went on to win the Nobel Prize for economics, began in the 1970s to chip away at the intellectual reputation of structural models based on Keynesian national income accounting identities for their “failure to derive behavioral relationships from any consistently posed dynamic optimization problems.” Such models, it was argued, contained no meaningful forward-looking expectations formation processes (true), which was said to account for their dismal failure to forecast the economic events of the 1970s and 1980s. In short, structural macro models were a messy compromise between theory and data, and their theoretical underpinnings were insufficiently rigorous for them to be considered useful representations of how the economy worked.

Whilst there is some truth in this criticism, Backhouse and Cherrier remind us that prior to the 1970s “there was no linear relationship running from economic theory to empirical models of specific economies: theory and application developed together.” Keynesian economics was the dominant paradigm, and such theory as there was appeared to be an attempt to build yet more rigour around Keynes’ work of the 1930s rather than take us in any new direction. Moreover, given the complexity of the economy and the fairly rudimentary data available at the time, the models could only ever be simplified versions of reality.

Another of Lucas’s big criticisms of structural models was the application of judgement to override the model’s output via the use of constant adjustments (or add factors). Whilst I accept that overriding the model output offends the purists, their objection presupposes that economic models outperform human judgement. No such model has yet been constructed. Moreover, the use of add factors reflects a particular way of thinking about modelling the data. If we think of a model as a simplified version of reality, it will never capture all the variability inherent in the data (I will concede the point when every equation we estimate has an R-bar squared close to unity). Therefore, the best we can hope for is that the error averages zero over history – it will never be zero at all times.

Imagine that we are in a situation where the last historical period in our dataset shows a residual for a particular equation which is a long way from zero. This raises the question of whether the projected residual in the first period of our forecast should be zero. There is, of course, no correct answer. It all boils down to the methodology employed by the forecaster – their judgement – and the trick to using add factors is to project them into the future in a way that minimises the distortions to the model-generated forecast.
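
To make the idea concrete, here is a minimal sketch of how an add factor might be carried into a forecast. It is purely illustrative – synthetic data, a single equation and a judgemental decay profile – and is not a description of how the FMP model or any production forecasting system actually worked.

```python
import numpy as np

# Illustrative only: a two-variable "consumption equation" with made-up data.
rng = np.random.default_rng(0)
income = np.linspace(100, 150, 40)                        # historical driver
consumption = 0.8 * income + 5 + rng.normal(0, 1.5, 40)   # synthetic history

# Fit the structural equation C = a + b*Y by least squares.
b, a = np.polyfit(income, consumption, 1)
residuals = consumption - (a + b * income)
last_resid = residuals[-1]                                # may be well away from zero

# Assumed income path for the forecast, then an add factor that starts at the
# last historical residual and decays geometrically back towards zero.
income_fcst = np.linspace(151, 160, 8)
decay = 0.5 ** np.arange(1, 9)                            # judgemental decay profile
add_factor = last_resid * decay
consumption_fcst = a + b * income_fcst + add_factor

print(f"last historical residual: {last_resid:.2f}")
print(np.round(consumption_fcst, 2))
```

The decay profile is where the forecaster’s judgement enters: a different view about how persistent the recent residual is would imply a different path for the add factor.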

But to quote Backhouse and Cherrier, “the practice Lucas condemned so harshly, became a major reason why businessmen and other clients would pay to access the forecasts provided by the FMP and other macroeconometric models … the hundreds of fudge factors added to large-scale models were precisely what clients were paying for when buying forecasts from these companies.” And just to rub it in, the economist Ray Fair “later noted that analyses of the Wharton and Office of Business Economics (OBE) models showed that ex-ante forecasts from model builders (with fudge or add factors) were more accurate than the ex-post forecasts of the models (with actual data).”

Looking back, many of the criticisms made by Lucas et al. seem unfair. Nonetheless, they had a huge impact on the way in which academic economists thought about the structure of the economy and how they went about modelling it. Many academic economists today complain about the tyranny of microfoundations, under which it is virtually impossible to get a paper published in a leading journal without grounding models of the economy in them. In addition, the rational expectations hypothesis has come to dominate the field of macro modelling, despite there being little evidence that this is how expectations are in fact formed.

As macro modelling has developed over the years, it has raised more questions than answers. One of the more pervasive problems is that, like the models they superseded, modern DSGE models have struggled to explain bubbles and crashes. In addition, their treatment of inflation leaves a lot to be desired (the degree of price stickiness assumed in New Keynesian models is not evident in the real world). Moreover, many of the approaches to modelling adopted in recent years do not allow for a sufficiently flexible trade-off between data consistency and theoretical adequacy. Whilst recognising that there are considerable limitations associated with structural models using the approach pioneered by the FMP, I continue to endorse the view of Ray Fair, who wrote in 1994 that the use of structural models represents “the best way of trying to learn how the macroeconomy works.”

Wednesday, 17 April 2019

Inflation beliefs

One of the biggest apparent puzzles in macroeconomic policy today is why inflation remains so low when the unemployment rate is at multi-decade lows. The evidence clearly suggests that the trade-off between inflation and unemployment is far weaker today than it used to be or, as the economics profession would have it, the Phillips curve is flatter than it once was (here). But as the academic economist Roger Farmer has pointed out, the puzzle arises “from the fact that [central banks] are looking at data through the lens of the New Keynesian (NK) model in which the connection between the unemployment rate and the inflation rate is driven by the Phillips curve.” But what if there were better ways to characterise the inflation generation process?

Originally, the simple interpretation of the Phillips curve suggested that policymakers could use this trade-off as a tool of demand management – targeting lower (higher) unemployment meant tolerating higher (lower) inflation. However, much of the literature that emerged in the late-1960s and early-1970s suggested that demand management policies were unable to affect unemployment in the long run and that it was thus not possible to control the economy in this way. The reason is straightforward – efforts by governments (or central banks) to stimulate the economy might lead in the short run to higher inflation, but repeated attempts to pull the same trick would prompt workers to push for higher wages, which in turn would choke off labour demand and raise the unemployment rate. In summary, government attempts to drive the unemployment rate lower would fail as workers’ inflation expectations adjusted. The consequence is that there is no such trade-off in the long run, which implies the Phillips curve is vertical in the longer term (see chart).
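
The mechanism can be illustrated with a toy accelerationist Phillips curve of the Friedman-Phelps variety. This is a deliberately crude sketch with made-up parameters, not a model of any actual economy: if policy tries to hold unemployment below the natural rate while expectations simply adapt to last period’s inflation, inflation ratchets up period after period, and only at the natural rate is inflation stable – which is what a vertical long-run curve means.

```python
# Toy accelerationist Phillips curve: pi_t = pi_e_t - a*(u_t - u_star),
# with adaptive expectations pi_e_t = pi_{t-1}. Parameter values are illustrative.
a, u_star = 0.5, 5.0   # slope and "natural" unemployment rate (assumed)
pi_prev = 2.0          # starting inflation rate

# Policy tries to hold unemployment one point below the natural rate.
u_target = 4.0
path = []
for t in range(8):
    pi = pi_prev - a * (u_target - u_star)   # expectations = last period's inflation
    path.append(round(pi, 1))
    pi_prev = pi

print(path)   # inflation ratchets up each period: 2.5, 3.0, 3.5, ...
```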

Another standard assumption of NK models, which are heavily used by central banks, is that inflation expectations are formed by a rational expectations process. This implies some very strict assumptions about the information available to individuals and their ability to process it. For example, they are assumed to know in detail how the economy works, which in modelling terms means they know the correct structural form of the model and the values of all its parameters. Furthermore, they are assumed to know the distribution of shocks impacting on the economic environment. Whilst this makes the models analytically tractable, it does not accord with the way in which people think about the real world.

But some subtle changes to the standard model can result in significant differences in outcomes, which we can illustrate with reference to some interesting recent work by Roger Farmer. In a standard NK model the crucial relationship is that inflation is a function of expectations and the output gap, which produces the expected result that the long-run Phillips curve is indeed vertical. But Farmer postulates a model in which the standard Phillips curve is replaced by a ‘belief’ function in which expected nominal output depends only on what happened in the previous period (a martingale process). Without going through the full details (interested readers are referred to the paper), the structure of this model implies that policies which affect aggregate demand do indeed have permanent long-run effects on the output gap and the unemployment rate, in contrast to the standard NK model. Moreover, Farmer’s empirical analysis suggests that models using belief functions fit the data better than the standard model.
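
The contrast between the two worlds can be caricatured in a few lines of code. This is emphatically not Farmer’s model – just a stripped-down illustration of the difference between a mean-reverting process (the NK world, which always pulls back towards its steady state) and a martingale (the belief-function world, in which shocks are never unwound and the long-run outcome depends on the path taken).

```python
import numpy as np

# The same demand shocks fed through (i) a mean-reverting NK-style process and
# (ii) a belief-driven process in which this period's expectation is simply last
# period's outcome, so shocks are never unwound. Parameters are illustrative.
rng = np.random.default_rng(1)
shocks = rng.normal(0, 1, 200)
shocks[50] += 5.0                              # a large one-off demand shock

nk, belief = np.zeros(200), np.zeros(200)
for t in range(1, 200):
    nk[t] = 0.8 * nk[t - 1] + shocks[t]        # converges back towards zero
    belief[t] = belief[t - 1] + shocks[t]      # permanent effects, path-dependent

print(f"NK-style gap at end:      {nk[-1]:.2f}")      # hovers around zero
print(f"Belief-driven gap at end: {belief[-1]:.2f}")  # wanders; depends on history
```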

The more we think about it, the more this structure makes sense. As an expectations formation process, it is reasonable to assume that what happened in the recent past is a good indicator of what will happen in the near future (hence a ‘belief’ function). Moreover, since the target of interest (nominal GDP) comprises real GDP and prices, consumers are initially unable to distinguish between real and nominal effects, even though the shocks which affect them may have very different causes. In an extreme case where inflation slows (accelerates) but is exactly offset by a pickup (slowdown) in real growth, consumers do not adjust their expectations at all. In the real world, where people are often unable to distinguish between real and price effects in the short term (money illusion), this appears intuitively reasonable.

All this might seem rather arcane but the object of the exercise is to demonstrate that there is only a “puzzle” regarding unemployment and inflation if we accept the idea of a Phillips curve. One of the characteristics of the NK model is that it will converge to a steady state, no matter from where it starts. Thus lower unemployment will lead to a short-term pickup in wage inflation. Farmer’s model does not converge in this way – the final solution depends very much on the starting conditions. As Farmer put it, “beliefs select the equilibrium that prevails in the long-run” – it is not a predetermined economic condition. What this implies is that central bankers may be wasting their time waiting for the economy to generate a pickup in inflation. It will only happen if consumers believe it will – and for the moment at least, they show no signs of wanting to drive inflation higher.

Saturday, 13 April 2019

The cost of taking back control


Although the prospect of a no-deal Brexit has been postponed for now, companies incurred significant costs in preparing for an outcome that never materialised. Perhaps these preparations will eventually pay off in the event that a “hard” Brexit does occur, but companies will be hoping that they never have to find out. The cost of their “prepare for the worst and hope for the best” strategy will simply have to be written off as an unanticipated business expense.

If we think of taxation as an extra expense levied on business activity, we can treat the costs of Brexit preparation as an uncertainty tax. The requirement to prepare for Brexit came about as the result of a decision taken by government, in much the same way as it decides how to levy taxes – the only difference being that the state did not see any of the revenues (and before anyone reminds me that the referendum result was driven by the view of the electorate, the decision to enact it was taken by government). Analysis conducted by Bloomberg, which looked at the costs incurred by six large companies, indicates that they spent a total of £348 million on Brexit-proofing their businesses (chart). For the two large banks in the sample, which accounted for over 80% of these outlays, the preparations cost 0.5% of revenue. It is not a huge amount in the grand scheme of things, but it represents a transfer of resources away from productive activity to something with no apparent end-use. Furthermore, the amount spent by HSBC and RBS was equivalent to the annual salaries of 4,000 banking staff.
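
A quick back-of-the-envelope check, using only the figures quoted above, shows what those numbers jointly imply – the derived values below are implications of the post’s own arithmetic, not reported data:

```python
# Back-of-envelope check using only the figures quoted in the paragraph above.
total_spend = 348e6                 # six companies' reported Brexit preparation costs
bank_share = 0.80                   # "over 80%" attributed to the two banks (assumption)
bank_spend = total_spend * bank_share

implied_bank_revenue = bank_spend / 0.005    # if that spend is 0.5% of revenue
implied_salary = bank_spend / 4_000          # "equivalent to 4,000 banking staff"

print(f"banks' spend:             £{bank_spend/1e6:.0f}m")
print(f"implied combined revenue: £{implied_bank_revenue/1e9:.0f}bn")
print(f"implied average salary:   £{implied_salary:,.0f}")
```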

What is particularly troubling is that these are costs that could have been avoided, or at the very least minimised. The government’s inability to give clear guidance as to what Brexit entailed meant that companies had to figure out the necessary steps for themselves in order to ensure business continuity. Whilst all companies have to prepare for contingencies, the government’s ill-advised decision to leave the EU single market and customs union imposed a cost on business that was wholly avoidable. Ironically, the 2017 Conservative manifesto contained the following pledges: “we need government to make Britain the best place in the world to set up and run modern businesses, bringing the jobs of the future to our country; but we also need government to create the right regulatory frameworks ... We will set rules for businesses that inspire the confidence of workers and investors alike.”

The government has also committed £1.5bn of public money to no-deal planning, and the 6,000 civil servants who had been working on it have now been stood down. In the bigger picture £1.5bn is peanuts, but back-of-the-envelope calculations suggest that it is equivalent to the annual cost of employing 30,000 nurses (around 10% of the total currently employed) and about the same number of police officers (around 25% of the current total). Or to put it another way, the UK could have guaranteed the funding of 6,000 police officers for five years at a time when there is concern that police numbers are too low.
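
For what it is worth, the arithmetic behind those comparisons is straightforward – the per-head costs below are simply what the post’s figures imply, not official workforce cost estimates:

```python
# Rough arithmetic behind the comparisons above, using the post's own figures.
no_deal_budget = 1.5e9

nurses = 30_000
cost_per_nurse = no_deal_budget / nurses                 # ~£50k per nurse per year

officers, years = 6_000, 5
cost_per_officer = no_deal_budget / (officers * years)   # ~£50k per officer per year

print(f"implied annual cost per nurse:   £{cost_per_nurse:,.0f}")
print(f"implied annual cost per officer: £{cost_per_officer:,.0f}")
```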

It is thus evident that Brexit is not just some arcane parliamentary debate – even the prospect of it entails real resource costs, as funds have to be diverted from areas where they would surely provide a higher social benefit. Fiscal trade-offs always involve an element of guns or butter, but it is hard to disagree with the claim by Labour MP Hilary Benn, chair of the parliamentary Brexit committee, that this was a “costly price” to pay for the prime minister’s insistence on keeping no-deal on the table.

And we are not out of the woods yet. The fact that the can has merely been kicked down the road may have avoided the cliff-edge, but it does nothing to improve companies’ certainty with regard to the future. Business fixed investment volumes have declined for the last four quarters and are still only slightly higher than prior to the recession of 2008-09. Output growth thus appears to have been driven by an increase in labour input rather than capital, and whilst this has driven the unemployment rate to its lowest since the mid-1970s, the lack of investment is one of the reasons behind the UK’s poor productivity performance. To the extent that productivity is one of the key drivers of living standards, this sustained weakness in investment acts as a warning signal that Brexit-related uncertainty continues to have wider economic ramifications.

The day after Nigel Farage launched his new political party, which aims to stand in the European elections on an anti-EU platform (the irony), I still find myself wondering as an economist what it is he hopes to achieve. As Chancellor Philip Hammond put it in 2016, “people did not vote on June 23rd to become poorer or less secure.” Yet the economics indicates that is exactly what Farage’s policies will entail. But then it’s never been about the economics – it’s all about taking back control. As was so spectacularly demonstrated in Brussels on Wednesday evening, when the EU27 took control of the Brexit process.

Thursday, 11 April 2019

Still all to play for

We have become used to late nights in Brussels over the years. Between 2010 and 2015 they were a permanent feature of the calendar as EU leaders wrestled with the problems posed by the Greek debt crisis. In recent months we have had to adjust our sleeping patterns once more to accommodate the unfolding drama of the Brexit crisis, with the UK forced to throw itself on the mercy of EU27 leaders to avert the prospect of departing the EU without a deal. As an exercise in irony, the position in which the UK government now finds itself is hard to beat. Remember how Brexit was all about “taking back control”? Remember how “they need us more than we need them”? And whatever happened to “Brexit means Brexit”? Or “no deal is better than a bad deal”?

For the last six years I have been pointing out that the arguments put forward by Brexit proponents are inherently contradictory. As we have got closer to the departure date the risks to the economy resulting from a no-deal Brexit have become ever more clear, and reasonable politicians realise that they should not be prepared to take risks with the well-being of those who they represent. It has taken a long time for many senior politicians to see the light – too long in my view – but at least they swerved back onto the road at the last minute rather than continue to drive headlong over the cliff. The charlatans in the House of Commons (and beyond) who claim that Brexit is a cause worth crashing the economy for should be called out for what they are: Liars and fantasists who clearly do not understand that in an interconnected world, everything depends on everything else. 

Political and economic independence is a myth – and always has been. If we use trade openness as a proxy for the degree of interconnectedness with the rest of the world, and measure it as the share of exports plus imports relative to GDP, the UK ranks 52nd on a list of 174 countries (just behind France) with a share of 62.5% – higher than the G7 average of 56.2%. The most closed economy, and therefore the one with the least to lose if trade with the rest of the world were to be disrupted, is Sudan with a share of 21.5%. It’s not exactly something to aspire to. The US and China score 26.6% and 37.8% respectively. Something else that struck me this week when looking at the latest set of trade data is that the share of UK merchandise exports going to the EU has risen steadily over the past three years. After bottoming out at 46.6% in 2015 it has since started rising again, reaching 49.1% last year (you can retrieve the data here). What was that about they need us … ?
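
For clarity, the openness measure used here is simply exports plus imports as a share of GDP. The figures in the snippet below are placeholder values chosen to land in the region of the UK share quoted above, not actual ONS data:

```python
# Trade openness as used above: (exports + imports) / GDP.
def openness(exports, imports, gdp):
    return (exports + imports) / gdp

# Placeholder figures (same currency units), not actual UK data.
print(f"{openness(450, 500, 1_520):.1%}")   # -> 62.5%, in the region quoted above
```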

Looking forward, the UK now has a six month grace period to figure out what it wants to do. The lunatic fringe in the Conservative Party will doubtless continue to rant and rave that any extension is a betrayal of democracy and that the will of the people is being disrespected. But they have nobody to blame but themselves for passing up the opportunity to leave the EU. Theresa May’s government gave them a Withdrawal Agreement, which was admittedly far from perfect but it was always going to be the best deal they would ever get. And they blew it – in much the same way as they blew their opportunity to rid themselves of Theresa May as party leader last December. The time to have a confidence vote was after the Withdrawal Agreement was heavily rejected in January, not before the vote took place. The predictable snarling from Tory backbenches about how they plan to unseat the prime minister reminds me of the kids at school who sat at the back of the classroom and talked tough but ran away at the first sign of trouble. They are, quite simply, tactically inept – and we pay them to run the country (though “run” is just an “i” away from “ruin”).

As it happens, I suspect that the EU also made a blunder yesterday by only offering the UK a six-month extension rather than a year. EU Council President Tusk’s suggestion of a one-year “flextension” was backed by most European leaders other than French President Macron. Whilst I have sympathy with Macron’s view that the UK could cause more trouble if it remains in the EU and acts as an obstacle to further EU progress, there is a strong sense that he was playing to a domestic audience (though they were more likely to be demonstrating against him than listening to him). The French twice blocked the UK’s entry into what was then the EEC in the 1960s, partly because the French government of the day did not believe the UK shared the ideals of the six founder members. What better way to stick two fingers up at Perfidious Albion than by demonstrating that de Gaulle was right? But it may have been a tactical error.

The polling evidence clearly indicates that those who believe it was a mistake for the UK to vote to leave the EU now hold a significant lead over those who believe it was the right choice. Moreover, the Poll of Polls collated by What UK Thinks, which surveys how people would vote if they were presented with the choice in a second referendum, indicates that Remain holds a substantial lead over Leave (chart). Although we have learned to be wary of the predictive power of the polls, it is notable that the 8-point lead in favour of Remain is wider than at any time prior to the 2016 referendum. Simply put, the tide appears to be running against Brexit and I suspect that the shambolic nature of the negotiations and the dysfunctional state of UK politics have played a role in shifting people’s views. If the EU is serious about trying to persuade the UK to remain, a longer extension, allowing time to see whether this trend runs further or indeed reverses, would have been a more sensible option.

Whilst much remains to be decided, both regarding Brexit and in the wider political sphere, one thing is certain: There will be more uncertainty.

Saturday, 6 April 2019

Happy tax year


Today is the start of the new tax year in the UK and, to celebrate, the Institute for Fiscal Studies recently published a nice little piece outlining the impact of the inflation indexation of tax thresholds (here). As the IFS points out, the UK routinely uprates the cash value of tax thresholds in line with inflation, unlike many other countries. This practice dates back to 1977, when two backbench MPs forced the government to uprate thresholds automatically in what became known as the Rooker-Wise Amendment. This makes a lot of sense: Without such indexation, as wages rise in line with inflation, ever more people at the lower end of the income scale would be dragged into higher tax brackets.
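
The mechanics of this fiscal drag are easy to demonstrate with a stylised example. The figures below are invented and use a single flat rate purely for illustration: if pay and prices both rise by 20% but the tax-free threshold is frozen, the average tax rate rises even though real pre-tax pay is unchanged.

```python
# Stylised fiscal drag: a single 20% rate above a tax-free threshold (made-up figures).
def average_rate(gross, threshold, rate=0.20):
    return rate * max(0, gross - threshold) / gross

print(f"before:  {average_rate(30_000, 12_500):.1%}")   # ~11.7%
print(f"frozen:  {average_rate(36_000, 12_500):.1%}")   # ~13.1% with a frozen threshold
print(f"indexed: {average_rate(36_000, 15_000):.1%}")   # ~11.7% if the threshold is uprated too
```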

But the IFS notes that across many tax categories this automatic uprating has not taken place for a number of years. Two of the most interesting examples are fuel duties, which have not increased in cash terms since April 2010, and working-age benefits, which have been frozen since 2015-16. At a time when governments around the world are trying to curb vehicle emissions it does seem rather odd that the UK is not seizing the opportunity to take the moral high ground by raising emissions taxes. Consumer prices have risen by 16% since the start of 2011, and uprating fuel duty in line with them would add around 10p to a litre of diesel (7.5%). Obviously, drivers will not complain that the real value of fuel taxes has declined over the last nine years, but it does seem at odds with the government’s self-professed green credentials. We can view this in one of two ways: It is either an overtly political move to curry favour with motorists or an attempt to reduce the regressive effect of such taxes.
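
As a rough sanity check of the 10p figure: I am assuming here that the main rate of fuel duty has been frozen at around 57.95p per litre and that a litre of diesel costs roughly £1.30 at the pump – both should be treated as assumptions rather than official figures.

```python
# Rough check of the "around 10p" claim (all inputs are assumptions).
duty = 57.95            # pence per litre, assumed frozen rate
inflation = 0.16        # cumulative CPI rise quoted above
pump_price = 130.0      # pence per litre (assumed)

uplift = duty * inflation
print(f"indexation would add ~{uplift:.1f}p per litre "
      f"(~{uplift / pump_price:.1%} of the pump price)")
```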

Being charitable, we should perhaps assume the latter: an attempt to offset the freeze in the cash value of working-age benefits, which obviously implies a decline in their real value. But as the IFS put it, “the government might believe that benefits should be more or less generous, but the extent of any change in generosity should be thought through and justified, not the arbitrary and accidental result of what the rate of inflation turns out to be.”

The IFS has also identified a growing number of instances where thresholds are maintained in cash terms and only uprated when the government believes it to be necessary. Most of these really only affect those at the upper end of the income scale and, whilst the vast majority of taxpayers will not shed any tears for the better-paid members of society, they do illustrate how arbitrary manipulation of tax thresholds can result in some strange outcomes which may affect work incentives.

One such example is the £100k threshold at which the personal tax allowance is progressively withdrawn. Anyone earning below this amount is entitled to a £12.5k tax-free allowance, but for every £2 of income above £100k the tax-free amount is reduced by £1. One result of this is that those earning between £100k and £125k are taxed at a marginal rate of 60% (i.e. they keep just 40p out of every £1 they earn, thanks to the allowance taper). Yet more bizarre is that those earning between £125k and £150k are once again subject to a lower marginal tax rate of 40% (above £150k the marginal income tax rate rises to 45%). “Oh dear, how sad, never mind” you may say. But there are now 968,000 taxpayers earning more than £100k – an increase of more than half since 2007-08 – who have been dragged into the net due to the failure to index the threshold.
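
A simple calculation makes the taper’s effect clear. The sketch below uses what I believe to be the 2019-20 income tax parameters (a £12,500 personal allowance tapered above £100k, a £37,500 basic-rate band, higher rate up to £150,000 and 45% thereafter, ignoring National Insurance); treat the exact thresholds as assumptions.

```python
def income_tax(gross):
    """UK income tax only (no NIC), rough 2019-20 rules as I understand them."""
    allowance = 12_500.0
    # Allowance tapered away by £1 for every £2 of income above £100k.
    if gross > 100_000:
        allowance = max(0.0, allowance - (gross - 100_000) / 2)
    taxable = max(0.0, gross - allowance)
    tax = min(taxable, 37_500) * 0.20                        # basic rate
    tax += max(0.0, min(taxable, 150_000) - 37_500) * 0.40   # higher rate
    tax += max(0.0, taxable - 150_000) * 0.45                # additional rate
    return tax

def marginal_rate(gross, step=100.0):
    return (income_tax(gross + step) - income_tax(gross)) / step

for g in (80_000, 110_000, 130_000, 160_000):
    print(f"£{g:,}: marginal rate ~ {marginal_rate(g):.0%}")   # 40%, 60%, 40%, 45%
```
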
That said, higher taxes on the better off in society have been used to fund an extension of the zero-tax threshold for the less well paid. In fiscal year 2007-08 (the last year before the financial crisis) those earning £10,000 paid an average tax rate of 13.1% (income tax plus National Insurance Contributions). The government’s stated policy of taking the lower paid out of the tax system altogether means that someone earning this amount today pays an average tax rate of only 1.6%. As the chart suggests, the average tax schedule has clearly shifted to the right compared to 2007-08, illustrating the reduction in tax liability of the less well paid. The curve is also lower than it was in 2007-08 for incomes below £100k, so most people pay less tax, but it rises sharply thereafter, and those earning £120k are paying more.
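
The 1.6% figure can be reproduced in a couple of lines, assuming 2019-20 parameters of a £12,500 personal allowance and employee National Insurance at 12% above a primary threshold of roughly £8,632 – again, treat these as my assumptions rather than figures taken from the IFS piece.

```python
# Average tax rate for a £10,000 earner under assumed 2019-20 parameters.
gross = 10_000
income_tax = 0.20 * max(0, gross - 12_500)   # below the personal allowance -> zero
nic = 0.12 * max(0, gross - 8_632)           # employee NIC above the primary threshold
print(f"average tax rate: {(income_tax + nic) / gross:.1%}")   # ~1.6%
```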

But the true marginal tax rate is not merely made up of the amount levied on income – we have to account for the withdrawal of social welfare entitlements, and those lower down the income scale are in the line of fire. For example, those earning more than £50k per year have to pay back some of their Child Benefit in the form of extra income tax. If they earn between £50k and £60k and have one child, they face a marginal income tax rate of 51%, but if they have three children their marginal income tax rate is 65%. Again, this threshold has remained unchanged for some years, which means an increasing number of people are likely to run into this problem.
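
To see where the 51% and 65% figures come from, here is a rough sketch of the Child Benefit clawback. I am assuming 2019-20 benefit rates of £20.70 per week for the first child and £13.70 for each additional child, withdrawn at 1% for every full £100 of income over £50k, on top of 40% income tax in this band; the exact rates should be treated as assumptions.

```python
def child_benefit(n_children):
    # Assumed 2019-20 weekly rates: £20.70 first child, £13.70 each additional child.
    return 52 * (20.70 + 13.70 * max(0, n_children - 1))

def charge(gross, n_children):
    # Clawback of 1% of the benefit per full £100 of income over £50k (assumed rule).
    pct = min(100, max(0, (gross - 50_000) // 100))
    return child_benefit(n_children) * pct / 100

def marginal_rate(gross, n_children, step=1_000):
    # 40% income tax in this band plus the extra child benefit clawback.
    return 0.40 + (charge(gross + step, n_children) - charge(gross, n_children)) / step

for kids in (1, 3):
    print(f"{kids} child(ren): marginal rate ~ {marginal_rate(55_000, kids):.0%}")   # ~51%, ~65%
```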

However, one of the more bizarre quirks is the reduction in the pension annual allowance for high-income individuals. The first £40k of pension contributions is free from income tax, but those whose income (excluding pension contributions) exceeds £110k face a tapering of their tax-free pension allowance. Without going through the details (they are outlined in the IFS paper), the upshot is that someone earning £150k (including pension contributions) can put £40k into their pension without incurring extra taxation, but someone whose total income (including pension contributions) is £210k finds that their tax-free allowance is reduced to only £10k. More bizarre still is that the likes of senior doctors, whose generous defined benefit pension schemes result in contributions exceeding £40k, can end up facing marginal tax rates of more than 100%. I was recently made aware of instances where such doctors are not prepared to work beyond a certain point because they pay more tax on the extra work than they earn from it (obviously, they could offer their time for free, but they could equally well enjoy an evening at home if they are not getting paid for it).
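
For completeness, the taper described above can be written down in a few lines. The rules here – a £40k allowance, reduced by £1 for every £2 of income (including contributions) above £150k once income excluding contributions exceeds £110k, with a £10k floor – are my reading of the regime summarised in the IFS paper, so treat the thresholds as assumptions.

```python
# Rough sketch of the pension annual allowance taper (thresholds are assumptions).
def annual_allowance(income_excl_contributions, income_incl_contributions):
    allowance = 40_000
    if income_excl_contributions > 110_000 and income_incl_contributions > 150_000:
        allowance = max(10_000, allowance - (income_incl_contributions - 150_000) // 2)
    return allowance

# £150k including a £40k contribution -> £110k excluding it, so no taper applies.
print(annual_allowance(110_000, 150_000))   # 40000
# £210k including contributions -> allowance tapered all the way to the floor.
print(annual_allowance(170_000, 210_000))   # 10000
```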

Although some of the examples highlighted here are extreme cases, they illustrate how an ill-designed tax system can result in high marginal tax rates that act as a disincentive to work. And the more people are dragged into the tax net due to the absence of indexation, the greater is this effect. A good tax system should meet five basic conditions: fairness, adequacy, simplicity, transparency, and administrative ease. A lack of inflation indexation undermines all of them.