
Monday 15 January 2018

Making sense of macroeconomics

The reputation of macroeconomics took a battering in the wake of the global financial crisis after the discipline failed to predict the Great Recession. Although much of the criticism by outsiders is misplaced, it contains grains of truth, and many academic economists would agree that there are areas where economics needs to improve.

This collection of papers from the Oxford Review of Economic Policy looks at the state of macroeconomics today and provides a range of opinions from leading macroeconomists. More importantly, it shines a spotlight on those areas where economics can be seen to have failed and offers some suggestions on how to move forward. The papers are not particularly technical and as such are relatively accessible; credit should also go to the publisher, Oxford University Press, for taking this volume out from behind the paywall.

David Vines and Samuel Wills make the point that macroeconomics has been here before – in the early 1930s and again in the 1970s – and both times the discipline evolved to try to make sense of changed circumstances. But in order to identify what has to change, we need to know where we are and what is wrong. At the centre of the debate stand New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models, which form the workhorse model for policy analysis.

The general consensus is that they are not fit for purpose – a point I have made before (here and here). Such models are built on microfounded representative agents – a theoretical approach which postulates a typical household or firm whose behaviour is representative of the economy as a whole. I have always rather struggled with this approach because it assumes that all agents respond in the same way – something we know is not true of households, whose time preferences differ according to age and educational attainment. An additional assumption underpinning such models is that expectations are formed rationally – something we know is not always true.

Thus the consensus appears to be that these two assumptions need to be relaxed if macroeconomics is going to be more relevant for future policy work. You might say that it is about time. Indeed it is a sad indictment that it took the failure of DSGE models during the financial crisis to convince proponents that their models were flawed when it was so obvious to many people all along.

In order to understand this failure, Simon Wren-Lewis offers an explanation as to why this form of thinking became so predominant to the exclusion of other types of model. He argues that the adoption of rational expectations was “a natural extension of the idea of rationality that was ubiquitous in microeconomic theory” and that “a new generation of economists found the idea of microfounding macroeconomics very attractive. As macroeconomists, they would prefer not to be seen by their microeconomic colleagues as pursuing a discipline with seemingly little connection to their own. Whatever the precise reasons, microfounded macroeconomic models became the norm in the better academic journals.” Indeed, Wren-Lewis has long argued that since academics could only get their work published in top journals if they went down this route, this promoted an “academic capture” process which led to the propagation of a flawed methodology.

Wren-Lewis also makes the point that much of so-called cutting edge analysis is no longer constrained to be as consistent with the data as was once the case. He notes that in the 1970s, when he began working on macro models “consistency with the data was the key criteria for equations to be admissible as part of a model. If the model didn’t match past data, we had no business using it to give policy advice.” There is, of course, a well-recognised trade-off between data coherency and theoretical consistency, and I have always believed that the trick is to find the optimal point between the two in the form of a structural economic model. It does not mean that the models I use are particularly great – they certainly would not make it into the academic journals – but they do allow me to provide a simplified theoretical justification for the structure of the model, in the knowledge that it is reasonably consistent with the data.

Ultimately, one of the questions macroeconomists have to answer more clearly – particularly to outsiders – is: what are we trying to achieve? Although much of the external criticism zooms in on the failure of economists to forecast the future, what we are really trying to do is better understand how the economy works and how it might be expected to respond to shocks (such as the financial crisis). Olivier Blanchard believes that “we need different types of macroeconomic models for different purposes”, which allows a continued role for structural models, particularly for forecasting purposes. Whilst I agree with this, I have still not shaken off the conviction, best expressed by Ray Fair back in 1994 (here, p28), that the structural model approach “is the best way of trying to learn how the macroeconomy works.” Structural models are far from perfect, but in my experience they are the least worst option at our disposal.

Sunday 24 September 2017

Non-linearity in economics

Gertjan Vlieghe is characterised as one of the more dovish members of the Bank of England MPC, so his recent speech, in which he suggested that “the evolution of the data is increasingly suggesting that we are approaching the moment when Bank Rate may need to rise”, was indeed noteworthy. It is thus a pity that the rest of the speech was overlooked, for it was a fine exposition of the factors driving real interest rates. But it was his dismissal of the “fairly deeply rooted, but wrong, notion in modern macroeconomics, namely that real interest rates are primarily driven by the growth rate of the economy” that really got me thinking.

Vlieghe pointed out that “the idea persists, because of commonly adopted – but misleading – practices in solving macro models.” Modern macroeconomics is based on highly non-linear models, but in order to make them more tractable to solve we use logarithmic transformations to linearise them. Vlieghe uses the example of how linearising the standard model of discounting future consumer utility results in “a tight relationship between the real interest rates and growth, and nothing else ... it kills off, mechanically, anything that might have been interesting about risk … In the linearised world, there is no risk-free real interest rate.” For those interested in the detail, the relevant part of Vlieghe’s speech is reproduced in a footnote[1].
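To make the mechanics concrete, here is a minimal sketch in my own notation (standard CRRA utility with lognormal consumption growth – an illustration of the point, not Vlieghe’s exact derivation). The consumer’s Euler equation for the risk-free rate is

$$1 = \mathbb{E}_t\left[e^{-\rho}\left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}e^{r_t}\right] \quad\Longrightarrow\quad r_t = \rho + \gamma\, g_t - \tfrac{1}{2}\gamma^2\sigma_c^2,$$

where $\rho$ is the discount rate, $\gamma$ the coefficient of relative risk aversion, $g_t$ expected consumption growth and $\sigma_c^2$ the variance of consumption growth. A first-order (linear) approximation discards the final variance term, leaving $r_t = \rho + \gamma g_t$: the real rate is tied to growth and nothing else, and the precautionary term that carries all of the risk effects is killed off mechanically.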

As it happens, this is perhaps an overly-rigorous theoretical interpretation of the relationship between growth and interest rates. Admittedly, the fact that there is a strong correlation between (real) short-term interest rates and (real) GDP growth does not necessarily imply a causal relationship. But the work of the late-nineteenth century economist Knut Wicksell postulated that there is a ‘natural’ interest rate which is determined by the real disturbances affecting the economy. To the extent that these disturbances are manifest in output growth and inflation, it is clear that nominal GDP growth and interest rates ought to be closely related. It is not a 1:1 relationship, but over the long-run the rate of return on real assets ought to be similar to that on financial assets in order to satisfy equilibrium conditions. Not for nothing have many economists argued that there is a strong case for using nominal GDP growth as an anchor for monetary policy.

That said, Vlieghe’s point on how linear approximations can produce specious outcomes was well made. All students of econometrics spend a lot of time learning the properties of linear regression models, which partly explains why grubby practitioners like me are comfortable applying linear transformations so that linear estimation techniques can be applied easily. Another reason for preferring linearity in econometrics is that non-linear solutions can be indeterminate: we do not know whether they are the universally right answer or apply only under certain conditions. Many of the common solution techniques rely on grid searches conducted over a range of values, so – in the jargon – we may have found only a local optimum within the range in which the search is conducted, while the “true” answer lies outside it. Our models may thus be biased – in other words, deliver the wrong answers under certain conditions – and as a result many economists stick to what they know in the form of linearity.
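To illustrate (a toy objective function of my own devising, not an econometric model): a nonlinear function with two local minima will return different “answers” depending on where the search is conducted.

```python
# Toy illustration: a nonlinear objective with two local minima
# (near x = -1.5 and x = 1.35; the left one is the global minimum).
# A search confined to one range reports only the optimum it finds there.
from scipy.optimize import minimize_scalar

def objective(x):
    return x**4 - 4 * x**2 + x

left = minimize_scalar(objective, bounds=(-3, 0), method="bounded")
right = minimize_scalar(objective, bounds=(0, 3), method="bounded")

# Two different "solutions" from two search ranges; only by comparing
# them do we learn that one is merely local.
print(left.x, objective(left.x))    # ~ -1.47, the global minimum
print(right.x, objective(right.x))  # ~  1.35, a local minimum only
```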

But this bias towards linearity, tempting though it is, can be applied in situations where it is not appropriate. The authors of a paper in experimental psychology[2] assessed the accuracy of long-term growth estimates by panels of “experts” and laypeople. Whilst both groups tended to underestimate growth at rates above 1%, the degree of underestimation was greater for the “experts” because they ignored exponential effects more often than the laypeople. Another paper, by De Bock et al (2013), is interesting because it provides a literature review of the reliance on linearity and reports the findings of an experiment in which business economics students were confronted with correct and incorrect statements about linearity in economic situations. The authors concluded that many of the students over-relied on linearity in their analysis.
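A simple worked example (my own numbers, not from either paper) shows how large the error can be: read linearly, “3% growth for 50 years” suggests incomes rise by a factor of 2.5; compounded, the true factor is about 4.4.

```python
# Made-up worked example: linear intuition understates compound growth.
rate, years = 0.03, 50
linear = 1 + rate * years        # linear extrapolation: 2.5x
compound = (1 + rate) ** years   # true compound outcome: ~4.4x
print(f"linear: {linear:.1f}x, compound: {compound:.1f}x")
```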

An interesting paper by Doyne Farmer (here) looks at various aspects of nonlinearity and complexity in economics. He makes the point that DSGE models are too highly stylised to say anything useful about the behaviour of economies in the real world. Instead, economics might take lessons from fields such as meteorology, which builds very complex, data-based nonlinear models. This has been enabled by the significant increase in computing power, which allows simulation analysis to be conducted much more cheaply and effectively than in the past.

For policymakers, the fear is that linear approximations in a nonlinear world lead to distorted policy conclusions. One problem is that economics tends to focus on equilibrium solutions. But as noted above, there may not be a single equilibrium. Indeed, the impact of the financial crash of 2008 was an object lesson in how nonlinear feedbacks can produce outcomes far beyond our expectations.

The mathematician Stan Ulam used to give lectures on nonlinear mathematics and apologise that the title was a misnomer, for all interesting maths involves nonlinearity. As Farmer put it, “just as almost all mathematics is nonlinear, almost all economic phenomena are complex … A more tractable topic would be whether there are any problems it does not illuminate, or should not illuminate.”



[2] Christandl, F. and D. Fetchenhauer (2009), ‘How laypeople and experts misperceive the effect of economic growth’, Journal of Economic Psychology, 30, pp. 381–92.

Saturday 15 July 2017

Mr Phillips is resting

Markets are increasingly concerned that central bankers may be about to take away the punch bowl rather earlier than they had previously anticipated. The Bank of Canada was the latest central bank to tighten policy, raising rates by 25 bps this week for the first time since 2010. There is also increased nervousness regarding the policy intentions of the ECB and BoE. But whilst there are good reasons for taking away some of the emergency easing put in place in the wake of the financial crash of 2008-09, it is proving much harder to justify tightening on the basis of inflation than most had expected.

This is a particular problem for the Fed, which has nudged up the funds rate in four steps of 25 bps over the past 18 months but is reliant on signs of higher inflation to justify ongoing policy normalisation. US core CPI inflation, which was running above the Fed’s 2% target rate last year, slipped back to 1.7% in May and June and is thus at the bottom end of the range in place since 2011. Wage inflation has picked up, but here too the acceleration has been modest, with hourly earnings running at an annual rate of 2.8% in June – only 0.5 percentage points higher than the average of the last three years.

For an economy which is running close to what appears to be full employment, this might seem rather surprising. But the headline unemployment rate, currently 4.4%, understates the degree of slack in the US labour market. The so-called U6 rate, which adds in “marginally attached” workers – defined as “those who currently are neither working nor looking for work but indicate that they want and are available for a job and have looked for work sometime in the past 12 months” – is still at 8.6%. This is slightly higher than the previous cyclical trough in 2007, when it reached 8.0%, and significantly above the low of 6.8% recorded in October 2000. Arguably, therefore, the jobless rate can still fall a little further before wage and price inflation starts to become more of an issue.

The UK shows a similar – indeed, perhaps more extreme – picture with the unemployment rate in the three months to May at its lowest since 1975 whereas the rate of wage inflation, at 1.8%, is a full percentage point below that recorded last November. As in the US, there is a significant amount of spare capacity in the labour market. Currently, 12% of those working in part-time employment are doing so because they cannot find full-time employment. Whilst this is down from a peak of 18.5% in 2013, it is still higher than the 8-9% range recorded before the recession of 2008-09 and points to a certain degree of involuntary underemployment. This in turn suggests that there have been structural changes in the labour market which have impacted on the traditional relationship between headline unemployment and wage inflation.

For many decades, economists have focused on the negative relationship between wage inflation and unemployment first postulated by Bill Phillips in the 1950s. In its simplest form, this suggests that policymakers face a trade-off between unemployment and inflation. In practice, the relationship holds only in the short-term, if at all. What is notable, however, is that in the UK and US there has been a flattening of the curve in recent years, suggesting that any negative relationship between wages and unemployment is even weaker today than in the past. 

This is illustrated for the UK in the chart below, based on an idea presented in Andy Haldane’s recent speech entitled “Work, Wages and Monetary Policy.” The chart shows the trend derived from a linear regression of wage inflation on the unemployment rate over various periods. Two features are evident: Most obviously, the line has moved down reflecting the fact that over time inflation in the UK has fallen. But it is also notable that the slope of the line has become shallower. In other words, UK wage inflation has become less sensitive to changes in the unemployment rate. To illustrate the implications of this, we assess the wage inflation rate consistent with an unemployment rate of 5.5% and how this would change if unemployment fell to 4.5% (current levels).
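For those who want to replicate the exercise, a minimal sketch of the calculation is below (illustrative only: the wage and unemployment series are not reproduced here, and the sub-period splits follow the text).

```python
# Sketch of the exercise described above: fit wage inflation on the
# unemployment rate separately by sub-period, then compare predictions.
import numpy as np

def phillips_fit(unemployment, wage_inflation):
    """OLS of wage inflation on the unemployment rate -> (intercept, slope)."""
    slope, intercept = np.polyfit(unemployment, wage_inflation, 1)
    return intercept, slope

def predicted_wages(intercept, slope, u):
    return intercept + slope * u

# For each of 1971-1997, 1998-2012 and 2013-2016, compare
# predicted_wages(a, b, 5.5) with predicted_wages(a, b, 4.5):
# the intercept captures the level shift, the slope the sensitivity.
```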

The results are shown in the table (below). Simply put, an unemployment rate of 5.5% would be associated with wage inflation of 14% on the basis of the relationship over the period 1971-1997, falling to 4.1% over 1998-2012 and just 2.1% on the basis of the data for 2013-2016. What is also interesting is that a one percentage point fall in the jobless rate to 4.5% has a much smaller impact based on recent years’ data than in the pre-recession period. For example, it might have been expected to produce a 0.9 percentage point rise in wage inflation over the 1998-2012 period, compared with a 0.5pp rise based on recent data.

Space considerations preclude a look at the reasons for the weaker sensitivity of wage inflation to labour market conditions. It may be the result of factors such as a lower degree of unionisation, the more widespread use of zero-hours contracts and the rise of the gig economy, all of which have raised the degree of slack which the headline unemployment rate does not capture. But what the analysis does suggest is that policymakers can afford to spend less time worrying about the impact of low unemployment on wage inflation. There may be a case for higher interest rates, but it is not to be found in the labour market.

As a final thought, I am struck by certain parallels with Japan. Following the bursting of the bubble economy, the Japanese authorities failed to spot the structural factors which led the economy to the brink of deflation, notably an ageing demographic profile which prompted a switch towards saving rather than consumption. The one factor we might be missing today is the impact of automation, which threatens a significant substitution of capital for labour and which could put downward pressure on the relative price of labour. I would thus not be in a hurry to raise interest rates to counter a wage inflation threat which has so far failed to materialise.

Tuesday 4 July 2017

Getting our facts right

A few weeks ago I was involved in a debate with a young analyst who refused to believe that exchange rates are driven by factors other than trade deficits (not current accounts, simply the flow of trade in goods). After fruitless attempts to engage in some form of intellectual debate, each met with the stock response “I disagree”, I simply shut down the conversation. This is not my preferred mode of interaction – far from it. We learn from discourse, and I like to think I am open to changing my mind on various issues if the facts prove me wrong.

It was in this vein that I read with interest a blog piece by Noah Smith entitled “Is economics a science?” “Real” scientists would treat the question with contempt, and indeed I never try to claim that it is. But what economics tries to do is measure and draw inference from observation. In that respect it employs scientific methods, even if it does not always reach scientific conclusions. One reason why theory and practice differ so much is that the logical economic answer is not always politically acceptable. Economics also has deep philosophical roots which colour the prior beliefs of many practitioners. Indeed, one of Adam Smith’s noted works – admired by many on the political right – was The Theory of Moral Sentiments, published 17 years before The Wealth of Nations. It is perhaps these philosophical underpinnings which explain why adherents of the Austrian school of economic thought, which also derives from a branch of philosophy, eschew empiricism in favour of a priori deduction in order to reach a conclusion.

I could not help thinking during the Brexit debate last year that many of the leading Brexiteers were adherents of free market economics of the kind espoused by the Austrian school. It therefore does not surprise me that many of their arguments were not backed up by empirical analysis. I have also been struck by the apparent shift in tone of those who 12 months ago supported Brexit. Only today, the campaign director of Vote Leave, Dominic Cummings, admitted that “in some possible branches of the future leaving will be an error” (let me correct you there, Dominic: in pretty much all branches of the future leaving will be an error). Cummings appears to direct much of the blame for this at the way it has been handled by Downing Street. Personally, I prefer the explanation that those responsible for promoting the cause did not do their homework and failed to think through the implications of their actions. In other words, they adopted a very unscientific approach.

However, we also have to be very careful when making arguments based on data alone. One of the issues which the academic world is currently very concerned about is the accuracy and replicability of much (non-economic) scientific work. Only last week, the president of the Royal Statistical Society, Professor Sir David Spiegelhalter, pointed out that public trust in scientific conclusions is being undermined by a “failure to adhere to good scientific practice and the desperation to publish or perish.” As Spiegelhalter points out, most scientists do not overtly falsify their data, but they sometimes play fast and loose with statistical inference (credit should also go to The Economist for having made this point repeatedly in recent years).

Aside from problems arising from the accuracy of results, economics suffers from another problem due to the quality of the underlying data. Although I do believe that economic statisticians are free from political bias, economic data often suffer from sampling bias because they are constructed by drawing population inferences from relatively small samples. They are often at best an approximation to reality. A case in point is UK labour force data, where a tightening of the criteria for benefit eligibility means that many people whose fitness for work is questionable have been reclassified as part of the labour force. UK immigration data are not fit for purpose either, despite forming a key element in the government’s Brexit strategy (amongst other reasons, because the UK does not require migrants to register after arrival, the figures are compiled from the International Passenger Survey, which has numerous methodological shortcomings).

But for all that, a debate based on some form of data is always more informed than one based purely on belief and supposition. As the Canadian academic Marshall McLuhan pointed out, “a point of view can be a dangerous luxury when substituted for insight and understanding.” A year on from the Brexit referendum, that rings all the more true.

Thursday 11 May 2017

High Labour costs

Four weeks from today, the main UK political parties will go head-to-head in an election we do not really need to have. No prizes for guessing that Brexit will be the key battleground on which it is fought. But with changes in the leadership of all the main parties since 2015, this really should be an opportunity to address many of the key economic issues which have plagued the UK over the last seven years: the lack of investment; the over-reliance on austerity; and the chance to reset the terms of the EU debate, which David Cameron got so totally wrong and which Theresa May is not helping to improve. One might have thought that by now the parties would have their economic plans ready to roll, in order to give us time to assess the issues. Well, not exactly. The Conservatives are not planning to publish their manifesto until next week, and the best we have from Labour is a leaked draft which was splashed all over the press, framed as a socialist document worse than “the longest suicide note in history”, as their 1983 agenda was dubbed.

If you actually read through the leaked draft of the Labour Party manifesto, rather than relying on the headlines which tell us how very socialist it is, there are some rather interesting ideas in there. Jeremy Corbyn, for all his many faults, is trying to fight an election on issues of fairness and responsibility. The key message is that the vast majority of the electorate has been squeezed since the financial crisis-induced economic collapse, and Labour wants to do something about rectifying it. Thus the plans outlined so far indicate more spending on the NHS and the creation of a National Care Service; the building of more new houses; the scrapping of university tuition fees and the reintroduction of student maintenance grants. Add in the prospect of establishing a National Investment Bank to facilitate £250bn of spending on infrastructure over the next ten years (which is not a bad idea, and one I will deal with another time), and you have what sounds like a classic fiscal stimulus. I would use the phrase “pump priming” but Donald Trump has apparently just invented it. (“Have you heard that expression used before? Because I haven’t heard it. I mean, I just…I came up with it a couple of days ago and I thought it was good.”)

There is just one tiny problem: The plan sounds horrendously expensive – and that is before we even talk about the renationalisation of rail and energy. Let’s start with education. The Institute for Fiscal Studies reckons that Labour’s Higher Education policy would raise the deficit by over £8bn (about 0.5% of GDP at current prices). Investing £250bn in infrastructure over a ten year period implies a boost equivalent to 1.5% of GDP per year. To secure the financing, taxes must inevitably go up. Labour has suggested that it will raise income taxes on those earning over £80,000 per year (the top 5%), though has not said by how much, and “will ask large corporations to pay a little more.”

Some back-of-the-envelope calculations suggest that there are 1.1 million taxpayers earning between £80k and £150k per year paying higher rate tax at 40%, and 0.3 million earning above £150k paying a 45% rate. This means that only around 25% of all higher rate taxpayers earn more than £80k. We can thus take HMRC’s tax rate elasticity multiplier, which calculates the full effect of raising higher rate taxes, and assume a 25% efficiency rate compared to the full impact. Running through the numbers, each 1 percentage point rise in tax on those earning above £80k per year would yield around £0.5bn in revenue per year. If the tax rate is whacked up by 4 to 5 percentage points, we could thus make a start on funding the education costs. The ready reckoner also suggests that each 1 percentage point on the corporate rate would reap around £2.4bn per annum. Thus, reversing the planned 3 percentage point cut in corporate taxes by 2020 yields another £7.2bn per year once fully in place. A Labour government could even raise corporate taxes back towards 25% over (say) five years, yielding an extra £12bn by 2022. Adding up these numbers (an effective 8 percentage point rise in corporate taxes and 5 points on taxes for higher earners), we start to get close to the £25bn needed for annual infrastructure spending.
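For those who like to see the sums laid out, the arithmetic above runs as follows (the per-point yields are the assumptions quoted in the text):

```python
# Back-of-the-envelope arithmetic from the paragraph above.
yield_per_pp_income = 0.5   # £bn per year per 1pp on income tax above £80k
yield_per_pp_corp = 2.4     # £bn per year per 1pp on the corporate rate

income_rise_pp = 5          # rise in tax on higher earners
corp_rise_pp = 8            # 3pp reversal plus ~5pp back towards 25%

total = (income_rise_pp * yield_per_pp_income
         + corp_rise_pp * yield_per_pp_corp)
print(f"~£{total:.1f}bn per year vs the £25bn infrastructure target")
# -> ~£21.7bn: "close to the £25bn needed"
```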

But funding the renationalisation would be enormously expensive. A brokerage report by Jefferies in 2015 put the cost of renationalising the energy sector at £185bn (roughly 11% of GDP). They also pointed out that “if a future Labour government restricted itself to just acquiring the UK assets of the big six generators plus National Grid, the cost would be £124bn.” I have no idea what renationalising the rail sector would cost, but let’s say £60bn for the sake of argument. An increase of £184bn in public outlays would raise the debt-to-GDP ratio by around 11 percentage points at a stroke. Even assuming this is not a problem, the markets would almost certainly demand a higher risk premium on gilts, so debt servicing costs would rise. But here is the kicker: Labour has proposed a Fiscal Credibility Rule which plans to reduce the current balance to zero on a five year rolling timescale (which sounds to me like a never-never rule), but also requires that the debt-to-GDP ratio be lower at the end of the parliamentary term than at the beginning. Nationalising rail and energy would blow a hole in that, but fortunately Labour proposes to suspend the operation of the rule so long as monetary policy is operating at the lower bound. So that’s all right then!

All of these numbers are back-of-the-envelope calculations and in no way constitute a detailed analysis of the costs. Although many commentators liken this document to Labour’s 1983 election manifesto, its 1974 document, which called for “more control over the powerful private forces that at present dominate our economic life”, was at least as damaging because the party was actually in government. Labour’s main mistake in the 1970s was the failure to recognise that the poor performance of the British economy was not due simply to the failings of the capitalist system: it owed much to an insular view of the country’s economic problems. It feels very much like we are at that point again today.

Thursday 8 December 2016

Those sixties weren't so great


Bank of England Governor Carney delivered an interesting speech this week (here) in which he took a closer look at the reasons behind the rise in populist responses to our current economic ills. None of it was particularly new, but it was illuminating to hear a heavyweight economic policymaker address the issues in a more rational fashion than I have yet heard from most politicians.

Most of the newspaper headlines focused on his one killer statistic: that in the UK we are currently experiencing “the first lost decade since the 1860s.” (Clearly those sixties were not all about flower power.) Carney used the excellent BoE database (here) to show that real wages over the last ten years have grown at their slowest rate since the mid-nineteenth century. And as I have long argued, although productivity performance has been lousy, it has at least outstripped real wage growth, which suggests that workers have not been compensated for their efforts. Indeed, the share of wage and salary income in UK GDP (technically, gross value added at factor cost, but let’s not overcomplicate things) has fallen from around 58% in 2011 to 56.5% in 2015.

Carney makes the valid point that although economists are unshakeable in their belief that society benefits from free trade, not everybody benefits equally. As he put it, “the benefits from trade are unequally spread across individuals and time.” So far as I am concerned, he is preaching to the converted: those of us who recall the wholesale destruction of large parts of the UK manufacturing base in the 1980s need no reminding that there were significant adjustment costs, as those losing their jobs had nowhere else to turn. Some people were eventually forced to relocate to find work; others had to wait for new local opportunities to arise. Viewed 30 years on, many of the economic scars have healed: the UK macroeconomic data show that real incomes per head are almost double the levels of the early 1980s. But at the time the local dislocation was huge, and today it is not just the UK which is facing these problems: it is an issue across the whole of (what is still euphemistically called) the industrialised world.

Those who argue that markets will always adjust often overlook the fact that the longer-term gains are smaller than they appear when offset against the short-term costs. The pace of technological change magnifies these impacts. If people are concerned that their jobs can be replaced by machines, they are bound to become fearful and resentful. This is, of course, not new as the Luddite movement of the early 19th century demonstrates (here for a quick overview). We should also take encouragement from the fact that societies have usually managed to accommodate technological advances relatively easily. But this will not happen if we continue to plod down the same unimaginative policy path that we have been following in recent decades.

Before turning to what needs to be done, Carney defended the role of monetary policy by arguing that it “has offset all of the headwinds to growth arising from private deleveraging, fiscal consolidation and subdued world growth.  People haven’t been made poorer.” But he noted that they feel worse off because productivity growth remains subdued. Recall Paul Krugman’s line that “productivity isn’t everything, but in the long run it is almost everything.  A country’s ability to improve its standard of living over time depends almost entirely on its ability to raise its output per worker.” (Actually that is true only if workers are compensated for their productivity performance which, at least in the UK, has not been true in recent years).

Carney thus believes that efforts to boost productivity are an important element in generating an economic turnaround. Quite how we achieve this is not so easy to identify and we will revisit it another time. However, his conclusion was a sharp retort to the recent criticisms which have been put his way by politicians: “To address the deeper causes of weak growth, higher inequality and rising insecurity requires a globalisation that works for all. For the societies of free-trading, networked countries to prosper, they must first re-distribute some of the gains from trade and technology, and then re-skill and reconnect all of their citizens. By doing so, they can put individuals back in control.” This is an interesting twist on the take-back-control of the Brexiteers, and I have to say I agree with the Governor on this one.

It is just a pity that such a cogent analysis of the UK’s ills was left until almost six months after the referendum. When it takes the Governor of the Bank of England to point out that “we must grow our economy by rebalancing the mix of monetary policy, fiscal policy and structural reform,” it is a sad indictment of a political class which continues to deflect the blame for years of policy neglect onto the EU.

Saturday 3 December 2016

Better not call Paul

Paul Samuelson was quite clearly a brilliant man and one of the most influential economists of the twentieth century. His magnum opus, Foundations of Economic Analysis, published in 1947, was one of the first rigorous mathematical treatments of important economic concepts. But for all its undoubted brilliance, I have long thought that Samuelson's work was one of the worst things to happen to economics.

This is not to denigrate his work. Samuelson produced original insights in fields as diverse as consumer theory, welfare economics, public finance and international trade. Rather, the problem is that he spawned a number of imitators who, captivated by the elegance of his work, attempted to replicate his mathematics rather than his economic insights, with the result that academic economics became ever more algebraically rigorous. As Lo and Mueller (2010, here) have pointed out, whilst economics has become much more rigorous and "has led to a number of important breakthroughs ... 'physics envy' has also created a false sense of mathematical precision".

This level of abstraction was part of the reason why the severity of the market crash of 2008 came as such a surprise. As Goldman Sachs' CFO commented in the Financial Times in August 2007, "We are seeing things that were 25-standard deviation moves, several days in a row." Andy Haldane, the Bank of England's chief economist, later pointed out: "Assuming a normal distribution, a 7.26-sigma daily loss would be expected to occur once every 13.7 billion or so years. That is roughly the estimated age of the universe. A 25-sigma event would be expected to occur once every 6x10^124 lives of the universe. That is quite a lot of human histories." In simple terms, the risk models used at the time were based on the wrong statistical distribution, and firm conclusions were being drawn from them.
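As a quick sanity check on the quoted figure (a sketch which assumes daily moves are standard normal – the very assumption being criticised – and uses calendar days, which roughly reproduces Haldane's number):

```python
# How often would a 7.26-sigma daily loss occur under a normal distribution?
from scipy.stats import norm

p = norm.sf(7.26)        # probability of a one-day move beyond 7.26 sigma
years = 1 / (p * 365)    # expected waiting time, in years
print(f"expected once every {years:.2g} years")  # ~1.4e10: the age of the universe
```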

It is thus no surprise that attempts have been made to reclaim the centre ground of economics. A recently published book entitled The Econocracy: The Perils of Leaving Economics to the Experts (here) seeks to redress the balance. As the authors note: “Politics and policymaking are conducted in the language of economics and economic logic shapes how political issues are thought about and addressed. The result is that the majority of citizens, who cannot speak this language, are locked out of politics while political decisions are increasingly devolved to experts.”

A letter in the Financial Times (here) recently made a similar point, arguing that "the folly of mainstream economists is their pretence to emulate the natural sciences, presuming to be value free." The author, Yeomin Yoon of Seton Hall University in New Jersey, noted that current practice increasingly deviates from the teachings of Alfred Marshall, who argued: “(1) Use mathematics as a shorthand language, rather than as an engine of inquiry. (2) Keep to them till you have done. (3) Translate into English. (4) Then illustrate by examples that are important in real life. (5) Burn the mathematics. (6) If you can’t succeed in 4, burn 3 … I think you should do all you can to prevent people from using mathematics in cases in which the English language is as short as the mathematical.” Given that Marshall was no mean mathematician himself, that is a pretty powerful argument.

Whilst it is easy to be critical of academic economics for creating a level of abstraction that so many people feel unable to relate to – and, as a consequence, feel able to ignore (as we saw during the Brexit discussions) – the future may not be so bleak. Macroeconomics may be operating in an intellectual cul-de-sac (said the macroeconomist), but matters are not helped by the fact that too many people outside the profession expect economists to predict the future with an unreasonable degree of accuracy. The roots of economics lie not in the study of abstract macroeconomic quantities with highly politicised connotations, but in the study of how people make decisions. As a result, the discipline of microeconomics is thriving. The new and exciting field is behavioural economics, where experimentation rather than algebra is used to tease out some of the newest ideas in economic thinking.

Economics is a discipline which has traditionally borrowed ideas from other areas. For a long time, perhaps, it borrowed too much from mathematics and the physical sciences, but by going back to its roots as a social science and borrowing ideas from psychology, the revolution which so many people are calling for may actually already be happening.

Friday 2 December 2016

Don't make the same mistakes twice

"It is not difficult to indicate the reasons why business last year passed through periods of great anxiety. Under the strain of almost uninterrupted political tension … the state of the world is feverish rather than healthy; and whatever recovery may be seen is anything but steadfast, since it is dependent on the use of stimulants on the one hand and interrupted by grave disturbances on the other. In the face of grim reality in Europe there is decidedly less belief in experimentation with new methods of economic policy.”

This summary of the global conjuncture by the Bank for International Settlements sounds all too familiar. But it was taken from the Annual Report released in May 1939 (here). Indeed, although we should not overdo the parallels with the 1930s, there are a number of worrying economic developments which investors ignore at their peril – and they are not the obvious ones which spring to mind when we talk about the concerns of that particular decade.

It is, however, difficult to overlook the fact that the problems today and in 1939 were triggered by a huge financial crash and compounded by policy errors. Fiscal policy was criticised for being too tight both in the wake of the 1929 crash and again today. In 1931, President Hoover’s State of the Union message noted that “even with increased taxation, the Government will reach the utmost safe limit of its borrowing capacity by the expenditures for which we are already obligated... To go further than these limits ... will destroy confidence.” Today it is European fiscal policy which is accused of not stepping up to the plate. 

Ironically, one of the lessons of the 1930s, derived from the work of economists led by Keynes, was that government had a role to play in the economic cycle by stepping in to make up for any shortfall in aggregate demand. It has always struck me as bizarre that these insights, which prompted Keynes to write The General Theory, should be ignored at a time when the economic cycle shows some similarities with the macroeconomics of the Great Depression. There is also an irony in that both in the 1930s and again today, it is monetary policy which comes in for great criticism. Between 1930 and 1933 the Fed was accused of running an overly restrictive monetary stance. Today, the world’s major central banks are accused of being too lax.

Protectionist sentiment is also rising up the agenda once more. Donald Trump made “promises” on the campaign trail which were nothing short of protectionist (“I would tax China on products coming in … let me tell you what the tax should be … the tax should be 45 percent.”) Of course, this is a response to concerns that US jobs are being “exported” to lower cost economies, in much the same way as occurred in the wake of the 1929 crash. Back then, the result was the signing into law of the US Tariff Act of 1930, which did a lot of damage to world trade volumes as other countries retaliated against the raising of US import tariffs. It is a sobering thought that over the last four years global export growth has posted its slowest multi-year rate since the 1930s (see chart).

None of this means that current economic conditions will lead inexorably to the same outcomes as in the 1930s. But we should not ignore the role of fiscal policy in helping to promote recovery – as the OECD noted in the latest edition of its Economic Outlook, released this week. And as the discussions over Brexit continue, some British ministers appear to think that a world of tariffs (albeit low ones) is preferable to belonging to a system without any, so long as they can achieve their own version of economic nationalism. Most economists believe that this will result in a loss of British economic welfare, but just as the likes of US economist Irving Fisher were initially ignored for pointing out the same thing in 1930, so no-one in government appears willing to engage in a rational debate about the costs of Brexit.

Economists of my era were taught that the 1930s were a decade of collective madness, that we had learned the lessons of that benighted period and that we would not repeat its mistakes. But history has a habit of making fools of us all. We can only hope that rationality begins to reassert itself before we make even bigger economic mistakes from which it is harder to recover.

Wednesday 23 November 2016

Modern macroeconomics: Is it really so bad?

I have to confess that I have long been torn between the intellectual pursuit of academic economics and the uselessness of much of its output. Part of the appeal of economic theory is that it attempts to address problems in a rigorous manner. Of course, that is also its Achilles heel: the intellectual underpinnings of much that passes for state-of-the-art thinking are simply bogus. For that reason, papers such as the one by Paul Romer, chief economist at the World Bank, entitled ‘The Trouble with Macroeconomics’ (here) always strike a chord. In many ways this is a subversive read for macroeconomists, and it makes a number of serious, but in my view substantiated, allegations regarding the state of economics today.

Romer’s key thesis is that the “identification problem” in economics has essentially taken us round in a circle back to where we started in the 1970s. I hope readers will forgive a little digression at this point so that we can more easily understand the nature of the problem which Romer sets out. The identification problem requires, as Chris Sims noted in a famous 1980 paper, that we must be able to identify “observationally distinct patterns of behaviour” for a given model. This is both a philosophical and empirical argument. Philosophically, it requires us to specify very carefully how our economic system works. In an empirical sense, it means we must construct models in which unique values for each of the model parameters can be derived from other variables in the system. This in turn allows us to clearly identify how economic linkages operate.
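The textbook illustration (mine, not Romer’s) is a market in which demand and supply both relate quantity to price:

$$q_t = \alpha p_t + u_t \;\;(\text{demand}), \qquad q_t = \beta p_t + v_t \;\;(\text{supply}).$$

Every observed pair $(p_t, q_t)$ is an equilibrium satisfying both equations, so any weighted mixture of the two fits the data equally well and $\alpha$ and $\beta$ cannot be recovered from the data alone. Identification requires a restriction from outside the data – say, an exogenous variable such as weather that shifts the supply curve but not the demand curve.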

As it happens, most empirical macro models in use in 1980 were over-identified: It was possible to explain each variable in the model by various different combinations of other variables. Consequently, we were unable to determine precisely how the macroeconomy worked. In his 1980 paper entitled ‘Macroeconomics and Reality’ (a title which, when I first read the paper, seemed most inappropriate), Sims noted that such models could only be made to work by applying “incredible” identifying restrictions.

This identification problem is the key to understanding Romer’s critique of much modern macroeconomic theory. In his view, by trying to get away from imposing such “incredible” restrictions, macroeconomists have ended up devising models which are themselves increasingly divorced from reality. Romer starts by taking direct aim at Real Business Cycle (RBC) models, which were a direct response to many of the criticisms of the over-identified macro models of the 1970s. He argues that they make the hugely simplifying assumption that cyclical fluctuations in output are solely the result of shocks. The question Romer poses is: what are these imaginary shocks? Is there really no role for monetary policy, as much of the thinking in this field suggests? If that is true, we should all pack up and go home – and the Fed, ECB, BoE et al should abandon attempts to stabilise the economic cycle.

He then aims his blunderbuss at the DSGE models which followed from this, arguing in effect that they are a post-truth way of looking at the world because they rely less on data and more on a series of assumptions about how the world works. (I posted on this topic here). Indeed, such is Romer’s apparent contempt for some of this analysis that he states “the noncommittal relationship with the truth revealed by these methodological evasions [and] dismissal of fact goes so far beyond post-modern irony that it deserves its own label. I suggest ‘post-real’.”

Even worse, in his view, is that many of the proponents of modern macroeconomics have tended to band together and reinforce each other’s views rather than challenge them. This unwillingness to challenge belief systems has, in Romer’s opinion, promoted a stagnant culture which has slowed the advancement of new ideas in economics.

Faced with such a nihilistic view of economics, you may wonder what the point of it all is. I share Romer’s criticisms of many of the ideas which have found their way into the economic mainstream. But I also adhere to the George Box school of thought that whilst all models are wrong, some are useful. There is thus nothing inherently wrong with the idea of going in the direction of RBC or DSGE models – it is just that they have captured the intellectual high ground and have proven difficult to shift. And like it or not, many of the competing theories have not proven up to scratch either.

What is also interesting is that whilst many economists passionately advocate a particular school of thought, few people of my acquaintance argue in public in such terms. So either this debate is a particularly American academic thing, or it is confined to those pushing hard to get their material into the journals (and failing). And finally, I have long argued that the financial crisis will act as an efficient way of winnowing out many of the worst ideas in macroeconomics. Just as the Great Depression of the 1930s produced the ideas of Keynes and his acolytes, so the current crisis in the western world may yet lead to a more fruitful approach to many economic issues. So chin up, Mr Romer. The darkest hour comes just before dawn.