
Friday, 25 July 2025

Rationality meets reality

The rational expectations (RE) revolution which swept through macroeconomics in the 1970s and 1980s has changed the way we think about many aspects of macro. As theories go, it is coherent and persuasive and has allowed us to approach questions in economics and finance differently. But there have been rumblings recently from respected professionals in the field expressing doubts about its usefulness. As one who has never fully bought into the idea that this is actually how people form expectations, I am obviously prone to confirmation bias, but clearly I am not the only one who has reservations about one of the key underpinnings of modern macroeconomics.

What are rational expectations?

In very simple terms, RE assumes that economic agents make the best use of all currently available information to form predictions about future events in a logically consistent manner. The upshot is that individuals do not make systematic forecast errors (although they can make random errors). This appears uncontroversial at first glance, but it has profound consequences for policymakers. Prior to the work of Robert Lucas in the 1970s, it was assumed that a paradigm used to assess the outcome of a policy change would either remain unchanged in future, or would change only slowly as expectations adapted to new evidence. But Lucas pointed out that as economic agents recognise and internalise the way policy affects the economy, they will change the way they form expectations. As a result, the old paradigm is no longer valid: applying the same policy options in future would result in different outcomes because agents would anticipate what was likely to happen and act accordingly.

A simple example is the Phillips curve, which was based on the idea that there is a stable and exploitable inverse trade-off between inflation and unemployment. In this static world, a policymaker wishing to reduce unemployment would be prepared to allow inflation to rise. But in a world where expectations are formed rationally, they can only get away with that once. Next time round, workers push for higher wage claims to offset the erosion of real wages, with the result that employment falls (unemployment rises). This is pretty much what happened in the 1970s, as the inverse relationship between the two broke down.
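
To make the mechanism concrete, here is a minimal sketch of an expectations-augmented Phillips curve. The parameter values are purely illustrative assumptions, not estimates; the point is that once workers fold realised inflation back into their expectations, holding unemployment below its natural rate requires ever-higher inflation:

```python
# Illustrative parameters -- assumptions for exposition, not estimates
u_star = 0.05   # natural rate of unemployment
beta = 2.0      # slope of the Phillips curve

def phillips(u, pi_expected):
    """Expectations-augmented Phillips curve: pi = pi_e - beta * (u - u_star)."""
    return pi_expected - beta * (u - u_star)

# Period 1: workers expect 2% inflation; policy pushes unemployment below u_star
pi_e, u = 0.02, 0.04
pi_1 = phillips(u, pi_e)
print(f"Period 1 inflation: {pi_1:.1%}")   # 4.0% -- above expectations

# Period 2: workers revise expectations up to realised inflation, so holding
# unemployment at 4% now requires even higher inflation -- the trade-off shifts
pi_2 = phillips(u, pi_1)
print(f"Period 2 inflation: {pi_2:.1%}")   # 6.0%
```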

Lucas led the intellectual charge of New Classical economics, which usurped the dominant Keynesian paradigm and challenged the efficacy of discretionary macroeconomic policies by arguing that if individuals can foresee the consequences of policy changes, attempts to manipulate the economy through fiscal or monetary policy become less effective. The New Keynesian response was to synthesise Keynesian principles with insights from the New Classical revolution. Crucially, however, they did not reject the RE hypothesis.

Finance, too, has been captured by the RE revolution. RE are a key component of the Efficient Markets Hypothesis, according to which asset prices reflect all available information, and market participants form rational expectations about future events. In an efficient market, it is assumed that investors cannot consistently achieve abnormal returns by exploiting past information because prices already incorporate all relevant data. Furthermore, the Capital Asset Pricing Model (CAPM) assumes that investors form homogeneous and internally consistent expectations about returns, which is related to the rational expectations idea that agents' forecasts are consistent with the model they use.
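
For concreteness, the CAPM pricing relationship is simple enough to write down in a few lines. The inputs below are purely illustrative assumptions:

```python
def capm_expected_return(rf: float, beta: float, expected_market_return: float) -> float:
    """CAPM: E[R_i] = rf + beta_i * (E[R_m] - rf). The model assumes all
    investors hold the same (homogeneous) expectations of E[R_m] and beta."""
    return rf + beta * (expected_market_return - rf)

# Illustrative inputs: 3% risk-free rate, beta of 1.2, 8% expected market return
print(f"{capm_expected_return(0.03, 1.2, 0.08):.1%}")  # 9.0%
```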

Are expectations formed rationally?

Rational expectations are, to use the jargon, ‘model consistent’. In other words, the average prediction across all economic agents matches the predictions of an economic model which captures the true structure of the economy. Obviously, we are not solving complex models of the economy to derive views about the future. Instead, we rely on heuristic rules of thumb, public forecasts, market signals or simplified mental models. If these rules generate outcomes in line with how the economy actually works, we can still proceed on the basis that agents form rational expectations. But it is questionable whether such rules actually work, particularly at times of elevated uncertainty such as we are experiencing today. Given the raised prospect of extreme outcomes in the wake of Donald Trump’s election, we have even less certainty about what the world might look like in future. Given this lack of information, agents can be excused for simply extrapolating past performance forward, treating it as “normality” and assuming that risks are evenly distributed around this outcome. It might be a rational way of looking at the world, but such expectations are being formed adaptively rather than in a model-consistent way.
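
The distinction is easy to illustrate with a small simulation. In the sketch below (all parameters are assumptions for exposition), inflation follows a simple process whose mean shifts mid-sample. A model-consistent forecaster who knows the true process makes only random errors; an adaptive forecaster makes errors that are biased and serially correlated after the break:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
mu, mu_shift, rho, sigma = 0.02, 0.05, 0.8, 0.005   # assumed process parameters

# "True" data-generating process: AR(1) inflation whose mean shifts mid-sample
means = np.where(np.arange(T) < T // 2, mu, mu_shift)
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = means[t] + rho * (pi[t - 1] - means[t]) + sigma * rng.standard_normal()

# Model-consistent (rational) forecast: uses the true process, break included
re_fc = means[1:] + rho * (pi[:-1] - means[1:])

# Adaptive forecast: partial adjustment toward the last observed outcome
lam = 0.3
ae_fc = np.zeros(T)
for t in range(1, T):
    ae_fc[t] = ae_fc[t - 1] + lam * (pi[t - 1] - ae_fc[t - 1])

# RE errors should be ~white noise; adaptive errors are biased and persistent
for name, err in [("rational", pi[1:] - re_fc), ("adaptive", pi[1:] - ae_fc[1:])]:
    rho_e = np.corrcoef(err[:-1], err[1:])[0, 1]
    print(f"{name}: mean error {err.mean():+.4f}, error autocorrelation {rho_e:+.2f}")
```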

What does the evidence tell us?

Nowhere are RE more central than in financial markets, where asset prices are typically assumed to reflect the rationally expected present value of future cash flows. In an important 1981 paper, Robert Shiller examined this idea by testing whether fluctuations in stock prices could be explained by changes in expectations of future dividends. According to the standard RE-based present value model, most of the variability in prices should come from new information that affects expected future dividends. However, Shiller found that actual stock prices were far more volatile than the relatively stable stream of realised dividends would justify. This result, often described as the “excess volatility puzzle,” called into question whether prices are set purely on the basis of rational expectations, or whether they are also influenced by factors such as changing risk premia, investor sentiment, or other non-fundamental forces.

In an attempt to update the Shiller methodology, I computed the ex post “fair value” of the S&P (in real terms) by discounting actual realised dividends over a five-year horizon, assuming perfect foresight of future payouts. As the dataset extends only to June 2025, the perfect foresight calculation is only feasible up to June 2020; for subsequent periods, I extrapolated using the trailing 12-month average dividend to maintain a continuous valuation series. While this is not exactly comparable, it is an approach often used in the academic literature. The results suggest that around the time of the dot-com bubble in the late-1990s, and again around the time of the Lehman bust, equities were overvalued relative to fundamentally justified levels. However, these periods pale in comparison to the post-2020 period (methodological differences notwithstanding), suggesting that equities may be experiencing a period of irrational exuberance.
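
For anyone wanting to replicate something along these lines, the sketch below outlines the general mechanics. It is not my exact calculation – the 7% discount rate and the terminal-value treatment are assumptions – but it captures the approach: discount realised real dividends over a 60-month window, padding with the trailing 12-month average once realised payouts run out:

```python
import numpy as np
import pandas as pd

def ex_post_fair_value(real_price: pd.Series, real_div: pd.Series,
                       horizon: int = 60, annual_discount: float = 0.07) -> pd.Series:
    """Ex post "fair value": discounted realised real monthly dividends over
    `horizon` months, plus the discounted price at the end of the window as a
    terminal value. Where the window extends beyond the sample, dividends are
    extrapolated using the trailing 12-month average. The discount rate and
    terminal-value treatment are illustrative assumptions."""
    m = (1 + annual_discount) ** (1 / 12) - 1          # monthly discount rate
    disc = 1 / (1 + m) ** np.arange(1, horizon + 1)
    trailing = real_div.rolling(12).mean()
    fv = pd.Series(index=real_price.index, dtype=float)
    for i in range(len(real_price)):
        future = real_div.iloc[i + 1:i + 1 + horizon].to_numpy()
        if len(future) < horizon:                      # past June 2020 in the post's data
            pad = np.full(horizon - len(future), trailing.iloc[-1])
            future = np.concatenate([future, pad])     # extrapolate with trailing average
            terminal = real_price.iloc[-1]
        else:
            terminal = real_price.iloc[i + horizon]
        fv.iloc[i] = (disc * future).sum() + terminal * disc[-1]
    return fv
```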

Providing statistical evidence for the existence (or absence) of RE is challenging. One approach is to compare the fundamentally justified price – defined above as the discounted value of future earnings – with the actual observed price. If RE hold, all available information should already be reflected in the price, implying that the difference between the two should not be systematically related to any other variable. Consequently, regressing this difference on another metric should yield a coefficient that is not statistically different from zero (see below).

However, the results suggest that a regression of the difference on observable variables such as the P/E ratio or dividend yield does indeed generate coefficients which are statistically significantly different from zero. The charts (below) plot the forecast error – defined as the difference between the actual return and the expected return – against the observed P/E ratio and dividend yield. Under the RE hypothesis, forecast errors should be purely random: they should not be systematically related to any information known in advance. This should appear as a scatter of points randomly distributed around zero, with the fitted regression line essentially flat. But the plots show a statistically significant upward slope, suggesting that forecast errors are systematically related to both the P/E ratio and dividend yield, which implies that investors could, in principle, have used these variables to improve their forecasts. This provides some evidence against RE, as it implies that prices do not fully incorporate available information at any given time.
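
The test itself is straightforward to run. A minimal sketch, assuming you already have a series of forecast errors and a predictor such as the P/E ratio (the variable names are mine, not from my actual workings):

```python
import statsmodels.api as sm

def re_orthogonality_test(forecast_error, predictor):
    """Regress forecast errors on a publicly observable predictor (e.g. the
    P/E ratio or dividend yield) known at the time of the forecast. Under RE
    the slope should be statistically indistinguishable from zero."""
    X = sm.add_constant(predictor)
    # Overlapping multi-year returns induce serial correlation in the errors,
    # so use HAC (Newey-West) standard errors rather than the OLS defaults
    return sm.OLS(forecast_error, X, missing="drop").fit(
        cov_type="HAC", cov_kwds={"maxlags": 12})

# Hypothetical usage, with errors and pe_ratio as aligned pandas Series:
# res = re_orthogonality_test(errors, pe_ratio)
# print(res.summary())   # a significant slope is evidence against RE
```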

You don’t just have to take my word for it. Cliff Asness, one of the most astute and intellectually rigorous portfolio managers out there, wrote an excellent paper in 2024 arguing that markets have become far less efficient since the early 1990s. Asness offers three reasons why markets are now less efficient compared with the pre-1990 period: (i) the rise of indexing has made stock prices more inelastic with respect to new information; (ii) an extended period of low interest rates has distorted investors’ ability to respond appropriately to changed information; and (iii) the rise of social media has amplified trend-following and momentum strategies at the expense of rational information processing.

One worrying thought is that if markets, with their access to huge amounts of data, are not processing information in a manner consistent with RE, what is the likelihood that households are doing so? This matters because RE are absolutely central to modern DSGE (Dynamic Stochastic General Equilibrium) models, in which agents (households, firms, policymakers) are assumed to form expectations about the future that are model-consistent. If expectations are not formed rationally, forecasts based on such models may be biased, and policy outcomes may be less effective.

Final thoughts

Looking at market movements in recent months, perhaps we can be forgiven for thinking that the past is the only guide we have to future performance. But such reliance on past trends carries its own risks. Adaptive expectations, by definition, anchor forecasts to recent experience, making them slow to incorporate new structural shifts or unprecedented shocks. In the current environment of heightened uncertainty, characterised by a global political realignment and the disruptive potential of technological change, such inertia may lead to systematic forecast errors. Instead of anticipating turning points, markets and policymakers risk being repeatedly surprised by outcomes that fall outside the narrow band of recent history.

Moreover, if everyone leans too heavily on the same backward-looking heuristics, market dynamics themselves can amplify volatility. Herding behaviour may set in, reinforcing bubbles or deepening downturns as agents all update beliefs in the same direction, as Asness implies. In this sense, the process of expectations formation becomes not merely a passive reflection of past data, but an active force that shapes the trajectory of the economy.

Unfortunately, it is extremely difficult to capture the complexity of the expectations formation process in any model, while approaches which rely solely on historical patterns risk missing the disruptive events that define each economic cycle. Whether expectations can ever be fully rational in the strict sense remains debatable – but recognising the limitations of both model-based and adaptive approaches is a step toward better decision-making in an uncertain world.

 

Sunday, 14 April 2024

Error correction (or blame deflection?)

For anyone interested in the practice and methodological issues associated with economic forecasting, you could do a lot worse than read the Bernanke Review of forecasting at the Bank of England. According to the FT, the former Fed chair, who was commissioned to produce a report on the BoE’s forecasting practices after its failure to predict the rise in inflation in 2022, was “brutally honest about [its] failings.” Brutal might be overstating it, but it was an honest assessment that one feels is shared by many BoE insiders. As one might expect, not all economists agreed with all of its conclusions but there was a lot to like about it.

The fact that Bernanke outlined many shortcomings in the BoE’s practices should come as no surprise. No system is ever perfect, and the fact that the current monetary framework has been in place for almost 30 years does suggest that it is time to have a close look. There are a number of questions around the whole process, however. Why was it necessary to have such a review in the first place? If the processes really are as poor as Bernanke highlighted, why did it require an external review to point it out? And if the purpose of the exercise was to address policy errors, should we not be spending time looking at the policy making process rather than putting a lot of effort into the forecast generation process? I will deal with these points below.

What were the conclusions?

It is perhaps instructive first to reflect on Bernanke’s main process recommendations. One of the most widely trailed in advance was the suggestion that the BoE publish scenarios alongside the main forecast. This would “help assess the costs of potential risks to the outlook” and “stress test the judgements made by the MPC.” There is a lot of merit in doing this: The experience of recent years, which has produced the Covid-19 pandemic and the oil price shock, suggests that a single forecast with a univariate central case cannot adequately capture all future states of the world. Even allowing for risks in the form of a fan chart, no forecast could capture shocks of the magnitude of 2020 (see chart below). The Bernanke Review went as far as suggesting that “the fan charts as published in the MPR have weak conceptual foundations, convey little useful information over and above what could be communicated in other, more direct ways, and receive little attention from the public. They should be eliminated.” While there is some truth in this, fan charts are a very useful way of conveying risks around a central case in a stable environment, and there is a case for retaining them.
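
Mechanically, a fan chart is just a set of central percentile bands computed across simulated forecast paths – which is precisely why it works well in a stable environment and fails when the true shock lies outside the simulated distribution. A minimal sketch (the random-walk paths are purely illustrative):

```python
import numpy as np

def fan_chart_bands(paths: np.ndarray, coverage=(0.30, 0.60, 0.90)) -> dict:
    """Turn an (n_simulations x horizon) array of forecast paths into
    fan-chart bands: central percentile intervals at each horizon."""
    bands = {}
    for c in coverage:
        lo, hi = 50 * (1 - c), 50 * (1 + c)
        bands[c] = np.percentile(paths, [lo, hi], axis=0)
    return bands

# Illustrative only: 10,000 random-walk inflation paths over 12 quarters
rng = np.random.default_rng(0)
paths = 2.0 + np.cumsum(0.5 * rng.standard_normal((10_000, 12)), axis=1)
bands = fan_chart_bands(paths)
print(bands[0.90].shape)   # (2, 12): lower and upper 90% band per quarter
```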

Another very important consideration was the nature of conditioning assumptions, particularly for the future path of interest rates. There are a number of reasons why using market rate expectations as the appropriate starting point is less than optimal. For one thing, “forward rates implied by the market curve are not pure forecasts of future rates, because forward rates may incorporate risk and liquidity premiums.” In addition, they may not reflect the MPC’s best judgement of the path of rates, meaning that “a forecast conditioned on the market curve may be misleading.” One alternative is for the central bank to give a preferred path for rates, much as the Riksbank does, although as I argued in this post in 2019, this could simply create a hostage to fortune. Instead, the practice of offering alternative scenarios based on different rate paths will probably suffice.
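
The point about forward rates is easy to see from the arithmetic: the implied forward is pinned down by the spot curve, but it embeds any term premium, so reading it as a pure rate expectation overstates the expected future rate. The premium below is an assumed figure for illustration:

```python
def implied_forward_rate(spot_n: float, spot_m: float, n: float, m: float) -> float:
    """Forward rate implied by spot rates for maturities n < m:
    (1 + f)^(m - n) = (1 + s_m)^m / (1 + s_n)^n."""
    return ((1 + spot_m) ** m / (1 + spot_n) ** n) ** (1 / (m - n)) - 1

# Illustrative: 1y spot at 4.0%, 2y spot at 4.5% => implied 1y1y forward ~5.0%
f = implied_forward_rate(0.040, 0.045, 1, 2)
term_premium = 0.003   # assumed risk/liquidity premium, not observable
expected_rate = f - term_premium   # the market's "pure" rate expectation
print(f"forward: {f:.2%}, premium-adjusted expectation: {expected_rate:.2%}")
```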

A final big point, and one that is close to my heart, is that the software required to manage and manipulate data “is seriously out of date and difficult to use” and should be upgraded and constantly monitored. I don’t know which systems Bernanke is referring to, but my own experience with languages such as R and Python, now in vogue in economic circles, is that they are far less user-friendly and flexible than some of the systems designed in the 1970s. The review was also critical of the BoE’s macro model, COMPASS, unveiled to great fanfare in 2013. Bernanke did not explicitly say that DSGE models may not be up to the job of forecasting, but he offered the view that structural models (of the kind I have long advocated) still have a role to play in forecasting – after all, the Fed still uses them.

Policy considerations

The elephant in the room, however, is why it was felt that such a review was required in the first place. The answer, to put it bluntly, is that it was designed to keep politicians off the BoE’s back after it was accused of failing to predict the huge rise in inflation in 2022 (true) and of responding too slowly (less true). In fact, the BoE's inflation forecast in February 2022 was above that of the consensus, predicting end-2022 inflation at 5.8% versus a consensus expectation of 4.6% (outturn: 10.8%) and end-2023 inflation at 2.5% versus the consensus prediction of 2.1% (outturn: 4.2%). Thus, while the BoE forecast was a significant under-estimate, it was less so than those of most forecasters.

As for the policy response, as I (and many others) have noted previously, there was little anyone could have done to prevent an inflation spike in the face of an external oil price shock. Recall that the UK had just come off the back of a pandemic which had resulted in the steepest decline in output in 300 years and whose long-term effects were at that time still unknown. It did not feel like the right time for a sharp tightening of monetary policy. However, a review of process is a standard response to problems that are really a matter of policy. Simply put, it is a way to deflect attention.

Another issue worth addressing is the question raised by the Sunday Times economics editor David Smith as to why it took an external review to highlight these shortcomings, which were well known internally. We are very much in speculative territory here, but since Bernanke took a lot of evidence from BoE insiders – past and present – it is hard to avoid the conclusion that this review offered an opportunity to tackle internal inertia. This may be the result of senior managers’ lack of knowledge of the issues involved; the fact that their attention has been diverted by other policy matters in recent years (Brexit, the pandemic); or simply a lack of budget resources. Whatever the reason, the Review is a good way to get their attention.

Last word

It is always a good thing to review forecast models and processes, especially when they have been in place for so long, and the Bernanke Review put the BoE’s process under a lot of scrutiny. In many ways it simply came across as a call to modernise a system which, in the grand scheme of things, was already pretty decent but had perhaps been neglected a little over the past decade. However, the one thing it will not fix is that the future is inherently unknowable. No matter how state of the art, no forecasting system can cope with the kind of shocks to which we have been subject of late. Give it another decade and we will be having this debate all over again.

Wednesday, 30 November 2022

The DSGE paradigm: Do Stop Generating Errors

From RBC to DSGE

The recent passing of Ed Prescott, the 2004 Nobel Laureate in economics, was a cause for sadness across the economics profession. Prescott was universally recognised as a revolutionary thinker in the field of macroeconomics and one of his great innovations (along with fellow Laureate Finn Kydland) was the introduction of so-called Real Business Cycle (RBC) models. In simple terms, these models postulate that business cycle fluctuations arise as a result of labour supply decisions in response to stochastic shocks. One of the consequences of this paradigm is that business cycles are optimal responses to productivity shocks and that interventions to offset such shocks are harmful because they cause the economy to deviate from its long-run optimal path.
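
Stripped to its essentials, the RBC mechanism can be caricatured in a few lines: a persistent productivity process drives hours worked, and the labour supply response amplifies the shock's effect on output. This is a deliberately stylised sketch with assumed parameters, not a solved RBC model:

```python
import numpy as np

rng = np.random.default_rng(1)
T, rho, sigma = 200, 0.95, 0.01   # persistent TFP process (assumed values)
eta = 0.5                          # assumed response of hours to productivity

# Log TFP follows an AR(1): z_t = rho * z_{t-1} + eps_t
z = np.zeros(T)
for t in range(1, T):
    z[t] = rho * z[t - 1] + sigma * rng.standard_normal()

# Stylised RBC mechanism: hours rise when productivity (and hence the wage) is
# temporarily high, so labour supply amplifies the shock's effect on output
hours = eta * z
log_output = z + hours             # y = z + n in logs, holding capital fixed

print(f"std(TFP) = {z.std():.4f}, std(output) = {log_output.std():.4f}")
```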

The first attempt to produce an economic model based on these principles was Prescott’s 1986 paper ‘Theory Ahead of Business Cycle Measurement’, which relied on calibrated parameters rather than statistical techniques to fit the data. It was also a model which assumed a world in which there were no distortions. Unsurprisingly, Keynesian economists did not take the RBC conclusions lying down. They argued that the economy is characterised by frictions such as nominal rigidities, the existence of monopoly power and information asymmetries which can result in involuntary unemployment, thus opening up a role for governments to smooth the cycle. In response, the so-called New Keynesians devised a model paradigm which required the imposition of a number of restrictive assumptions in order to approximate the world as they saw it.

Thus did the literature on New Keynesian Dynamic Stochastic General Equilibrium (DSGE) models come into being: Dynamic because they operate over very long (infinite) horizons; Stochastic because they deal with random shocks; and General Equilibrium because they are built up from microfoundations. Such models now dominate much of the academic thinking in the modelling and policy literature. But they are mathematically complex, opaque and founded on a series of assumptions that call into question whether they have anything useful to contribute to the future of macroeconomics[1].

What’s not to like? Quite a lot as it happens!

In the words of Olivier Blanchard, “there are many reasons to dislike current DSGE models”, particularly because of the apparently arbitrary nature of the assumptions on which they are based. For example, aggregate demand is based on infinitely lived households which are assumed to have perfect foresight. Show me one of those and I will give you some hen’s teeth. Furthermore, inflation is modelled with a purely forward-looking equation that takes no account of inflation persistence. But perhaps the most contestable feature of DSGE models is their slavish adherence to microfoundations. These attempt to embed economic behaviour patterns that are invariant to a particular state of the world. This allows macroeconomics to escape from the charge posed by the Lucas critique that the parameters of any model change as circumstances change – a criticism of the models in operation in the 1970s, which was perceived to be one of the reasons why they performed so badly in predicting the recessions of the time.
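
To see the point about persistence, consider the purely forward-looking New Keynesian Phillips curve. Solving it forward with an AR(1) output gap (the parameter values below are standard textbook assumptions) shows that inflation simply jumps with the gap, with no role for its own history:

```python
# Purely forward-looking New Keynesian Phillips curve:
#   pi_t = beta * E_t[pi_{t+1}] + kappa * x_t
# Solving forward with an AR(1) output gap x_t (persistence rho_x) gives
#   pi_t = kappa * x_t / (1 - beta * rho_x)
# Note there is no lagged inflation term: inflation inherits persistence only
# from x_t, which is precisely the criticism noted above.
beta, kappa, rho_x = 0.99, 0.1, 0.8   # illustrative textbook values

def nkpc_inflation(x_t: float) -> float:
    return kappa * x_t / (1 - beta * rho_x)

print(f"{nkpc_inflation(0.01):.2%}")  # a 1% output gap maps to ~0.48% inflation
```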

There is a lot wrong with this way of thinking. For one thing, the microfoundations are based on the behaviour of representative agents. In other words, they impose a theory of how individual firms and households act and assume that we can scale this up to the wider economy. As one who grew up using models based on aggregate data, it has always struck me as odd that we should discard much of the richness inherent in the observational evidence of macro data. An interesting theoretical paper published in 2020 makes the more subtle point that, for the representative agent to mimic the preference structure of the population, extreme restrictions must be imposed on the utility function used to describe household behaviour. The supreme irony of this is that the DSGE revolution was able to capture the intellectual high ground because the structural modelling paradigm that it replaced was unable to counter the criticism levelled in Chris Sims’s classic 1980 paper that such models relied on “incredible” identifying assumptions.

A further thought is that we have little evidence that the utility functions of representative agents are invariant over time, as modern macro theory assumes. And whilst there is no doubt that the research underpinning the macro revolution of the late-1970s and early-1980s – including the influential work of Lucas – is intellectually persuasive, the evidence of this year alone, in which inflation spiked to 40-year highs that most models failed to foresee in 2021, does not persuade me that the DSGE revolution has significantly enhanced the thinking in modern macro.

By this point you have probably gathered that I am highly sceptical of much of the work conducted in modern macro modelling in the last 40 or so years. This is not to deny that it is intellectually fascinating and I am more than happy to play around with DSGE models. But as Anton Korinek points out in this fascinating essay, “DSGE models aim to quantitatively describe the macroeconomy in an engineering-like fashion.” They fall victim to the “mathiness” in economics, of which Paul Romer was so scathing.

And their forecasting performance is poor

We might be more accepting of the DSGE paradigm if it produced significantly better forecasting results than what went before. The events of the past 15 years suggest that this is far from the case, and it is now generally acknowledged that the out-of-sample forecasting performance of DSGE models is very poor. If this does not render them useless as a forecasting tool, it suggests that they are no better than the structural models which the academic community has spent forty years trying to knock down. Proponents will argue that this is not what they are designed to do. Rather, they are designed to capture how the economy is built around the deep-seated parameters underpinning household and corporate decision-making, which allows for policy evaluation.

This debate was brought into sharp focus recently following the publication of a fascinating paper on the properties of DSGE models, which is less concerned with whether they represent good economics than with whether they represent good models in a statistical sense. The answer, according to the authors, is that they do not. The paper can get quite dense in places, but one of the things it does is to examine how well the model can fit nonsense data. By randomly swapping the series around and feeding them into the DSGE model, “much of the time we get a model which predicts the [nonsense] data better than the model predicts the [actual] data.” They draw the damning conclusion that “even if one disdains forecasting as an end in itself, it is hard to see how this is at all compatible with a model capturing something – anything – essential about the structure of the economy.”
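
The flavour of the exercise can be conveyed with a toy example. This is emphatically not the paper's methodology – which works with estimated DSGE models and their marginal likelihoods – but it shows why label-swapping is a meaningful test of any model that imposes variable-specific restrictions:

```python
import numpy as np

def toy_model_rmse(gap: np.ndarray, infl: np.ndarray) -> float:
    """A toy "structural" model with calibrated, label-specific dynamics (a
    stand-in for a DSGE's cross-equation restrictions):
        gap:  x_t  = 0.9 x_{t-1}
        infl: pi_t = 0.5 pi_{t-1} + 0.05 x_{t-1}
    Returns the RMSE of its one-step-ahead predictions."""
    errs = np.concatenate([gap[1:] - 0.9 * gap[:-1],
                           infl[1:] - (0.5 * infl[:-1] + 0.05 * gap[:-1])])
    return float(np.sqrt(np.mean(errs ** 2)))

# Simulate data that actually obeys the toy model's dynamics
rng = np.random.default_rng(7)
T = 400
gap, infl = np.zeros(T), np.zeros(T)
for t in range(1, T):
    gap[t] = 0.9 * gap[t - 1] + 0.01 * rng.standard_normal()
    infl[t] = 0.5 * infl[t - 1] + 0.05 * gap[t - 1] + 0.01 * rng.standard_normal()

print("correct labels:", round(toy_model_rmse(gap, infl), 4))
print("swapped labels:", round(toy_model_rmse(infl, gap), 4))
# A model capturing genuine structure should fit markedly worse when the series
# are deliberately mislabelled; the paper found DSGEs often fail this test.
```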

Last word

As one who for many years has used models for forecasting purposes that have been sniffily dismissed by the academic community, it is hard to avoid a sense of schadenfreude. Blanchard offers us a way out of this impasse, arguing that theoretical models of the economy have a role to play in “clarifying theoretical issues within a general equilibrium setting ... In short, [they] should facilitate the debate among macro theorists.” By contrast, policy models of the type with which I am most comfortable, “should fit the main characteristics of the data” and be used for forecasting and policy analysis. 

There is some merit in this argument. By all means continue to tinker with DSGE models to see what kinds of insight they can generate but do not let them anywhere near the real world until their forecast performance substantially improves. In the words of statistician George Box, “all models are wrong but some are useful”. And some are DSGE models.


[1] For an excellent introduction to many of the issues in modern macro, check out this free online textbook: ‘Advanced Macroeconomics: An Easy Guide’ by Filipe Campante, Federico Sturzenegger and Andrés Velasco.

Thursday, 24 February 2022

Redrawing the economic map

As Europe awakes to the news that Russia has launched military action against Ukraine, it is hard to avoid the conclusion that the geopolitical tectonic plates have shifted. A war on this scale in continental Europe is not something we have seen since 1945, although we should not forget that the Soviet Union invaded Hungary in 1956 and Czechoslovakia in 1968, so there is a parallel (albeit inexact). Twenty years ago it all seemed very different. Having experienced a form of shock therapy following the collapse of the Soviet Union in 1991, which saw hyperinflation and a huge collapse in output, the Russian economy stabilised in the late-1990s. Accession to the G8 in 1997 gave rise to hopes that Russia would become a reliable international partner whose political and economic interests would be more aligned with the west. The annexation of Crimea in 2014 changed all that.

From a geopolitical perspective, Russia has become an increasingly difficult problem for the west to manage. To quote the British parliamentary Intelligence and Security Committee, “Russia is simultaneously both very strong and very weak.” It is a significant military player with a permanent seat on the UN Security Council. Yet it has a relatively small population compared to the west and a weak economy which is heavily reliant on hydrocarbon revenues. This makes it difficult to respond effectively. Military engagement by the west in Ukraine is out of the question, although matters could escalate if former Soviet republics which are now NATO members (Estonia, Latvia and Lithuania) face a similar threat. That is something we would rather not think about. But it is clear that the west will respond to the incursion into Ukraine with economic sanctions.

Germany has already paused its certification of the Nord Stream 2 gas pipeline. Whilst this does not mean that the pipeline will never be used, it is a significant move and will have major implications. Although Chancellor Scholz wants to wean Germany off imported Russian gas, the view in Moscow is that this will be impossible in the short term and may not even be possible on a 5-10 year horizon. This is indeed plausible, and the 2011 decision to end nuclear energy generation now looks rather ill-judged. However, now that the Greens are part of the government coalition, their resolve to further reduce dependency on fossil fuels should not be underestimated. Either way, this is likely to have significant costs for one or both parties – we cannot predict at this point where the incidence will fall.

How effective will sanctions be?

The German case was merely the most high profile of a range of sanctions introduced in recent days (the British government’s efforts were a particularly weak response) and more will be forthcoming. Sanctions are now regarded as the first response to aggressor nations and in contrast to the old maxim of “shoot first and ask questions later”, the Peterson Institute for International Economics (PIIE) acknowledges that “advance planning for the imposition of sanctions is now the norm.” But what are they designed to achieve? There are a range of possibilities. Sometimes they are simply implemented to satisfy domestic considerations rather than influence the actions of others; they may be designed to send a signal that the actions of others are unacceptable, or they may be intended to implement regime change. Whether or not sanctions are successful depends on which of these is the intention. 

A study by PIIE suggested that sanctions tend to be effective in roughly one-third of cases. But the success rate depends very much on the objectives. “Episodes involving modest and limited goals, such as the release of a political prisoner, succeeded half the time. Cases involving attempts to change regimes (e.g., by destabilizing a particular leader or by encouraging an autocrat to democratize), to impair a foreign adversary’s military potential, or to otherwise change its policies in a major way succeeded in about 30 percent of those cases. Efforts to disrupt relatively minor military adventures succeeded in only a fifth of cases.” The invasion of Ukraine is not a “minor military adventure” so the odds that sanctions will reverse the outcome are limited. In any case, as PIIE notes, “sanctions are of limited utility in achieving foreign policy goals that depend on compelling the target country to take actions it stoutly resists.” US efforts to promote political change in Iran and Cuba are testimony to this. Furthermore, according to PIIE: “It is hard to bully a bully with economic measures” since the evidence suggests that democratic regimes are more susceptible to such pressure than autocracies.

In order to manage expectations, western governments have to make it clear at the outset that they do not expect economic sanctions to facilitate regime change in Russia or to reverse the invasion of Ukraine (although Putin did suggest in his speech that Russia does not intend to occupy Ukraine). What kind of sanctions could the west impose? The first option would be to sanction the free flow of Russian capital by imposing restrictions on those with close links to the government, the rationale being that pressure placed on Putin by power brokers would destabilise his grip on power. This would include restrictions on their personal activity and also on the banks that they use.

There have also been calls to cut Russian access to SWIFT, the global interbank payments system. Whilst this would severely impact Russian banks, European creditors would also struggle to get their money out of Russia. This would be particularly problematic since BIS data indicate that European banks hold the vast majority of foreign banks' exposure to Russia (chart above). Russia also has huge FX reserves, totalling around $630 billion, which means that it has no immediate need for market access to foreign currency. It also has its own financial payments system (SPFS), which handles around 20% of Russian settlements. However, the Atlantic Council think tank reckons that “the Russian equivalent of SWIFT remains mostly aspirational [and] is much ado about nothing,” concluding that its importance is overblown.

One concern is that sanctions could drive Russia further into China’s orbit: the two countries have become closer in recent years as they seek to weaken US hegemony. But China has to balance its relationship with Russia against its need to preserve relationships with the west, and it has nothing to gain from the conflict in Ukraine. However, closer Chinese ties may weaken the impact of any economic sanctions that the west might impose (for example, if China were to become a bigger importer of Russian oil and gas).

Implications for the west

Thirty years ago much of the talk in policy circles was of the peace dividend that would accrue as a result of reduced defence spending. That dividend now appears to have been used up and recent events may force governments to think about raising defence spending. Only seven of the 30 NATO members currently spend more than 2% of GDP on defence – the benchmark which all members are meant to achieve by 2024. Increased defence spending will stretch public finances more than in the period 1945-90 because ageing populations mean that health services are a bigger competitor for tax revenue. It is therefore possible that taxation will have to rise in order to pay for it.

More generally the era of peace, prosperity and openness characterised by increased globalisation is apparently in retreat (see the KOF index). Russia may not be the superpower of old but it is big enough to cause problems if it flexes its muscles. China waits in the wings, perhaps assessing the west’s response to the Ukrainian invasion as it ponders how to deal with Taiwan. For those of us old enough to remember the Cold War, none of this is new. Nor does it necessarily mean huge changes in the way we live our day-to-day lives. But governments will have to make increasingly difficult choices about resource allocation as we revert to a world of “them and us.”

Tuesday, 15 February 2022

The Magic Money Tree

Modern Monetary Theory (MMT) is back in the headlines following a recent piece in the New York Times (here). In truth, the article is more a profile of one of its best known proponents, Professor Stephanie Kelton of Stony Brook University, than an attempt to examine MMT. Mainstream economists have nonetheless queued up to criticise it, probably because the original headline was “Time for a Victory Lap” (it has since been changed to “Is This What Winning Looks Like”, so the subeditor has a lot to answer for). However, the article still contains the phrase “Kelton … is the star architect of a movement that is on something of a victory lap”. This has enraged the mainstream economics community because, far from enjoying a victory lap, MMT remains untried, unproven and untestable.

A lot has happened since I first looked at the subject three years ago: Kelton’s book The Deficit Myth has become a best seller, whilst the pandemic has focused minds on the role of government deficits. This fascinating area is thus worth revisiting. However, the quip that Modern Monetary Theory is not modern, is not about money and is not a theory still holds true. It is not modern because it has its roots in Abba Lerner’s Functional Finance Theory, which first saw the light of day in 1943 and which suggests that government should finance itself to meet explicit economic goals, such as smoothing the business cycle, achieving full employment and boosting growth. It is also more a fiscal theory than a monetary one. At heart it is based on the premise that since the government is the monopoly supplier of money, there is no such thing as a budget constraint, because governments can finance their deficits by creating additional liquidity at zero cost (subject to an inflation constraint). It is most definitely not a theory about how the economy works. Instead it is closer to a doctrine which its followers passionately embrace, whilst regarding non-believers as having not yet seen the light (or worse, as economic heretics).
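
The mechanics at issue reduce to the consolidated government budget identity: a deficit must be financed by issuing bonds or creating base money. The sketch below bolts on a crude quantity-theory constraint – which MMT proponents would dispute, but which captures the mainstream objection – with all numbers purely illustrative:

```python
def finance_deficit(deficit: float, money_share: float,
                    money_stock: float, real_growth: float):
    """Consolidated budget identity: deficit = new bonds + new base money.
    Crude quantity-theory add-on: with stable velocity, money growth in excess
    of real growth shows up as inflation. All inputs are illustrative."""
    new_money = money_share * deficit
    new_bonds = deficit - new_money
    implied_inflation = max(0.0, new_money / money_stock - real_growth)
    return new_bonds, implied_inflation

# Finance a deficit of 100 with 80% money creation; base money 500, 2% growth
bonds, infl = finance_deficit(100.0, 0.8, 500.0, 0.02)
print(f"bonds issued: {bonds:.0f}, implied inflation: {infl:.1%}")  # 20, 14.0%
```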

What particularly riles the mainstream community is that there is no formal model which can be written down and therefore no testable hypothesis. In the words of blogger Noah Smith, “MMT proponents almost always refuse to specify exactly how they think the economy works. They offer a package of policy prescriptions, but these prescriptions can only be learned by consulting the MMT proponents themselves.” This is particularly irksome because it allows MMT proponents to sidestep the criticisms of the doctrine, of which there are many.

Many of these criticisms centre around the role of money, upon which the fiscal analysis is founded. For example, it treats money as being primarily created by the state (defined as the government sector plus the central bank) and has little or nothing to say about the role of banks in the process. It also treats money as a public good which should be used to maximise social welfare rather than its more prosaic use as a medium of exchange. This in turn assumes there is only one form of money in the economy but, as I have pointed out before, this is not the case. Domestic actors may choose to use foreign currency, for example, or opt for digital options. Thus, although governments can create money almost without limit, there is no guarantee that demand will match supply. Increasing supply way beyond demand will only lead to currency debasement. In an excellent paper by the Banque de France[1] (here), the authors do a good job of picking holes in the theoretical underpinnings of MMT, noting that none of its supporters acknowledge “the reason modern literature on money puts forward for what makes legal currency “acceptable” by the public, i.e. monetary policy credibility.”

Whilst MMT does rest on shaky theoretical foundations, it is not the only area in modern macroeconomics to suffer from such problems. The New Keynesian school, which is the predominant model used by central banks, assumes no role for the quantity of money. It also imposes perfect pass-through from the policy rate to all other rates in the economy, thus giving the central bank a powerful lever to affect intertemporal decisions, which is extremely questionable. Nobel Laureate Joseph Stiglitz published a paper in 2017 which argued that “the DSGE models that have come to dominate macroeconomics during the past quarter-century [apply] the wrong microfoundations, which failed to incorporate key aspects of economic behavior. Inadequate modelling of the financial sector meant they were ill-suited for predicting or responding to a financial crisis; and a reliance on representative agent models meant they were ill-suited for analysing either the role of distribution in fluctuations and crises or the consequences of fluctuations on inequality.”

It is thus perhaps a little unfair to single out MMT which has fallen victim to the fetish for quantification in economics. Current academic practice seems to believe that if something cannot be quantified it is not a valid explanation of how the economy works. It is instructive to remember that the ideas of Keynes, which came to dominate the agenda after 1945, were also subject to significant criticism following their publication in the 1930s. Nonetheless there is a lot wrong with MMT and I concur with the conclusion to the BdF paper: “Such a stark contrast with mainstream economics analysis and recommendations would be understandable if MMT economists engaged into a debate with their colleagues to explain and justify their positions, from both a theoretical and empirical point of view. However, they rather prefer to talk between themselves, repeating consistently the same ideas that others formulated in a distant past, disregarding facts and theories that do not fit into their approach, and accusing those who do not share their ideas of being incompetent.”

Yet despite all these reservations MMT has opened up a debate about the role of government both during and in the wake of the pandemic. One of the core ideas of MMT is that governments are not like households because they have an (almost) infinite life and therefore debt can be repaid over periods extending over many generations. There is thus no rush to impose significant fiscal tightening as the economy recovers from the Covid shock. This view is, of course, not unique to MMT: It is a standard element in fiscal dynamics but it is a lesson that governments should heed as the rush to take away support after the pandemic gathers momentum.

If it has opened the eyes of politicians to the uses of fiscal policy after decades in the doldrums, then maybe MMT has served a useful function. But a policy of near unlimited fiscal expansion is for the birds. It calls to mind the other acronym often applied to MMT: The Magic Money Tree.


[1] Drumetz, F. and C. Pfister (2021) ‘The Meaning of MMT’, Banque de France Working Paper 833

Wednesday, 20 October 2021

No expectations

Inflation remains one of the big items on the policy agenda with the IMF’s latest World Economic Outlook devoting a considerable amount of space to the topic, warning that “central banks should remain vigilant about the possible inflationary effects of recent monetary expansions.” In fairness the IMF does use the stock phrase beloved of policy analysts that “long-term inflation expectations have stayed relatively anchored.” However this comes at a time when parts of the macroeconomics profession are beginning to question just how much we really know about the inflation generation process, with the role of expectations coming under particular scrutiny.

As in many areas of economics, thinking about the forces underpinning inflation has been subject to various fads over the years. Between the 1950s and 1970s, attention focused on the labour market and the role of the wage bargaining process. During the 1980s, monetary trends were the flavour of the period, but over the last 25 years the main areas of focus have been the deviation of unemployment from the NAIRU and the determination of inflationary expectations. Given the change in fashions over the years, it is difficult to take seriously the idea that there is a generic theory of inflation: like theories of the exchange rate, different factors drive the process at different times.

For my part, I have long harboured doubts about the way macroeconomics treats inflation (see the posts Do we know what drives inflation? from August 2017 and Monetary policy complications from October 2017 for more detail). It was thus heartening to see that economists at the Federal Reserve share similar reservations. In a highly readable paper published last month, which received considerable media exposure, Jeremy Rudd of the Fed staff posed the question: “Why do we think that inflation expectations matter for inflation? (And should we?)”

As the paper’s abstract noted, “A review of the relevant theoretical and empirical literature suggests that this belief [in expectations] rests on extremely shaky foundations, and a case is made that adhering to it uncritically could easily lead to serious policy errors.” Rudd goes on to describe the competing models of inflation used by macroeconomists over the past 50 years and concludes that the use of expectations to explain inflation dynamics is both unnecessary and unsound: unnecessary because inflation dynamics can be explained more readily by other factors, and unsound because the practice is not based on any good theoretical or empirical evidence. Moreover, the theoretical models are driven by short-term (usually one period ahead) expectations, which “sits uneasily with the observation that in policy circles … much more attention is paid to long-run inflation expectations.”

There is little empirical evidence of a direct effect of expectations on inflation. According to Blinder et al (1998)[1], “what little we know about firms’ price-setting behavior suggests that many tend to respond to cost increases only when they actually show up and are visible to their customers, rather than in a pre-emptive fashion.” Evidence from the Atlanta Fed survey of business inflation expectations over the past decade confirms that expectations were remarkably constant until relatively recently, with unit costs one year ahead generally expected to grow at an average rate of 2% (chart). However, it is notable that during 2021 there has been a sharp pickup as the economy suffered bottlenecks in the wake of the pandemic.

The standard central bank view of inflation expectations was highlighted in a 2019 speech by BoE MPC member Silvana Tenreyro. It is a perfectly fine piece of conventional economic analysis, as befits an orthodox central bank economist. She noted that household inflation expectations are a key input into the BoE’s thinking, arguing that for any given interest rate, higher inflation expectations increase households’ incentive to spend today rather than save. But once you start digging below the surface, the argument rests on some weak foundations. For example, the evidence from both the US and UK suggests that households consistently expect CPI inflation to average close to 3% at horizons of between one and five years ahead. In other words, despite the best efforts of central banks, households continue to expect inflation to run above the target rate. Tenreyro was also forced to concede that “households do not always adjust their expectations even when prices start rising more quickly or slowly than they had expected”, which really ought to raise some questions about their usefulness.

A more serious criticism of inflation expectations came from Rudd, who pointed out that “the presence of expected inflation in these models provides essentially the only justification for the widespread view that expectations actually do influence inflation … And this apotheosis has occurred with minimal direct evidence, next-to-no examination of alternatives that might do a similar job fitting the available facts, and zero introspection as to whether it makes sense.” Instead, Rudd offers the explanation that the absence of a wage-price spiral is one of the key defining features of recent inflation dynamics. He goes on to suggest that “in situations where inflation is relatively low on average, it also seems likely that there will be less of a concern on workers’ part about changes in the cost of living … But this is a story about outcomes, not expectations.” In other words, when inflation is below a certain threshold level, workers stop pushing for bigger wage hikes, which has contributed to keeping inflation low – unlike in the 1970s.
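
Rudd's story is thus a threshold mechanism rather than an expectations one, and it can be written down very simply. The parameter values below are illustrative assumptions:

```python
def wage_demand(inflation: float, threshold: float = 0.03,
                catchup: float = 0.8, norm: float = 0.02) -> float:
    """Stylised version of Rudd's outcomes-based story: below the threshold,
    inflation stays off workers' radar and wage claims stick at a norm; above
    it, workers push for catch-up, risking a wage-price spiral."""
    if inflation <= threshold:
        return norm
    return norm + catchup * (inflation - threshold)

for pi in (0.02, 0.03, 0.06):
    print(f"inflation {pi:.0%} -> wage claims {wage_demand(pi):.1%}")
# 2% -> 2.0%, 3% -> 2.0%, 6% -> 4.4%: wage claims respond only above threshold
```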

This has a number of important policy implications:

  • First, it will be important to keep an eye on whether wage settlements are responding to higher price inflation. 
  • Second, because central bank economists, who are influenced by latest academic thinking, generally tell policymakers that “expected inflation is the ultimate determinant of inflation’s long-run trend, [they] implicitly provide too much assurance that this claim is settled fact. Advice along these lines also naturally biases policymakers toward being overly concerned with expectations management, or toward concluding that survey- or market-based measures of expected inflation provide useful and reliable policy.” 
  • Third, precisely because inflation dynamics are influenced more by outcomes than expectations, “it is far more useful to ensure that inflation remains off of people’s radar screens than it would be to attempt to “reanchor” expected inflation.” 
  • Finally, “using inflation expectations as a policy instrument or intermediate target has the result of adding a new unobservable to the mix. And … policies that rely too heavily on unobservables can often end in tears.”

Even if you do not accept Rudd’s premise (and many mainstream economists do not), this is an important contribution to the debate. One of the criticisms being bandied around as the economy rebounds from the 2020 collapse is that there is insufficient diversity of thinking around some of the key underpinnings of mainstream macro. If nothing else, Rudd forces us to think more critically about how we think about inflation, and our understanding will be all the stronger for it.


[1] Blinder, A., E. Canetti, D. Lebow, and J. Rudd (1998). ‘Asking About Prices: A New Approach to Understanding Price Stickiness’. New York: Russell Sage Foundation.