I must admit to a certain nostalgia when reading through the
paper because I started my career working in the field of macro modelling and
forecasting, and some of the people who broke new ground in the 1960s were
still around when I was starting out in the 1980s. Moreover, the kinds of models
we used were direct descendants of the Fed-MIT-Penn model. Although they have
fallen out of favour in academic circles, structural models of this type are in
my view still the best way of assessing whether the way we think the economy
should operate is congruent with the data. They provide a richness of detail that is often lacking in the models used for policy forecasting today. In the words of Backhouse and Cherrier, such models were the “big science” projects of their day.
Robert Lucas and Thomas Sargent, both of whom went on to win
the Nobel Prize for economics, began in the 1970s to chip away at the
intellectual reputation of structural models based on Keynesian national income
accounting identities for their “failure
to derive behavioral relationships from any consistently posed dynamic
optimization problems.” Such models, it was argued, contained no meaningful
forward-looking expectations formation processes (true), which accounted for
their dismal failure to forecast the economic events of the 1970s and 1980s. In
short, structural macro models were a messy compromise between theory and data
and the theoretical underpinnings of such models were insufficiently rigorous
to be considered useful representations of how the economy worked.
Whilst there is some truth in this criticism, Backhouse and
Cherrier remind us that prior to the 1970s “there
was no linear relationship running from economic theory to empirical models of
specific economies: theory and application developed together.” Keynesian
economics was the dominant paradigm, and such theory as there was appeared to
be an attempt to build yet more rigour around Keynes’ work of the 1930s rather
than take us in any new direction. Moreover, given the complexity of the
economy and the fairly rudimentary data available at the time, the models could
only ever be simplified versions of reality.
Another of Lucas’s big criticisms of structural models was the
application of judgement to override the model’s output via the use of constant
adjustments (or add factors). Whilst I accept that overriding the model output offends the purists, their objection presupposes that economic models will outperform human judgement. But no such model has yet been constructed.
Moreover, the use of add factors reflects a certain way of thinking about
modelling the data. If we think of a model as representing a simplified version
of reality, it will never capture all the variability inherent in the data (I will concede the point when we can estimate equations whose R-bar squared is uniformly close to unity). Therefore, the best we can hope for is that the error
averages zero over history – it will never be zero at all times.
Imagine that we are
in a situation where the last historical period in our dataset shows a residual
for a particular equation that is a long way from zero. This raises the question
of whether the projected residual in the first period of our forecast should be
zero. There is, of course, no correct answer to the question. It all boils down
to the methodology employed by the forecaster – their judgement – and the trick
to using add factors is to project them out into the future so that they
minimise the distortions to the model-generated forecast.
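To make the mechanics concrete, here is a minimal sketch, in Python, of how a forecaster might carry a non-zero end-of-history residual forward as an add factor that decays back towards zero over the forecast horizon. The equation, numbers and decay rule are purely illustrative assumptions of mine, not taken from the FMP model or any actual forecasting system.

```python
import numpy as np

# Illustrative sketch of an "add factor" (constant adjustment).
# The baseline path, residual value and decay rule are hypothetical;
# they are not drawn from any real model.

def project_add_factors(last_residual, horizon, decay=0.5):
    """Carry the final historical residual into the forecast, letting it
    decay geometrically towards zero rather than dropping it to zero at
    once or holding it fixed forever."""
    return np.array([last_residual * decay ** t for t in range(1, horizon + 1)])

# Suppose the last in-sample residual on, say, a consumption equation is 1.2
# (in whatever units the equation is estimated in), a long way from the zero
# it averages over history.
add_factors = project_add_factors(last_residual=1.2, horizon=8)

model_forecast = np.full(8, 100.0)   # hypothetical model-generated baseline path
adjusted_forecast = model_forecast + add_factors

# The adjustment is largest in the first forecast period and fades out,
# one common way of keeping the distortion to the model forecast small.
print(np.round(adjusted_forecast, 2))
```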
But to quote Backhouse and Cherrier, “the practice Lucas condemned so harshly, became a major reason why
businessmen and other clients would pay to access the forecasts provided by the
FMP and other macroeconometric models … the hundreds of fudge factors added to
large-scale models were precisely what clients were paying for when buying forecasts from these companies.” And just to rub it in, the economist Ray Fair “later noted that analyses of the Wharton and Office of Business
Economics (OBE) models showed that ex-ante forecasts from model builders (with
fudge or add factors) were more accurate than the ex-post forecasts of the models
(with actual data).”
Looking back, many of the criticisms made by Lucas et al. seem
unfair. Nonetheless, they had a huge impact on the way in which academic economists
thought about the structure of the economy and how they went about modelling
it. Many academic economists today complain about the tyranny of microfoundations, under which it is virtually impossible to get a paper published in a leading journal without grounding models of the economy in them. In addition, the rational
expectations hypothesis has come to dominate in the field of macro modelling, despite
the fact there is little evidence suggesting this is how expectations are in
fact formed.
As macro modelling has developed over the years, it has raised more questions than it has answered. One of the more persistent criticisms is that, like the models they superseded, modern DSGE models have struggled to explain bubbles and crashes. In addition, their treatment of
inflation leaves a lot to be desired (the degree of price stickiness assumed in
new Keynesian models is not evident in the real world). Moreover, many of the
approaches to modelling adopted in recent years do not allow for a sufficiently
flexible trade-off between data consistency and theoretical adequacy. Whilst
recognising that there are considerable limitations associated with structural
models using the approach pioneered by the FMP, I continue to endorse the view of Ray Fair, who wrote in 1994 that the use of structural models represents
"the best way of trying to learn how
the macroeconomy works."