Saturday, April 21 2018

Large Scale Econometric Models: Do they Have a Future?


Here is an intriguing question: How is the Large Hadron Collider like the National Institute Global Economic Model? Read on!

It was a great pleasure to organize a session on econometric models for the Royal Economic Society Conference at the University of Sussex. In my new role as Research Director at the National Institute of Economic and Social Research (NIESR), I have inherited responsibility for the research and development of the National Institute Global Economic Model (NiGEM), the preeminent model of the world economy. As you might expect, given my role at NIESR, my answer to the question posed in this session's title is a resounding yes!


For the session at Sussex, in addition to my own presentation, I assembled three outstanding speakers: Tony Garratt from Warwick Business School (WBS), Marco Del Negro of the Federal Reserve Bank of New York, and Garry Young, Director of Macroeconomic Modelling and Forecasting at NIESR.

Tony kicked off the session with a description of the work he's been engaged in at WBS along with his co-authors Ana Galvao and James Mitchell. The University of Warwick has signed a memorandum of understanding (MOU) with the National Institute of Economic and Social Research that gives Warwick graduate students access to the expertise of NIESR's applied economists, while NIESR gains from the academic expertise of Warwick economists. As part of that MOU, Ana and Tony have agreed to publish their forecasts each quarter in the National Institute Review as a benchmark against which to measure the performance of the NiGEM team. Tony gave us a fascinating account of what the WBS team does!

Their approach is reduced form and eclectic. WBS maintains a stable of more than twenty-five models whose forecasts are averaged with weights, updated in real time, based on past forecast performance. Tony showed us how his forecasts had performed in the past relative to three alternatives: the Bank of England, the Survey of Professional Forecasters and the Blue Chip forecasts. He described different ways of evaluating forecasts for output growth and inflation, comparing both point forecasts and density forecasts. Perhaps the most interesting result, for me, was that the judgmental forecasts of the Bank of England often outperform econometric models at short horizons.
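The idea of weighting a stable of models by their past forecast performance can be sketched in a few lines. The weighting scheme below (inverse mean squared error) and the toy numbers are illustrative assumptions, not the WBS team's actual method:

```python
import numpy as np

def combine_forecasts(past_forecasts, past_outcomes, new_forecasts):
    """Weight each model by the inverse of its past mean squared error.

    past_forecasts: array of shape (n_periods, n_models)
    past_outcomes:  array of shape (n_periods,)
    new_forecasts:  array of shape (n_models,) -- current-period forecasts
    """
    errors = past_forecasts - past_outcomes[:, None]
    mse = (errors ** 2).mean(axis=0)
    inv = 1.0 / mse
    weights = inv / inv.sum()          # normalise so weights sum to one
    return weights @ new_forecasts, weights

# Three toy models forecasting quarterly output growth (per cent)
past_f = np.array([[0.4, 0.6, 0.2],
                   [0.5, 0.7, 0.1],
                   [0.3, 0.5, 0.4]])
actual = np.array([0.5, 0.6, 0.4])
combined, w = combine_forecasts(past_f, actual, np.array([0.4, 0.5, 0.3]))
```

As forecast errors accumulate each quarter, the weights shift towards the models that have been tracking the data most closely, which is the sense in which the averaging is "updated in real time".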

Tony’s talk was followed by Marco Del Negro from the New York Fed, who described the behaviour of a medium-scale Dynamic Stochastic General Equilibrium (DSGE) model they’ve been running at the NY Fed since 2008. DSGE models have received quite a bit of bad press lately as a result of the failure of almost all of the experts to predict the 2008 financial crisis. Marco gave a spirited defence of DSGE models by showing us the forecast performance of the NY Fed’s DSGE model from 2008 to the present. The model is written in a relatively new computer language, Julia. The code is open source, blindingly fast and widely used in research publications in leading journals. For the MATLAB users out there: perhaps it’s time to switch?

In the third presentation of the day we were treated to an entertaining interlude when the projection facility malfunctioned and Garry Young ad-libbed for ten minutes with a cricketing anecdote. When he resumed, Garry gave us an account of the use of NiGEM to forecast the effects of Brexit. NiGEM has more than 5,000 equations, covers 60 countries and is widely used by central banks and national treasuries around the world for scenario analysis. NiGEM has a lot more in common with the NY Fed’s DSGE model than most people realize.

In the final presentation of the day, I tied the three presentations together by explaining the history of econometric modelling, beginning with Klein Model 1 in the 1940s and ending with the NY Fed’s DSGE model and with NIESR’s NiGEM. For me, the main story is continuity. With the publication of Robert Lucas’ celebrated critique of econometric modelling in 1976, large-scale models disappeared from the halls of academia. But they never disappeared from central banks, treasuries and research institutes where, as Garry reminded us, they have been used as story-telling devices for more than fifty years.

The version of NiGEM we work with today has come a long way from the backward-looking equations of Klein Model 1. It has been lovingly tended and developed by distinguished teams of researchers who have passed through the National Institute over the years. Past NIESR researchers include some of the leading applied economists and applied econometricians in the UK, and the model they developed incorporates state-of-the-art features, including the ability to add forward-looking elements and rational expectations in solution scenarios.

Large-scale econometric models are here to stay. Policy makers use models like NiGEM to analyse policy alternatives, and that is unlikely to change soon. In my presentation I argued for a closer dialogue between economic theorists and applied economists, similar to the dialogue that currently exists between theoretical physicists and applied physicists. I argued that NiGEM, located at NIESR, is to economics what the Large Hadron Collider (LHC), located at CERN, is to physics. Just as physicists use the LHC to test new theories of subatomic particles, so economists should use NiGEM to test new theories of macroeconomics. I plan to put that idea into practice.

Each month, research teams at NIESR will run a horse race of a stripped-down base version of NiGEM against an alternative version that incorporates one or more different equations. In a separate presentation at the Royal Economic Society Conference this year, I discussed work I am engaged in with a research team at UCLA, where we have developed a new theory of belief formation. At NIESR, we will build that theory into a parallel version of NiGEM and, each quarter, we will report the forecast results of the two models. As new developments are found to provide better forecasts than the status quo, they will be built into the core structure of NiGEM.
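The horse race described above amounts to comparing the out-of-sample forecast errors of two model versions. A minimal sketch, using root mean squared error as the scoring rule and invented numbers purely for illustration (this is not NIESR's actual evaluation protocol):

```python
import math

def rmse(forecasts, outcomes):
    """Root mean squared forecast error over a sequence of periods."""
    return math.sqrt(sum((f - y) ** 2 for f, y in zip(forecasts, outcomes))
                     / len(outcomes))

# Toy quarterly output-growth forecasts (per cent) from two model versions
baseline_forecasts = [2.1, 1.8, 2.4, 1.9]   # stripped-down base model
variant_forecasts  = [2.0, 1.9, 2.2, 2.0]   # version with alternative equations
outturns           = [2.0, 2.0, 2.1, 2.1]   # realised data

baseline_rmse = rmse(baseline_forecasts, outturns)
variant_rmse  = rmse(variant_forecasts, outturns)
winner = "variant" if variant_rmse < baseline_rmse else "baseline"
```

If the variant's errors are systematically smaller across enough quarters, its equations graduate into the core model; density forecasts could be scored the same way with a criterion such as the log predictive score.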

According to Forbes, the operating budget of the Large Hadron Collider is approximately one billion US dollars a year. NiGEM is entirely funded from subscriptions and the operating budget is well south of half a million US dollars. Funding agencies take note: we could make some pretty cool improvements for a billion a year.

Roger Farmer
ROGER E. A. FARMER is a Distinguished Professor of Economics at UCLA and served as Department Chair from July 2008 through December 2012. He was the Senior Houblon-Norman Fellow at the Bank of England, January-December 2013.
