I've sworn off macro-bashing. I said what I had to say. And I'm seeing lots of young macro people doing good stuff. And the task of macro-method-criticizing has been taken over by people who are better at it than I am, and who have much better credentials. My macro-bashing days are done.
But sometimes I just have to offer macro folks some marketing advice.
The new defense of DSGE by Christiano, Eichenbaum, and Trabandt is pretty cringe-inducing. Check this out:
People who don’t like dynamic stochastic general equilibrium (DSGE) models are dilettantes. By this we mean they aren’t serious about policy analysis. Why do we say this? Macroeconomic policy questions involve trade-offs between competing forces in the economy. The problem is how to assess the strength of those forces for the particular policy question at hand. One strategy is to perform experiments on actual economies. This strategy is not available to social scientists. As Lucas (1980) pointed out roughly forty years ago, the only place that we can do experiments is in our models. No amount of a priori theorizing or regressions on micro data can be a substitute for those experiments. Dilettantes who only point to the existence of competing forces at work – and informally judge their relative importance via implicit thought experiments – can never give serious policy advice.

That reads like a line from a cackling cartoon villain. "Buahahaha, you pitiful fools" kind of stuff. It's so silly that I almost suspect Christiano et al. of staging a false-flag operation to get more people to hate DSGE modelers.
First, calling DSGE critics "dilettantes" was a bad move. By far the best recent critique of DSGE (in my opinion) was written by Anton Korinek of Johns Hopkins. Korinek is a DSGE macroeconomist. He makes DSGE models for a living. But according to Christiano et al., the fact that he thinks his own field has problems makes him a "dilettante."
OK, but let's be generous and suppose Christiano et al. didn't know about Korinek (or Ricardo Caballero, or Paul Romer, or Paul Pfleiderer, etc.). Let's suppose they were only talking about Joseph Stiglitz, who really is something of a dilettante these days. Or about bloggers like Yours Truly (who are actual dilettantes). Or about the INET folks. Even if so, this sort of dismissive snorting is still a bad look.
Why? Because declaring that outsiders are never qualified to criticize your field makes you look insular and arrogant. Every economist knows about regulatory capture. It's not much of a leap to think that researchers can be captured too -- that if the only people who are allowed to criticize X are people who make a living doing X, then all the potential critics will have a vested interest in preserving the status quo.
In other words, Christiano et al.'s essay looks like a demand for outsiders to shut up and keep mailing the checks.
Second, Christiano et al. give ammo to the "econ isn't a science" crowd by using the word "experiments" to refer to model simulations. Brad DeLong already wrote about this unfortunate terminology. Everyone knows that model simulations aren't experiments, so obstinately insisting on misusing the word just makes econ look like a pseudoscience to outside observers.
Third, Christiano et al. are just incorrect. Their defense of DSGE is, basically, that it's the only game in town - the only way to make quantitative predictions about the effects of policy changes.
That's wrong. There are at least two other approaches that are in common use - sVARs and SEMs. sVARs are often used for policy analysis in academic papers. SEMs are used by central banks to inform policy decisions. Both sVARs and SEMs claim to be structural. Lots of people laugh at those claims. But then again, lots of people laugh at DSGE too.
In fact, you don't always even need a structural model to make quantitative predictions about policy; often, you can do it in reduced form. When policy changes can be treated like natural experiments, their effects - including general equilibrium effects! - can be measured directly instead of inferred from a structural model.
As Justin Wolfers pointed out on Twitter, at least one of the questions that Christiano et al. claim is only answerable by DSGE simulations can actually be answered in reduced form:
Does an increase in unemployment benefits increase unemployment? On the one hand, conventional wisdom argues that higher benefits lead to higher wages and more unemployment. On the other hand, if the nominal interest rate is relatively insensitive to economic conditions, then the rise in wages raises inflation. The resulting decline in the real interest rate leads to higher aggregate demand, a rise in economic activity and lower unemployment. Which of these effects is stronger?

A 2015 paper by John Coglianese addresses this question without using a DSGE model:
I analyze a natural experiment created by a federal UI extension enacted in the United States during the Great Recession and measure the effect on state-level employment. I exploit a feature of this UI extension whereby random sampling error in a national survey altered the duration of unemployment insurance in several states, resulting in random variation in the number of weeks of unemployment insurance available at the state level.

Christiano et al. totally ignore the existence of natural experiments. They claim that in the absence of laboratory experiments, model simulations are the best we've got. The rapidly rising popularity of the natural experiment approach in economics doesn't even register on their radar screens. That's not a good look.
Finally, Christiano et al. strike a tone of dismissive arrogance, at a time when the world (including the rest of the econ profession) is rightly calling for greater humility from macroeconomists. The most prominent, common DSGE models - the type created by Christiano and Eichenbaum themselves - failed pretty spectacularly in 2008-12. That's not a record to be arrogant about - it's something to apologize for. Now the profession has patched those models up, adding finance, a zero lower bound, nonlinearity, etc. It remains to be seen how well the new crop of models will do out of sample. Hopefully they'll do better.
But the burden of proof is on the DSGE-makers, not on the critics. Christiano et al. should look around and realize that people outside their small circle of the world aren't buying it. Central banks still use SEMs, human judgment, and lots of other tools. Finance industry people don't use DSGEs at all. Even in academia, use of DSGE models is probably trending downward: