Thursday, January 27, 2022
Econbrowser - James Hamilton

# On Confidence Intervals and Logs

## Confidence Intervals

Reader Steven Kopits is critical of them:

> If the historical data is relatively stable compared to future events, then confidence intervals or the like can be useful. So, for example, if we take traffic on the George Washington Bridge and adjust for weather, time of day, day of year, and weekend/holiday, then I think a confidence interval is a useable piece of information. You can make decisions on that basis. …
>
> However, if the data is unstable or not well understood, if the methodology is new or not well understood, and if the exceptions to the general rules are not well understood, then confidence intervals can provide a false sense of confidence. For example, if I fail to adjust GW Bridge numbers for time of day, then confidence intervals will be effectively useless. So if I took average transit times on the bridge, but crossed at 3 am, then I would likely be too pessimistic about transit times.
>
> I find confidence intervals are often used for political purposes, to project certainty where methodologically, that overstates the case. Sometimes materially. Hurricane Maria in PR serves as a case study.

Readers will recall that Mr. Kopits provided a range of estimates in the case of Hurricane Maria, which subsequently turned out to be too low. On 5/31/2018 he wrote:

> I would expect the excess deaths at a year horizon (through, say, Oct. 1, 2018) to total perhaps 200-400.

On 6/4 he subsequently raised his estimate to 1400 as of December 2017 (see this post for discussion). You can see the various estimates as tabulated by Sandberg et al. (2019) here.

To summarize graphically:

Figure 1: Cumulative excess deaths from September 2017, for simple time dummies OLS model (blue), OLS model adjusting for population (green), and Quantile Regression model adjusting for population (red), Milken Institute point estimate (black square) and 95% confidence interval (gray +), Santos-Lozada and Howard letter (chartreuse triangle), Cruz-Cano and Mead (pink squares), Kopits (teal triangle). Not pictured: Kopits estimate of 300-400 for October 2018. Source: author’s calculations, Milken Institute (2018), Santos-Lozada and Howard (2018), Cruz-Cano and Mead (2019), and Kopits (2018).

For a long examination of Mr. Kopits’ failure to understand what a confidence interval is, see this post.

So, my view: confidence intervals are useful, even when they are wide (in which case they tell you to beware the siren call of certitude). They certainly beat the “pull it out of one’s a**” approach. Since Mr. Kopits is unlikely to believe me, I’ll note David Romer’s praise of confidence intervals.
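To make the point concrete, here is a minimal sketch (not from the post; a generic normal-approximation interval on simulated data) of how a confidence interval reports both an estimate and how much uncertainty surrounds it:

```python
import numpy as np

def mean_ci(x, z=1.96):
    """Normal-approximation 95% confidence interval for a sample mean.

    Returns (point estimate, lower bound, upper bound). The interval's
    width shrinks roughly with sqrt(n): more data, tighter interval.
    A wide interval is itself information -- it warns against certitude.
    """
    x = np.asarray(x, dtype=float)
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(len(x))   # standard error of the mean
    return m, m - z * se, m + z * se

rng = np.random.default_rng(0)
sample = rng.normal(loc=100.0, scale=15.0, size=50)
m, lo, hi = mean_ci(sample)
```

The hypothetical numbers (mean 100, standard deviation 15, n = 50) are illustrative only; the mechanics are the same for any estimator with an approximately normal sampling distribution.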

Final point on confidence intervals: on the issue of changing conditions (I’m not sure what “unstable data” is), there are ways to account for time variation in the standard errors. The most simple-minded approach (one I teach my undergraduates!) is to run a rolling regression. Consider the regression:

Δp$Import = α + βΔs$ + u

One can estimate this with a rolling window, of 24 months in this case. I use the BLS measure of import prices of commodities for the left-hand side variable, and the (nominal) broad trade-weighted dollar exchange rate for the right-hand side (both logged, in first differences). For the full sample:

Δp$Import = 0.002 + 0.44Δs$ + u

Adj-R² = 0.22, SER = 0.011, Nobs = 314. Bold face denotes significance at the 5% marginal significance level, using HAC robust standard errors (Newey-West).

The β coefficient estimate and ±1.96 standard error bands are plotted below:

Figure 2: Exchange rate pass-through rolling regression coefficient, 24 month window (black) and ±1.96 standard errors (gray lines). Source: Author’s calculations.
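The rolling-window estimation described above can be sketched as follows. This is a generic illustration on simulated data with plain OLS standard errors per window (not the HAC/Newey-West correction used in the post); the series names and parameter values are invented for the example:

```python
import numpy as np

def rolling_slope(y, x, window=24):
    """OLS slope and conventional standard error over a rolling window.

    Returns arrays aligned to the end of each window: the slope estimate
    beta_t and its standard error, so beta_t +/- 1.96 * se_t traces out a
    time-varying 95% band. Entries before the first full window are NaN.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    n = len(y)
    betas, ses = np.full(n, np.nan), np.full(n, np.nan)
    for t in range(window - 1, n):
        yw = y[t - window + 1 : t + 1]
        xw = x[t - window + 1 : t + 1]
        xd = xw - xw.mean()
        beta = (xd @ yw) / (xd @ xd)           # OLS slope on the window
        alpha = yw.mean() - beta * xw.mean()
        resid = yw - alpha - beta * xw
        s2 = (resid @ resid) / (window - 2)    # residual variance
        betas[t] = beta
        ses[t] = np.sqrt(s2 / (xd @ xd))
    return betas, ses

# Simulated pass-through example: true slope 0.4, 314 monthly observations
rng = np.random.default_rng(1)
ds = rng.normal(0.0, 0.01, 314)                      # FX log-changes
dp = 0.002 + 0.4 * ds + rng.normal(0.0, 0.011, 314)  # import-price log-changes
beta_hat, se_hat = rolling_slope(dp, ds, window=24)
```

With only 24 observations per window the individual slope estimates are noisy and the bands correspondingly wide, which is exactly the information a time-varying confidence band conveys.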

## Logs

Regarding my time series plot of Long Beach Port traffic, Steven Kopits writes:

> Why would you even think to present that in log terms? Totally deceptive. Do the graphs straight up.

I actually think it’s deceptive to plot things in levels when one is interested in growth rates. Consider the PCE deflator. Plotted in levels, it looks as if the price level has been growing at a constant rate from about 1985 to 2020. Plotted in log levels, you can see that the pace has slowed since the Great Recession (in other words, a straight line on a base e log scale denotes a constant growth rate).

Figure 3: Personal consumption expenditure deflator (blue, left scale), personal consumption expenditure deflator (brown, right log scale). NBER defined recession dates peak-to-trough shaded gray. Source: BEA, NBER.
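The claim that a straight line in logs corresponds to a constant growth rate can be verified directly. A small illustration (generic simulated price series, not the PCE data): a series growing at a constant rate has constant log first-differences, while a series whose growth slows shows a visible kink in logs even when it looks smooth in levels.

```python
import numpy as np

# A price level growing at a constant 2% per period
t = np.arange(100)
p = 100.0 * 1.02 ** t

# Log first-differences recover the (continuous) growth rate:
# every entry equals log(1.02), i.e. the series is linear in logs.
growth = np.diff(np.log(p))

# A series whose growth slows from 2% to 1% halfway through:
# its log plot kinks at the break, while the level plot can
# deceptively suggest steady growth throughout.
p2 = np.concatenate([
    100.0 * 1.02 ** np.arange(50),
    100.0 * 1.02 ** 49 * 1.01 ** np.arange(1, 51),
])
growth2 = np.diff(np.log(p2))
```

For small rates, log(1 + g) ≈ g, which is why log differences are the standard approximation to percentage growth.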

For more on anti-log discourse, see this post.

Question for everyone: Do they still teach logs and exponentiation in high school?

Menzie Chinn is Professor of Public Affairs and Economics at the University of Wisconsin–Madison.