Ever since I read the hysterically incorrect interpretation of a confidence interval from a person who purports to be a policy analyst, I’ve been looking for a succinct explanation from a statistician, as a handy reference. Here it is (h/t David Giles via Mark Thoma):
The specific 95 % confidence interval presented by a study has a 95 % chance of containing the true effect size. No! A reported confidence interval is a range between two numbers. The frequency with which an observed interval (e.g., 0.72–2.88) contains the true effect is either 100 % if the true effect is within the interval or 0 % if not; the 95 % refers only to how often 95 % confidence intervals computed from very many studies would contain the true size if all the assumptions used to compute the intervals were correct. It is possible to compute an interval that can be interpreted as having 95 % probability of containing the true value; nonetheless, such computations require not only the assumptions used to compute the confidence interval, but also further assumptions about the size of effects in the model. These further assumptions are summarized in what is called a prior distribution, and the resulting intervals are usually called Bayesian posterior (or credible) intervals to distinguish them from confidence intervals.
Source: Greenland et al. (2016).
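For readers who prefer to see the point in code, here is a minimal simulation sketch (my own illustration, not from Greenland et al.), assuming a normal model with known variance: across many repeated studies, 95 % intervals cover the fixed true effect about 95 % of the time, but any single reported interval either contains it or it doesn’t. Attaching a 95 % probability to one interval requires the extra prior assumption shown at the end. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 1.5      # illustrative "true" effect size
sigma = 2.0            # known standard deviation of the outcome
n = 50                 # observations per study
n_studies = 100_000    # number of repeated studies
z = 1.96               # normal critical value for a 95% interval

# Simulate many studies: each row is one study's sample.
samples = rng.normal(true_effect, sigma, size=(n_studies, n))
estimates = samples.mean(axis=1)
se = sigma / np.sqrt(n)

lower = estimates - z * se
upper = estimates + z * se

# Frequentist statement: across repeated studies, roughly 95% of the
# intervals cover the fixed true effect ...
coverage = np.mean((lower <= true_effect) & (true_effect <= upper))
print(f"Share of intervals containing the true effect: {coverage:.3f}")

# ... but any single reported interval either contains it or it doesn't.
print(f"First study's interval: ({lower[0]:.2f}, {upper[0]:.2f}); "
      f"contains true effect: {lower[0] <= true_effect <= upper[0]}")

# Bayesian contrast for a single study: with a normal prior on the effect
# (the extra assumption), the posterior is normal and a 95% credible
# interval can be read as "95% probability the effect lies in this range."
prior_mean, prior_sd = 0.0, 10.0   # illustrative weak prior
post_var = 1 / (1 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + n * estimates[0] / sigma**2)
cred_lo = post_mean - z * np.sqrt(post_var)
cred_hi = post_mean + z * np.sqrt(post_var)
print(f"95% credible interval for study 1: ({cred_lo:.2f}, {cred_hi:.2f})")
```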