I was quite interested to read the scientific article “Association Between Religious Service Attendance and Lower Suicide Rates Among US Women,” by Tyler J. VanderWeele, Shanshan Li, Alexander C. Tsai, and Ichiro Kawachi. I wondered by what magic they hoped to get the causal effect of religious attendance on suicide from the non-experimental data in the Nurses' Health Study. (I wrote about dietary evidence in the Nurses' Health Study and the statistical issues in interpreting that evidence in “Hints for Healthy Eating from the Nurses' Health Study.”)
It would have been interesting to see the regression coefficient on a change in religious attendance. Unfortunately, it seems they didn’t look at that, but instead “controlled” for past religious attendance. “Controlling” for a variable by including it in a regression doesn’t fully control for whatever that variable is intended to measure or proxy for when the variable is measured with error relative to the underlying construct; it only partially controls. One can only verify that a variable has truly been controlled for by explicitly thinking through measurement-error issues, and such control is seldom achieved without doing so. (The advantage of using the first difference of religious attendance as a right-hand-side variable is that the first difference should equal the true change in whatever religious attendance is a proxy for, plus error. That error should bias the coefficient toward zero, but is less likely to change the sign or the statistical significance of the coefficient.)
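The attenuation-bias claim in the parenthetical above is easy to illustrate with a toy simulation (all numbers hypothetical): when the right-hand-side variable is a noisy proxy for the true regressor, the OLS coefficient shrinks toward zero by the signal-to-total-variance ratio, but its sign survives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
beta = 1.0  # hypothetical true causal coefficient

# True (unobserved) change in religiosity and the outcome it drives.
true_x = rng.normal(size=n)
y = beta * true_x + rng.normal(size=n)

# Observed attendance change: the true change plus measurement error.
observed_x = true_x + rng.normal(size=n)

def ols_slope(x, y):
    """Slope from a simple bivariate OLS regression of y on x."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (x @ x)

b_true = ols_slope(true_x, y)       # close to 1.0
b_noisy = ols_slope(observed_x, y)  # close to 0.5: biased toward zero,
                                    # but still positive and far from zero
```

With equal variances of signal and noise here, the attenuation factor is var(true_x) / (var(true_x) + var(error)) = 1/2, which is why the noisy slope lands near 0.5.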
But the biggest issue with the paper lies in a different direction. They recognize the issue and try to parry it in these passages:
For an unmeasured confounder to explain the HR estimate of 0.16 (95% CI, 0.06-0.46), the unmeasured confounder would have to both increase the likelihood of religious service attendance and decrease the likelihood of suicide by 12-fold above and beyond the measured confounders; weaker confounding would not suffice. To bring the estimate’s upper confidence limit of 0.46 above 1.0, the unmeasured confounder would still have to both increase the likelihood of religious service attendance and decrease the likelihood of suicide by 3.7-fold above and beyond the measured confounders.
Our study made use of observational data. Although we adjusted for major confounders regarding the association between religious service attendance and suicide, the results may still be subject to unmeasured confounding by personality, impulsivity, feelings of hopelessness, or other cognitive factors. However, in sensitivity analysis, for an unmeasured confounder to explain the effect of religious service attendance on suicide, it would have to both increase the likelihood of religious service attendance and decrease the likelihood of suicide by greater than 10-fold above and beyond the measured covariates. Such substantial confounding by unmeasured factors seems unlikely, given adjustment for an extensive set of covariates and the known risk factor associations for suicide.
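The 12-fold and 3.7-fold figures in the quoted passages look consistent with the E-value formula of VanderWeele and Ding, E = RR + sqrt(RR × (RR − 1)), applied to the inverse of a protective ratio. A quick check (assuming, as seems to be the authors' approach, that the hazard ratio can be treated as a risk ratio for a rare outcome like suicide):

```python
import math

def e_value(rr):
    """E-value: the minimum strength of association (on the risk-ratio
    scale) an unmeasured confounder would need with both treatment and
    outcome to fully explain away an observed risk ratio rr."""
    if rr < 1:
        rr = 1 / rr  # invert protective ratios
    return rr + math.sqrt(rr * (rr - 1))

print(e_value(0.16))  # ~12.0: the "12-fold" figure for the HR of 0.16
print(e_value(0.46))  # ~3.77: roughly the "3.7-fold" figure for the
                      # upper confidence limit of 0.46
```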
Unlike the authors, I do not find it hard to think of a very powerful potential confounder. Having one’s life be a mess could easily both powerfully reduce religious attendance and powerfully increase the probability of suicide. In that story, there wouldn’t have to be any causal effect of religious attendance on suicide at all.
Note that one’s life being a mess could both lead to more suicide and reduce any kind of social engagement and community support. So this is a problem not just for showing that religiosity can reduce suicide—which it might through social support and community—but for showing that any other kind of social support and community reduces suicide.
Even if something religious is causally reducing suicide, it definitely doesn’t have to be religious attendance. Anything correlated with religious attendance could also yield the evidence they point to. To see this point, suppose someone very much wanted to attend church, but was geographically too far away to make it feasible. One could easily imagine that if there are religious forces that reduce suicide, many of them might still be operative. Indeed, the authors recognize that it might be a matter of religious belief that both helps lead to religious attendance and reduces suicide:
Although religious service attendance has commonly been used in previous published studies and tends to be the strongest religious predictor of health, religiosity is multidimensional, and different aspects of religion and spirituality may therefore be differently associated with suicide. Data on religious service attendance were collected through a self-reported questionnaire and, moreover, may be subject to measurement error and possible overreporting, although the relative ordering of frequency might still be preserved. Further research could examine other religious practices, mindfulness practices, other aspects of spirituality and religiosity, other race/ethnic and demographic groups, and other forms of social participation.
In the whole paper, the most persuasive evidence about the effect of religiosity on suicide is that religious attendance seemed to have a bigger proportional effect on Catholics than on Protestants. The best confounding stories I can come up with for this result are:

1. Relative to Protestant teaching, Catholic teaching doesn’t stop people from committing suicide, but makes suicide more likely to be underreported when the one who died was a believing and attending Catholic. And those who provide or withhold crucial evidence on cause of death often have religious beliefs and attendance similar to those of the one who died. This could in principle be addressed by looking at differences between the attendance of the one who died and the attendance of those who provided or withheld crucial evidence about the cause of death.

2. Relative to Protestant teaching, Catholic teaching makes people especially unwilling to attend when their lives are in a mess.
Despite these possible stories (which may or may not be true and may or may not have any oomph to them), the fact that the differential content of Catholicism vs. Protestantism seems to matter is the strongest evidence they have that there is causality running from religiosity to reduced suicide.
I would love to see a paper that tried to get identification to test the effect of church attendance on suicide in the way economists typically try to get identification. For example, do people who live farther from the nearest church commit suicide more often? That might be a doable research project. And one could run some good placebo tests by repeating the regressions with closeness to community centers, stores, or bars in place of closeness to a church.
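The placebo logic above can be sketched with hypothetical data (everything below is invented for illustration): if distance to the nearest church predicts suicide but distance to a placebo venue such as a bar does not, that pattern is harder to explain by generic remoteness or social isolation. A minimal linear-probability-model version:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical data: miles to the nearest church and to a placebo venue
# (a bar), plus a rare suicide indicator whose risk rises with church
# distance only -- the pattern the proposed design would look for.
dist_church = rng.exponential(scale=5.0, size=n)
dist_bar = rng.exponential(scale=5.0, size=n)
risk = 1.0 / (1.0 + np.exp(-(-6.0 + 0.1 * dist_church)))
suicide = rng.binomial(1, risk)

def ols_slope(x, y):
    """Slope from a simple bivariate OLS regression of y on x."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (x @ x)

b_church = ols_slope(dist_church, suicide)  # clearly positive
b_bar = ols_slope(dist_bar, suicide)        # near zero: placebo passes
```

In a real study one would of course use distances computed from addresses, control for density and demographics, and use a model suited to a rare binary outcome; the sketch only shows the comparison that makes the placebo test informative.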
For another post at the intersection of religion and statistics, don’t miss “Who Leaves Mormonism?”