The Social Science Statistics Blog has an interesting article discussing a recent paper on the death penalty and whether it has a deterrent effect on crime (specifically murder). Here is an excerpt:
…since 1960, will find that homicide rates went up when the death penalty went away, and then homicide rates declined when the death penalty was re-instituted (see Figure 1 of the Donohue and Wolfers paper), and similar patterns have happened within states. So it’s not a surprise that regression analyses have found a deterrent effect. But, as noted, the difficulties arise because of the observational nature of the treatment, and the fact that other policies are changed along with the death penalty. …The problem is that we aren’t sure, and we probably never will be unless someone gets to randomly assign death penalty policy to states or countries. This raises a problem that we often face in social science: there are questions that are interesting, and there are questions that we can answer, and the intersection of those two categories is probably a lot smaller than any of us would like. This doesn’t seem to be a realization that has crept into the media as of yet, so it is no surprise that studies that purport to give answers to interesting questions will get more coverage than those pointing out why those answers probably don’t mean very much.
There are three often-mentioned criteria for causality: 1) the two variables are associated (there is a correlation between the death penalty and murder rates), 2) the cause comes before the effect (which seems to be the case in the recent paper), and 3) all alternative explanations are ruled out. This third criterion is the one not satisfied in this recent research, and it is probably the most difficult to satisfy. The authors of the entry above point out that one way to satisfy it would be an experimental design in which the researcher controls when the death penalty is instituted.
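To make the third criterion concrete, here is a minimal simulation sketch in Python. All variable names and numbers are hypothetical and are not taken from the Donohue and Wolfers paper: an unobserved "policy climate" drives both death penalty adoption and other crime-reducing policies, so a naive observational comparison shows an apparent deterrent effect even though the death penalty has no causal effect in the simulation, while random assignment of the policy makes that apparent effect disappear.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a latent "tough on crime" policy climate drives BOTH
# death penalty adoption and other crime-reducing policies. The death penalty
# itself has zero causal effect on the simulated homicide rate.
n_units = 5000  # hypothetical state-years
policy_climate = rng.normal(size=n_units)                               # unobserved confounder
death_penalty = (policy_climate + rng.normal(scale=0.5, size=n_units)) > 0
other_policies = policy_climate + rng.normal(scale=0.5, size=n_units)
homicide_rate = 10 - 2.0 * other_policies + rng.normal(size=n_units)    # no death-penalty term

# Naive observational comparison: criteria 1 and 2 can both hold,
# yet the gap below is driven entirely by the confounder.
naive_gap = homicide_rate[death_penalty].mean() - homicide_rate[~death_penalty].mean()
print(f"observed gap (with vs. without death penalty): {naive_gap:.2f}")

# Random assignment breaks the link to the confounder, and the gap vanishes.
random_penalty = rng.random(n_units) > 0.5
random_gap = homicide_rate[random_penalty].mean() - homicide_rate[~random_penalty].mean()
print(f"gap under (simulated) random assignment: {random_gap:.2f}")
```

The point is not that this is what actually happened, only that association and temporal ordering can both be present while the deterrence conclusion is still produced by policies that change alongside the death penalty.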