Wednesday, April 07, 2010


Causality, sometimes framed as blame, is a perennially popular topic.  Andrew Gelman recently blogged about it at some length (enough length that I'll let you go read his ideas without adding to them here).

While I quite appreciate deep thinking into such topics, I'm attracted to Jane Davidson's rather pragmatic approach in Why genuine evaluation must include causal inference.

Jane seems to have built some of her ideas on work by Michael Scriven.  As I don't have a single ready reference for his ideas on the subject, I'll let you explore Google Scholar for more information (a good source for finding more of Jane's work, too).



Blogger Jane D. said...

Thanks for the mention, Bill, and for the link to Andrew Gelman's lengthy article.

I do think Andrew Gelman massively overcomplicates the issue by assuming that ALL causes of an observed effect need to be explained. In evaluation, the only important causal questions are (1) whether the program (or other evaluand) was at least "a" reasonably substantial (i.e. non-trivial) cause, not "the" cause, of a particular outcome or set of outcomes, and if so, (2) how substantial the contribution was (i.e. how much credit should the program get as one contributor in the mix?).

Like many others, he goes from "gosh, it's quite tricky, isn't it?" to "oh well, obviously nothing useful can be done". This all-or-nothing mentality has a lot to answer for, e.g. saying that if an RCT or decent quasi-experimental design with sufficient statistical power can't be done, then there's no point in saying anything about outcomes (see the Blueprint Evaluation for an example).

Jane Davidson

08 April, 2010 04:01  
