Making more sense with numbers, part 8
Statistical problems have three parts: the setup, the calculation, and the presentation of results. By my understanding, Bayesian and classical (frequentist) statistics differ in all three.
In the setup, Bayesian statistics starts with the development of a probabilistic model and a set of prior probabilities for the parameters of interest. Classical statistics seems to start with the development of a null hypothesis (what if there is no effect from whatever intervention is being considered?) and an alternative hypothesis. The two differ in how one treats the information one has before data collection starts. Some have taken Bayesian approaches to task for the sometimes subjective form of those prior probabilities, but others have pointed out that classical approaches have their subjective moments too, in assuming that the classical assumptions apply to a particular situation. Still others point out that one can pick prior probabilities in a way that doesn't rely on subjective assessments; those tend to be the weakly informative priors you can read about. I'm intrigued by this part of the difference, but it's not the telling difference for me.
In the calculation, the classical statistical approach relies on selecting the appropriate test to decide whether one should reject or fail to reject the null hypothesis, or to calculate confidence intervals for parameters of interest. As some have pointed out, this is not always an easy task, and the tests are not always easily matched to complex problems. With unique problems, one may have to modify the problem to match the method or invent new methods to match the problem.
The Bayesian approach relies on basic probability models, which makes it easier to develop an approach that meets the specific problem at hand. This is a telling difference for me.
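To make "basic probability models" concrete, here is a minimal sketch of Bayesian updating built from nothing but Bayes' theorem, using a coin-flip example of my own invention (the numbers and names are illustrative, not from the post):

```python
from math import comb

# Candidate values for the unknown probability of heads,
# on a simple grid from 0 to 1.
thetas = [i / 100 for i in range(101)]

# A flat (uniform) prior over the candidates.
prior = [1 / len(thetas)] * len(thetas)

# Observed data: 7 heads in 10 flips.
heads, flips = 7, 10

# Likelihood of the data under each candidate theta (binomial model).
likelihood = [comb(flips, heads) * t**heads * (1 - t)**(flips - heads)
              for t in thetas]

# Bayes' theorem: posterior is proportional to prior times likelihood,
# then normalize so the probabilities sum to one.
unnorm = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# The result is a full distribution over theta; for example, its mean:
post_mean = sum(t * p for t, p in zip(thetas, posterior))
```

The point is that nothing here requires picking a named test: the model (binomial likelihood plus a prior) is stated directly, and the same recipe extends to whatever structure the problem at hand actually has.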
There is a problem. Except for the simpler cases (for example, see the original Making sense with numbers), it's often hard to carry out the integration involved in making the calculations. Markov chain Monte Carlo (MCMC) approaches make that much more approachable, but they're not things one carries out on the back of an envelope.
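To give a sense of why MCMC is not a back-of-the-envelope affair, here is a toy Metropolis sampler I've sketched for the same illustrative coin-flip posterior (7 heads in 10 flips, flat prior); the tuning numbers are assumptions of mine, not anything from the post:

```python
import math
import random

def log_unnorm_posterior(theta, heads=7, flips=10):
    """Log of the unnormalized posterior for a coin's heads probability
    under a binomial likelihood and a flat prior."""
    if not 0 < theta < 1:
        return float("-inf")  # zero posterior outside (0, 1)
    return heads * math.log(theta) + (flips - heads) * math.log(1 - theta)

def metropolis(n_samples=20000, step=0.1, seed=0):
    rng = random.Random(seed)
    theta = 0.5  # arbitrary starting point
    samples = []
    for _ in range(n_samples):
        # Propose a nearby value, then accept it with probability
        # min(1, posterior ratio); otherwise keep the current value.
        proposal = theta + rng.gauss(0, step)
        if math.log(rng.random()) < (log_unnorm_posterior(proposal)
                                     - log_unnorm_posterior(theta)):
            theta = proposal
        samples.append(theta)
    return samples

samples = metropolis()
kept = samples[2000:]  # discard warm-up draws before summarizing
approx_mean = sum(kept) / len(kept)
```

Even this tiny example needs choices about the proposal step size, the number of draws, and how much warm-up to discard, which is exactly the kind of machinery that takes the calculation off the back of the envelope.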
Finally, there's the presentation of the data. This, too, is telling for me. While the classical approach gets tied up in explaining precisely what it means to reject the null hypothesis or what a confidence interval means, the Bayesian result means exactly what most of us likely think when we hear a statistical result: it states the probability of a particular event we care about happening.
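The "means what you think it means" point can be shown in a few lines. Sticking with an illustrative coin-flip example of my own (7 heads in 10 flips, flat prior), the Bayesian output is a direct probability statement about the question we actually care about:

```python
from math import comb

# Grid of candidate values for the coin's heads probability.
thetas = [i / 100 for i in range(101)]

# With a flat prior, the posterior is proportional to the
# binomial likelihood of 7 heads in 10 flips.
likelihood = [comb(10, 7) * t**7 * (1 - t)**3 for t in thetas]
total = sum(likelihood)
posterior = [l / total for l in likelihood]

# The probability that the coin is biased toward heads,
# given the data: a plain statement about the event of interest.
p_biased = sum(p for t, p in zip(thetas, posterior) if t > 0.5)
```

Contrast that with a confidence interval, which is a statement about the long-run behavior of the procedure, not the probability that the parameter lies in any particular interval.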
I'm still looking for a short, easy-to-read but complete elevator speech from a statistician on the topic that's consistent with some of what Andrew Gelman writes (I think he has some excellent writing on the subject, but I'm not sure I've found anything that fits the elevator speech model). In the meantime, Bayesian Statistical Inference for Psychological Research may help some begin to understand; it shows a simple but powerful application of Bayes' Theorem, although it's somewhat old chronologically and rather more simple than what one would recognize today as Bayesian analysis. Some might also enjoy Why we (usually) don't have to worry about multiple comparisons.
Objections to Bayesian statistics actually does contain an elevator speech about Bayesian inference, even if it is a bit mathematically concise: "'Bayesian inference' represents statistical estimation as the conditional distribution of parameters and unobserved data, given observed data."
It's a bit longer than an elevator speech, but Dr. David Lucy of Lancaster University does have a short introduction to Bayesian methods that may help; it's part of his CFAS415a course materials.
If you've got a great but simple introduction that can explain the difference between Bayesian and classical inference well, please add it to the comments here! Thanks.