Modern Science and the Bayesian-Frequentist Controversy

The Bayesian-Frequentist debate reflects two different attitudes to the process of doing science, both quite legitimate. Bayesian statistics is well-suited to individual researchers, or a research group, trying to use all the information at its disposal to make the quickest possible progress. In pursuing progress, Bayesians tend to be aggressive and optimistic with their modeling assumptions. Frequentist statisticians are more cautious and defensive. One definition says that a frequentist is a Bayesian trying to do well, or at least not too badly, against any possible prior distribution. The frequentist aims for universally acceptable conclusions, ones that will stand up to adversarial scrutiny. The FDA, for example, doesn’t care about Pfizer’s prior opinion of how well its new drug will work; it wants objective proof. Pfizer, on the other hand, may care very much about its own opinions in planning future drug development.1

To me, it’s amazing how similar the ambiguous regions of behavioral decision theory are to the major questions of theoretical statistics: people seem largely unable to decide systematically whether they want to be minimaxing (which seems very close to Efron’s vision of frequentist thought as stated here) or minimizing expected risk (which is closer to my own vision of Bayesian thinking). My own sense is that we learn as a global culture, over time, which error functions are least erroneous, and we do so largely by trial and error.
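
To make the minimax-versus-expected-risk contrast concrete, here is a small, purely illustrative sketch (the loss table and prior are invented for this example): a minimax rule picks the action whose worst-case loss is smallest, while a Bayes rule picks the action whose prior-weighted expected loss is smallest, and on the same problem the two can disagree.

```python
import numpy as np

# Toy decision problem: rows are actions, columns are states of nature.
# Entry [a, s] is the loss of taking action a when the true state is s.
# (All numbers here are made up purely for illustration.)
losses = np.array([
    [1.0, 1.0, 1.0],   # "defensive" action: mediocre in every state
    [0.2, 0.5, 3.0],   # "aggressive" action: great usually, bad in one state
    [0.4, 0.9, 1.5],   # something in between
])

# A hypothetical prior over the three states of nature.
prior = np.array([0.5, 0.4, 0.1])

# Minimax: choose the action that minimizes the worst-case loss over states.
worst_case = losses.max(axis=1)            # [1.0, 3.0, 1.5]
minimax_action = int(worst_case.argmin())  # action 0, the defensive choice

# Bayes: choose the action that minimizes expected loss under the prior.
expected = losses @ prior                  # [1.0, 0.6, 0.71]
bayes_action = int(expected.argmin())      # action 1, the aggressive choice

print("minimax picks action", minimax_action)
print("Bayes picks action  ", bayes_action)
```

The defensive action never does badly, so the minimax rule prefers it; the aggressive action looks best on average under this particular prior, so the Bayes rule prefers it. That disagreement is the cautious-versus-optimistic split Efron describes, compressed into three numbers per action.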

Most interesting to me is to consider individual differences in the error functions people effectively use: I suspect political preferences correlate with a propensity for worst-case thinking rather than average-case thinking. I’m also fascinated by the way a single person switches between worst-case and average-case thinking: I suspect there’s as much to be learned here as there was in understanding what drives risk-seeking behavior and what drives risk-averse behavior.

HT: John D. Cook


  1. Bradley Efron: Modern Science and the Bayesian-Frequentist Controversy

2 responses to “Modern Science and the Bayesian-Frequentist Controversy”

  1. Harlan

    Thanks for posting this, interesting. There’s a lot to be said about average-case/worst-case thinking in various disciplines. I’m building a Bayesian prediction model for work, and we’ve done a fair amount of back-and-forth already about what summary statistics various business users should use. In some cases, the posterior mean makes a lot of sense, especially when the predictions are really just a way to rank things. In other cases, something like the posterior 25th percentile, or 10th percentile, or 5th percentile makes more sense, if what you’re really trying to do is avoid worst-case scenarios (which tend to have properties like being dramatically unprofitable); a sketch of this mean-versus-percentile comparison follows the comments. It’s both an educational opportunity for business folks and an interesting question for me about what it really means to predict something.

    I’m also reminded of the worst-case/adversarial analyses that dominate computational learning theory, and how they seem sorta irrelevant in real-world statistical applications, at least until those odd cases where worst-case behavior becomes critically important…

  2. John Cook

    One advantage of worst-case analysis over average-case analysis is that the former makes fewer assumptions. You have to assume a probability model before you can say what “average” means, and that model may be wrong. In many situations, you know the worst case with certainty but you don’t really know the average case.
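
As a minimal sketch of the mean-versus-percentile comparison Harlan describes above, suppose we already have posterior draws of profit for a few candidate projects (the distributions below are invented stand-ins for real posterior samples): ranking by the posterior mean and ranking by a low percentile can point to different choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are posterior draws of profit for three candidate projects,
# e.g. pulled from an MCMC fit.  The distributions are invented: one option
# is safe but modest, another has a high mean but a heavy downside.
posterior_draws = {
    "safe":       rng.normal(loc=1.0, scale=0.3, size=5000),
    "risky":      rng.normal(loc=1.6, scale=2.0, size=5000),
    "in_between": rng.normal(loc=1.2, scale=0.8, size=5000),
}

for name, draws in posterior_draws.items():
    mean = draws.mean()              # average-case summary, natural for ranking
    p25 = np.percentile(draws, 25)   # lower quartile
    p05 = np.percentile(draws, 5)    # "how bad could this plausibly get?"
    print(f"{name:11s} mean={mean:5.2f}  p25={p25:5.2f}  p05={p05:5.2f}")

# Ranking by posterior mean favors "risky"; ranking by the 5th percentile
# favors "safe": the same average-case versus worst-case tension discussed above.
```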