I have just read a very nice paper on how badly confidence levels are interpreted as probabilities of events in Particle Physics:

It explains well how the frequentist hypothesis-test language can be misleading, to say the least. To be honest, it's amazing how, after so many examples showing that Bayesian (although it should be called Laplacian) reasoning is the correct way of doing probabilistic inference, people are still stubborn enough to reject it for the silliest reasons.

It's extremely simple to see that the frequentist approach is a particular case of the much more general Bayesian framework (once more, read Jaynes or Sivia). The last arguments I heard against it were related to quantum mechanics. Some people say that, because probabilities are fundamental in QM, the frequentist framework is the correct one. That's nonsense, of course. QM probabilities are still Bayesian. They give the odds of *result* **a** *given* the preparation of an experiment. Obviously, as always, they coincide with the frequentist calculations in the long run, but still, whenever your quantum state is |+>, you can safely say that the probability of measuring spin up or spin down is the same, even if you do only one experiment (not an infinite number of them). The meaning is simply that you have no reason to prefer either result, up or down, in your next measurement.
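As a minimal sketch of that point (illustrative, not taken from the paper), the Born rule assigns equal odds to the two outcomes for the |+> state, regardless of how many measurements you intend to run:

```python
import numpy as np

# |+> = (|0> + |1>) / sqrt(2), written in the {|0>, |1>} (up/down) basis
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Born rule: P(outcome) = |<outcome|psi>|^2
p_up = abs(np.dot([1.0, 0.0], plus)) ** 2
p_down = abs(np.dot([0.0, 1.0], plus)) ** 2

print(p_up, p_down)  # both ≈ 0.5
```

The 1/2 here is a statement about a single upcoming measurement, not about a long-run frequency; the frequencies only enter when you repeat the experiment.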
The important thing to remember is this: frequentists cannot say that your NEXT coin throw has a probability of 1/2 tails and 1/2 heads, because that *should* be meaningless for them (although it can be masked by a lot of tricky justifications). On the other hand, that kind of reasoning is totally allowed by Bayesian inference, where it has the simple and intuitively clear meaning that there is no reason to favour either tails or heads.
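A minimal sketch of that Bayesian statement about the next throw (my own illustration, with a Beta prior as an assumption): under a uniform Beta(1, 1) prior, the posterior-predictive probability of heads on the next toss is well defined even before any data at all, and it is exactly 1/2 by symmetry.

```python
from fractions import Fraction

def predictive_heads(alpha, beta, heads=0, tails=0):
    """Posterior-predictive P(next toss is heads) under a Beta(alpha, beta)
    prior, after observing `heads` heads and `tails` tails."""
    a, b = alpha + heads, beta + tails
    return Fraction(a, a + b)

print(predictive_heads(1, 1))                     # -> 1/2, from symmetry alone, no data needed
print(predictive_heads(1, 1, heads=7, tails=3))   # -> 2/3, updated after ten observed tosses
```

The first line is precisely the statement a frequentist cannot make: a probability for a single future event, with zero observed trials.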
That should be simple, shouldn't it? In this case, it is.

## 2 comments:

By contrast, what I don't understand is how so many people have been misled by alleged criticisms of frequentist error-statistical methods. They depend on a series of howlers, I'm afraid. I admit that there are silly and unthinking uses of frequentist methods, and my own philosophy of statistics provides a basis for avoiding the flaws and fallacies they involve. It is not surprising that Bayesians are increasingly looking to frequentist principles to find a foundation for their own work. See my blog:

errorstatistics.com

As I said, the issue with the frequentist approach is not that the methods don't work. It's well known that frequentist methods and Bayesian methods give the same answers when the dataset is large. Still, Bayesian methods let you talk about probabilities without needing a large sample size.
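That large-sample agreement is easy to check numerically. A small sketch (my own example, with a simulated biased coin and a uniform Beta(1, 1) prior as assumptions): the frequentist maximum-likelihood estimate and the Bayesian posterior mean differ by at most 1/(n + 2), so the prior washes out as the data grows.

```python
import random

random.seed(0)

n = 100_000
true_p = 0.3
heads = sum(random.random() < true_p for _ in range(n))

mle = heads / n                    # frequentist point estimate
post_mean = (heads + 1) / (n + 2)  # Bayesian posterior mean, Beta(1, 1) prior

print(abs(mle - post_mean))  # tiny: bounded by 1/(n + 2)
```

For small n the two answers can differ noticeably, which is exactly where the Bayesian machinery earns its keep.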

As for Bayesians looking to frequentist principles to find a foundation, I would be grateful if you could point me to some references that I could check.

~Roberto.
