[SciPy-User] "small data" statistics

Sturla Molden sturla at molden.no
Fri Oct 12 07:30:10 EDT 2012


On 12.10.2012 13:12, Sturla Molden wrote:

> * The Bayesian approach is not scale invariant. A monotonic transform
> like y = f(x) can yield a different conclusion if we analyze y instead
> of x.

And this, by the way, is what really pissed off Ronald A. Fisher, the 
father of the "p-value". He constructed the p-value as a heuristic for 
assessing H0 specifically to avoid this issue. Fisher never accepted the 
significance testing framework (type-1 and type-2 error rates) of 
Neyman and Pearson, since experiments are seldom repeated. In fact, the 
p-value has nothing to do with significance testing in that sense.
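To make the non-invariance quoted above concrete, here is a toy sketch 
(my own illustration, nothing from the original argument beyond the 
general point): put a flat prior on a binomial proportion p, or put a 
flat prior on its log-odds log(p/(1-p)). The two "uninformative" choices 
induce different posteriors for the same data, so the conclusion depends 
on which scale we analyze:

from scipy import stats

k, n = 7, 10  # 7 successes in 10 trials

# Flat prior on p itself -> posterior is Beta(k+1, n-k+1).
post_p = stats.beta(k + 1, n - k + 1)

# Flat prior on the log-odds log(p/(1-p)) induces the improper
# Beta(0, 0) prior on p -> posterior is Beta(k, n-k),
# which is proper here since 0 < k < n.
post_logit = stats.beta(k, n - k)

# Same data, same question (is p > 0.5?), two different numbers:
print("P(p > 0.5), flat prior on p:        %.3f" % (1 - post_p.cdf(0.5)))
print("P(p > 0.5), flat prior on log-odds: %.3f" % (1 - post_logit.cdf(0.5)))

The two probabilities differ, even though y = log(p/(1-p)) is just a 
monotonic relabelling of p.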

To correct the other issues with the p-value, Fisher later constructed a 
different kind of analysis he called "fiducial inference". It is not 
commonly used today.

It is based on viewing hypothesis testing as signal processing:

measurement = signal + noise

The noise is considered random and the signal is the truth about H0.
Fisher argued that we can infer the truth about H0 by subtracting the 
random noise from the collected data. The method has none of the 
absurdities of Bayesian and classical statistics, but for some reason it 
never became popular among practitioners.
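To see what "subtracting the noise" means in the simplest textbook case 
(a sketch of my own, assuming normally distributed noise with known 
sigma): the pivot Z = (xbar - mu)/(sigma/sqrt(n)) ~ N(0, 1) can be 
inverted to mu = xbar - Z*sigma/sqrt(n), so the fiducial distribution of 
mu is the observed mean minus the noise distribution:

import numpy as np
from scipy import stats

# Toy data: n observations with known noise level sigma.
np.random.seed(0)
mu_true, sigma, n = 10.0, 2.0, 25
x = np.random.normal(mu_true, sigma, n)
xbar = x.mean()

# Invert the pivot Z = (xbar - mu) / (sigma / sqrt(n)) ~ N(0, 1):
# mu = xbar - Z * sigma / sqrt(n), so the fiducial distribution of mu
# is N(xbar, sigma**2 / n).
fiducial = stats.norm(loc=xbar, scale=sigma / np.sqrt(n))

# 95% fiducial interval for mu.
lo, hi = fiducial.ppf([0.025, 0.975])
print("observed mean %.2f, 95%% fiducial interval (%.2f, %.2f)"
      % (xbar, lo, hi))

In this simple case the fiducial interval coincides numerically with the 
usual confidence interval and with the flat-prior Bayesian interval; the 
three only start to diverge in more complicated models.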


Sturla