[SciPy-User] Unit testing of Bayesian estimator

Anne Archibald peridot.faceted at gmail.com
Mon Nov 9 13:14:36 EST 2009


2009/11/9  <josef.pktd at gmail.com>:

> From the posterior probability S/(S+1), you could construct
> a decision rule similar to a classical test, e.g. accept the null
> if S/(S+1) < 0.95, and then construct a Monte Carlo simulation
> with samples drawn from either the uniform or the pulsed
> distribution in the same way as for a classical test, and
> verify that the decision mistakes, the alpha and beta errors, in the
> sample are close to the posterior probabilities.
> The posterior probability would be similar to the p-value
> in a classical test. If you want to balance alpha and
> beta errors, a threshold S/(S+1) < 0.5 would be more
> appropriate, but for the unit tests it wouldn't matter.
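
(If I understand the suggestion correctly, the check would look
something like the sketch below. The pulsation_posterior here is only
a toy stand-in - a model with a known sinusoidal pulse profile, a flat
prior on the pulsed fraction marginalized on a grid, and equal prior
odds - not my actual estimator; the sample size, pulsed fraction, and
threshold are just the numbers from this thread.)

import numpy as np

np.random.seed(0)

def pulsation_posterior(phases, n_grid=201):
    # Toy stand-in for the real estimator: pulse profile
    # p(x | f) = 1 + f*cos(2*pi*x) on [0, 1), a flat prior on the
    # pulsed fraction f, and equal prior odds for uniform vs. pulsed.
    f = np.linspace(0.0, 1.0, n_grid)
    loglik = np.log(1.0 + f[np.newaxis, :]
                    * np.cos(2 * np.pi * phases)[:, np.newaxis]).sum(axis=0)
    m = loglik.max()
    log_S = m + np.log(np.exp(loglik - m).mean())  # log Bayes factor;
                                                   # uniform model has log likelihood 0
    return 1.0 / (1.0 + np.exp(-log_S))            # posterior = S/(S+1)

def draw_pulsed(n, f):
    # Rejection-sample n phases from the toy density 1 + f*cos(2*pi*x).
    out = np.empty(0)
    while len(out) < n:
        x = np.random.random(2 * n)
        keep = np.random.random(2 * n) * (1 + f) < 1 + f * np.cos(2 * np.pi * x)
        out = np.concatenate([out, x[keep]])
    return out[:n]

n_photons, frac, n_trials, threshold = 10000, 0.05, 200, 0.95
alpha = beta = 0
for i in range(n_trials):
    if pulsation_posterior(np.random.random(n_photons)) >= threshold:
        alpha += 1   # uniform fake data declared pulsed
    if pulsation_posterior(draw_pulsed(n_photons, frac)) < threshold:
        beta += 1    # pulsed fake data declared uniform
print("empirical alpha:", alpha / float(n_trials),
      "empirical beta:", beta / float(n_trials))

The loop just counts how often uniform fake data sets are declared
pulsed (alpha errors) and how often weakly pulsed ones are declared
uniform (beta errors).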

Unfortunately this doesn't work. Think of it this way: if my data size
is 10000 photons, and I look at the fraction of uniformly-distributed
fake data sets that get a posterior probability > 0.95 of being
pulsed, that fraction is not 5% - it is essentially zero, since 10000
photons are enough to give a very solid answer (experiment confirms
this). So I can't interpret my Bayesian posterior probability as a
frequentist probability of alpha error.

> Running the example a few times, it looks like the power
> is relatively low for distinguishing a uniform distribution from
> a pulsed distribution with fraction/binomial parameter 0.05
> and sample size <1000.
> If you have strong beliefs that the fraction is really this low,
> then an informative prior for the fraction might improve the
> results.
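
(In the toy sketch above, an informative prior on the fraction would
just reweight the grid average, for example with something like this
hypothetical Beta(2, 38) prior, which has mean 0.05:)

import numpy as np

np.random.seed(1)

def draw_pulsed(n, f):
    # Rejection-sample n phases from the toy density 1 + f*cos(2*pi*x).
    out = np.empty(0)
    while len(out) < n:
        x = np.random.random(2 * n)
        keep = np.random.random(2 * n) * (1 + f) < 1 + f * np.cos(2 * np.pi * x)
        out = np.concatenate([out, x[keep]])
    return out[:n]

def log_bayes_factor(phases, f_grid, prior_weights):
    # Same toy model as in the sketch above; the prior on the pulsed
    # fraction enters only through the normalized grid weights.
    loglik = np.log(1.0 + f_grid[np.newaxis, :]
                    * np.cos(2 * np.pi * phases)[:, np.newaxis]).sum(axis=0)
    m = loglik.max()
    return m + np.log((prior_weights * np.exp(loglik - m)).sum())

f_grid = np.linspace(0.0, 1.0, 201)
flat = np.ones_like(f_grid) / len(f_grid)
# Hypothetical informative prior: a Beta(2, 38) density (mean 0.05),
# discretized and renormalized on the same grid.
informative = f_grid * (1.0 - f_grid) ** 37
informative /= informative.sum()

phases = draw_pulsed(500, 0.05)   # small, weakly pulsed fake data set
for name, weights in [("flat prior", flat), ("Beta(2,38) prior", informative)]:
    log_S = log_bayes_factor(phases, f_grid, weights)  # uniform model: log lik 0
    print(name, "posterior P(pulsed) =", 1.0 / (1.0 + np.exp(-log_S)))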

I really don't want to encourage my code to return reports of
pulsations. To be believed in this nest of frequentists I work with, I
need a solid detection in spite of very conservative priors.

Anne


