[SciPy-dev] confidence intervals on multi-parameter minimisation

Robert Kern robert.kern at gmail.com
Wed Mar 21 18:45:54 EDT 2007


Graeme O'Keefe wrote:
> thanks,
> 
> I've gone through the docstrings and test_odr.py, quite  
> straightforward, well packaged.

Thank you!

> I still run fmin_l_bfgs_b to bound the solution and then I use that  
> as a starting point.
> 
> Of course, now I know my model is really bad from the parameter sd_beta.

Take those uncertainty estimates with a grain of salt. They are based on a
linearization (quadraticization, really) of the loss function around the
optimal parameters. If the loss function is quite flat around the optimum but
rises more sharply farther out than the paraboloid implied by the Hessian
there, then the reported uncertainties can be much larger than is really
warranted.
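
For concreteness, here is the shape of that approximation, sketched for
ordinary least squares (the ODR case differs in detail). With residual vector
r(beta), Jacobian J = dr/dbeta at the optimum beta_hat, n data points, and p
parameters, the covariance estimate is roughly

  cov_beta \approx s^2 (J^T J)^{-1},   s^2 = r(beta_hat)^T r(beta_hat) / (n - p)

and sd_beta is the square root of its diagonal. J^T J only captures the local
curvature at beta_hat; it knows nothing about how the loss behaves away from
the optimum, which is exactly why these estimates can mislead.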

I always recommend doing a little Monte Carlo post-mortem to verify the
estimates. Generate parameter values with numpy.random.multivariate_normal(),
using the optimal parameter values as the mean and cov_beta as the covariance
matrix. Throw away any draws that fall outside your bounds. Then push the
surviving parameter sets through your model and plot all of the resulting
curves against your data. If you are doing ordinary least squares, it is also
quite easy to evaluate the loss function for each draw and plot its
distribution. That is more difficult with orthogonal distance regression,
because the routine that finds the orthogonal distances is not exposed by
itself.
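
A minimal sketch of that post-mortem, with a toy exponential model and
made-up numbers standing in for real fit results (model, beta_opt, cov_beta,
lower, and upper are all placeholders here; in practice they come from your
fit, e.g. output.beta and output.cov_beta from scipy.odr, and from the bounds
you gave fmin_l_bfgs_b):

import numpy as np

# Toy stand-ins for real fit results (all hypothetical); substitute
# your own model, optimal parameters, covariance, and bounds.
def model(beta, x):
    return beta[0] * np.exp(-beta[1] * x) + beta[2]

x = np.linspace(0.0, 5.0, 50)
y = model((2.0, 1.3, 0.5), x) + np.random.normal(0.0, 0.05, x.shape)
beta_opt = np.array([2.0, 1.3, 0.5])
cov_beta = np.diag([0.01, 0.02, 0.001])
lower = np.array([0.0, 0.0, 0.0])
upper = np.array([10.0, 5.0, 2.0])

# Draw parameter vectors from the linearized distribution.
draws = np.random.multivariate_normal(beta_opt, cov_beta, 1000)

# Throw away draws that violate the bounds.
keep = np.all((draws >= lower) & (draws <= upper), axis=1)
draws = draws[keep]

# Push the survivors through the model; plotting these curves over
# the data shows the spread the estimates imply.
curves = np.array([model(b, x) for b in draws])

# For ordinary least squares, the loss per draw is easy, too;
# a histogram of losses shows its distribution.
losses = ((y - curves) ** 2).sum(axis=1)
print(len(draws), "draws kept; loss range:", losses.min(), losses.max())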

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth."
  -- Umberto Eco


