[SciPy-User] confidence interval for leastsq fit

josef.pktd at gmail.com
Thu Apr 28 12:16:52 EDT 2011


On Thu, Apr 28, 2011 at 12:01 PM, Bruce Southey <bsouthey at gmail.com> wrote:
> On 04/28/2011 10:31 AM, Till Stensitzki wrote:
>>> gets better as this linearized form approximates the true model and
>>> is correct if the model is linear.
>> True, but in some cases the approximation is quite bad,
>> e.g. the sum of exponentials mentioned earlier.
>>
>>> By exhaustive search I presume you mean
>>> bootstrapping, which is going to do better provided that there are
>>> enough points and samples, depending on the method used and on how
>>> appropriate the model actually is.
>> Bootstrapping is taking random samples and comparing the results of
>> fitting the samples? Nope, by exhaustive search I mean the procedure
>> described in the linked paper. You change the optimal parameter
>> by a small margin, repeat the fit with that one parameter held fixed,
>> and then compare the chi^2 of this fit with the chi^2 of the optimal fit.
>>
>> Till
>>
> If you can estimate this fixed parameter, then you really need to
> include it in your model. That type of 'messing about' does not provide
> the correct values, so of course your confidence intervals will be
> 'wrong'. Wrong because the variances and covariances are *conditional*
> on the fixed value and do not account for the uncertainty in estimating
> the 'fixed' parameter.
>
> If you cannot estimate that parameter, then whatever you get is rather
> meaningless because there are an infinite number of solutions. This is
> not a problem if your desired function is estimable (as in analysis of
> variance with dummy variables, where you estimate the differences, not
> the actual values). Otherwise your answer will depend on the constraint
> imposed, so if the constraint is wrong, so will be the answer.

The approach sounds to me like the same principle as the one behind
profile likelihood, which is just a way of tracing the likelihood
contour, e.g.
http://support.sas.com/documentation/cdl/en/imlug/59656/HTML/default/nonlinearoptexpls_sect19.htm#

My guess is that, under the assumption of a normal additive error, using
the F-statistic or the log-likelihood level or ratio should be pretty
much equivalent.
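
To make that concrete, here is a minimal sketch of such a profile-based
confidence interval around scipy.optimize.leastsq, assuming a normal
additive error. The exponential-decay model, the data, the 95% level,
and the bracketing widths are all made up for illustration; they are not
from the thread or the linked SAS page:

import numpy as np
from scipy import optimize, stats

# --- hypothetical data from a two-parameter exponential decay ---
rng = np.random.default_rng(0)
t = np.linspace(0, 5, 50)
true = np.array([2.0, 1.3])               # amplitude, rate
y = true[0] * np.exp(-true[1] * t) + 0.05 * rng.standard_normal(t.size)

def residuals(params, t, y):
    a, k = params
    return y - a * np.exp(-k * t)

# Full fit: all parameters free.
popt, _ = optimize.leastsq(residuals, x0=[1.0, 1.0], args=(t, y))
ssr_full = np.sum(residuals(popt, t, y) ** 2)
n, p = t.size, popt.size

def ssr_profile(k_fixed):
    # Refit with the rate k held fixed; return the constrained SSR.
    res = lambda a: residuals([a[0], k_fixed], t, y)
    a_hat, _ = optimize.leastsq(res, x0=[popt[0]])
    return np.sum(res(a_hat) ** 2)

# The 95% profile bound solves
#   (SSR(k) - SSR_full) / (SSR_full / (n - p)) = F(1, n - p; 0.95)
f_crit = stats.f.ppf(0.95, 1, n - p)

def excess(k_fixed):
    return (ssr_profile(k_fixed) - ssr_full) / (ssr_full / (n - p)) - f_crit

# Bracket and solve on each side of the optimum; the +/- 1.0 widths are
# assumed wide enough for this synthetic data.
k_lo = optimize.brentq(excess, popt[1] - 1.0, popt[1])
k_hi = optimize.brentq(excess, popt[1], popt[1] + 1.0)
print("k = %.3f, 95%% profile CI: [%.3f, %.3f]" % (popt[1], k_lo, k_hi))

The brentq calls just find where the constrained SSR crosses the F-based
threshold, which is exactly the "perturb one parameter, refit the rest,
compare chi^2" procedure described above.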
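
For comparison, the residual-bootstrap alternative mentioned earlier in
the thread might look roughly like the following, reusing residuals, t,
y, popt, and rng from the sketch above; the 999 replications and the
percentile method are arbitrary illustrative choices:

# Hypothetical residual bootstrap: resample residuals from the full fit,
# refit on each synthetic data set, and read the CI off the percentiles.
fitted = y - residuals(popt, t, y)
resid = y - fitted
k_boot = np.empty(999)
for i in range(k_boot.size):
    y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
    b, _ = optimize.leastsq(residuals, x0=popt, args=(t, y_star))
    k_boot[i] = b[1]
print("bootstrap 95%% CI for k: [%.3f, %.3f]"
      % tuple(np.percentile(k_boot, [2.5, 97.5])))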

Josef

>
> Bruce


