[SciPy-User] confidence interval for leastsq fit

josef.pktd at gmail.com josef.pktd at gmail.com
Thu Apr 28 12:56:13 EDT 2011


On Thu, Apr 28, 2011 at 12:42 PM, Bruce Southey <bsouthey at gmail.com> wrote:
> On 04/28/2011 11:16 AM, josef.pktd at gmail.com wrote:
>> On Thu, Apr 28, 2011 at 12:01 PM, Bruce Southey<bsouthey at gmail.com>  wrote:
>>> On 04/28/2011 10:31 AM, Till Stensitzki wrote:
>>>>> gets better as this linearized form approximates the true model and
>>>>> is correct if the model is linear.
>>>> True, but in some cases the approximation is quite bad,
>>>> e.g. the mentioned sum of exponentials.
>>>>
>>>>> By exhaustive search I presume you mean
>>>>> bootstrapping, which is going to be better provided that there are enough
>>>>> points and samples, depending on the method used and how appropriate the model actually is.
>>>> Bootstrapping is to take random samples and compare the results of
>>>> fitting the samples? Nope, by exhaustive search I mean the procedure
>>>> described by the linked paper. You change the optimal parameter
>>>> by a small margin, repeat the fit with that one parameter held fixed, and
>>>> then compare the chi^2 of this fit with the chi^2 of the optimal fit.
>>>>
>>>> Till
>>>>
>>> If you can estimate this fixed parameter, then you really need to
>>> include it in your model. That type of 'messing about' does not provide the
>>> correct values, so of course your confidence intervals will be 'wrong'.
>>> Wrong because the variances and covariances are *conditional* on the
>>> fixed value and do not account for the uncertainty in estimating the
>>> 'fixed' parameter.
>>>
>>> If you cannot estimate that parameter, then whatever you get is rather
>>> meaningless because there are an infinite number of solutions. This is
>>> not a problem if your desired function is estimable (as in analysis of
>>> variance with dummy variables, where you estimate the differences, not the
>>> actual values). Otherwise your answer will depend on the constraint
>>> imposed, so if the constraint is wrong, the answer will be too.
>> The approach sounds to me like the same principle as behind profile
>> likelihood, which is just a way to trace the likelihood contour.
>> e.g.
>> http://support.sas.com/documentation/cdl/en/imlug/59656/HTML/default/nonlinearoptexpls_sect19.htm#
>>
>> My guess is that, under the assumption of a normal additive error, using the
>> F-statistic or the loglikelihood level or ratio might be pretty much
>> equivalent.
>>
>> Josef
>>
>>
> Perhaps the 'decidedly inferior' marginal profile confidence intervals
> are closer, as you fix all but one parameter and compare that to the full
> likelihood.
> http://www.unc.edu/courses/2010fall/ecol/563/001/docs/lectures/lecture11.htm#marginal
>
> It also relies rather heavily on the assumption that the likelihood
> ratio statistic is chi-squared distributed.
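
As a quick numeric check of the F versus likelihood-ratio point from above
(a one-off illustration only; the sample size 1000 is an arbitrary choice,
not anything from this thread):

from scipy import stats
print(stats.chi2.ppf(0.95, 1))     # ~3.84: LR cutoff for one profiled parameter
print(stats.f.ppf(0.95, 1, 1000))  # ~3.85: F(1, n-k) cutoff, nearly the same for large n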

From Till's reference:
"""
In the exhaustive search procedure the rate constant k_i was changed
(increased or decreased) from its optimal value by a certain fraction.
Subsequently, a new minimization of chi^2 was performed in which k_i is
fixed, whereas all other fit parameters were allowed to relax, in order
to find a new minimum on the reduced chi^2-surface. The rate constant k_i
was thus increased/decreased stepwise until the new minimal value for
the reduced chi^2 was significantly worse than chi^2_min. This procedure
mapped the complete chi^2-surface around chi^2_min. It provided error ranges
for each rate constant, at a given statistical accuracy.
"""

Hold one parameter fixed and minimize chi^2 (or maximize the loglikelihood)
with respect to all other parameters
(same idea as concentrated likelihood).
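
A rough sketch of that stepping procedure with scipy.optimize.leastsq
(untested; the residual function residuals(params, x, y), the data arrays,
the step fraction and the F-based cutoff under normal additive errors are
my own placeholders, not taken from the paper):

import numpy as np
from scipy import optimize, stats

def chi2(params, x, y):
    r = residuals(params, x, y)   # user-supplied residual function (assumed)
    return np.dot(r, r)

def profile_ci(best, x, y, index, step=0.01, alpha=0.05, max_steps=200):
    # Step parameter `index` away from its optimum, refitting all other
    # parameters, until the refitted chi^2 is significantly worse than
    # the chi^2 at the full optimum.
    chi2_min = chi2(best, x, y)
    n, k = len(y), len(best)
    # cutoff from the F-distribution for one constrained parameter
    crit = chi2_min * (1 + stats.f.ppf(1 - alpha, 1, n - k) / (n - k))
    bounds = []
    for direction in (-1, 1):
        value = best[index]
        for _ in range(max_steps):
            # change the parameter by a fraction of its optimal value
            value = value + direction * step * abs(best[index])
            free_start = np.delete(best, index)
            def resid_fixed(free, value=value):
                full = np.insert(free, index, value)
                return residuals(full, x, y)
            # refit all other parameters with parameter `index` held fixed
            free_opt, _ = optimize.leastsq(resid_fixed, free_start)
            full = np.insert(free_opt, index, value)
            if chi2(full, x, y) > crit:
                break
        bounds.append(value)
    return min(bounds), max(bounds)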

Josef


>
> Bruce


