[SciPy-User] fmin_slsqp exit mode 8

josef.pktd at gmail.com
Sat Sep 29 11:31:18 EDT 2012


On Sat, Sep 29, 2012 at 7:05 AM, Pauli Virtanen <pav at iki.fi> wrote:
> On 29.09.2012 02:24, josef.pktd at gmail.com wrote:
> [clip]
>> I tried to scale down the objective function and gradient, and it works
>>
>> np.linalg.eigvals(poisson_l1_res._results.model.hessian(poisson_l1_res.params))
>> array([-588.82869149,  -64.89601886,  -13.81251974,   -6.90900488,
>>          -0.74415772,   -0.48190709,   -0.03863475,   -0.34855895,
>>          -0.28063095,   -0.16671642])
>>
>> I can impose a high penalization factor and still get a successful
>> mode=0 convergence.
>> I'm not sure the convergence has actually improved in relative terms.
>>
>> (Now I just have to figure out if we want to consistently change the
>> scaling of the loglikelihood, or just hack it into L1 optimization.)
>
> Ideally, the SLSQP algorithm itself would be scale invariant, but
> apparently something inside the code assumes that the function values
> (and maybe gradients) are "of the order of one".

That sounds like the right explanation.
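
Roughly, the rescaling looks like this (a self-contained toy Poisson
model for illustration, not the statsmodels code; dividing the negative
loglikelihood and the score by nobs brings the objective down to about
order one before handing it to fmin_slsqp):

    import numpy as np
    from scipy.optimize import fmin_slsqp

    rng = np.random.RandomState(0)
    nobs = 1000
    exog = np.column_stack([np.ones(nobs), rng.uniform(-1, 1, size=nobs)])
    endog = rng.poisson(np.exp(exog.dot([1.0, 0.5])))

    def neg_loglike(params):
        # Poisson negative loglikelihood (dropping the log y! constant),
        # divided by nobs so the function values are roughly of order one
        xb = exog.dot(params)
        return (np.exp(xb) - endog * xb).sum() / nobs

    def neg_score(params):
        # gradient of the scaled negative loglikelihood
        mu = np.exp(exog.dot(params))
        return exog.T.dot(mu - endog) / nobs

    params_opt = fmin_slsqp(neg_loglike, np.zeros(2), fprime=neg_score,
                            acc=1e-10, iter=1000)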

I was also surprised that it has only one precision parameter, acc.
I couldn't figure out exactly where it is used (maybe everywhere), but
we needed to make it smaller than the default.
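
For illustration, continuing the toy example above, tightening acc below
the 1e-6 default and checking the exit mode with full_output looks like
this (mode 8 is the linesearch failure we were hitting):

    out = fmin_slsqp(neg_loglike, np.zeros(2), fprime=neg_score,
                     acc=1e-12, iter=1000, full_output=True)
    params_opt, fval, n_iter, exit_mode, exit_msg = out
    # exit_mode 0 means successful convergence; exit_mode 8 is
    # "Positive directional derivative for linesearch"
    print(exit_mode, exit_msg)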

Josef

>
> --
> Pauli Virtanen


