[SciPy-User] scipy.optimize.root with method = 'lm' problem

William Heymann immudzen at gmail.com
Wed Dec 28 08:06:14 EST 2016


I just did a singular value decomposition of the Jacobian in both MATLAB and
NumPy, and here is what I see. All the values are positive, although some of
them are very small. I can also attach the Jacobian and value vector from
NumPy if that helps.
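
(For reference, this is roughly how I computed the singular values on the
NumPy side; 'jacobian.npy' is just a placeholder name for the dump I can
attach:)

import numpy as np

J = np.load('jacobian.npy')  # the 4374x10 Jacobian from one function call
s = np.linalg.svd(J, compute_uv=False)  # singular values only
for v in s:
    print('%.8e' % v)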

Right now the Python version takes 3 steps with the singular values
completely unchanged and the inputs to the function identical, and then on
the 4th step it makes a huge change that pushes most of the values to inf,
which then obviously fails.

What I can't figure out is why the inputs are binary-identical for 3
function calls and then the input suddenly changes so drastically.
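
(A minimal sketch of how I am checking for binary-identical inputs: hashing
the raw bytes of x in a thin wrapper around the residual I pass to
scipy.optimize.root:)

import hashlib

def traced_residual(x, *args):
    print(hashlib.md5(x.tobytes()).hexdigest())  # fingerprint of the exact bytes of x
    return residual(x, *args)  # 'residual' is my usual function, returning (value, Jacobian)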

The MATLAB version takes fairly reasonable step sizes, the values change at
each step, and it ends up converging.

MATLAB

1.629938107547077e+00
6.451465786563481e-01
2.305477859914866e-01
6.833480272247973e-02
1.137797505806489e-02
7.131295727016028e-04
3.071566607427812e-06
1.959945932974291e-08
1.709619125695288e-11
9.452800099857666e-12

Python

1.62993811e+00
6.45146579e-01
2.30547786e-01
6.83348026e-02
1.13779751e-02
7.13129573e-04
3.07156658e-06
1.95996695e-08
1.76904080e-11
1.18986429e-11
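
(Per your suggestion, I am dumping the Jacobian at every call with a small
wrapper; a rough sketch, assuming as before that 'residual' returns the
value vector and the Jacobian since jac=True:)

import numpy as np

ncalls = [0]

def dumping_residual(x, *args):
    r, J = residual(x, *args)
    np.save('jac_%03d.npy' % ncalls[0], J)  # one file per evaluation
    ncalls[0] += 1
    return r, J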

On Tue, Dec 27, 2016 at 4:14 PM, Matthieu Brucher <
matthieu.brucher at gmail.com> wrote:

> Hi,
>
> There isn't much to advise without any data or a way to reproduce this.
> Since a small damping value is added to J^T J to keep it positive definite
> in the algorithm, the fact that it diverges seems to indicate that the
> eigenvalues of your system are not staying positive (or it is a bug, but
> for that we will need more data to reproduce the issue).
> Can you output your Jacobian at each stage and check that it doesn't show
> that kind of behavior?
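>
> (To be precise about what I mean, a minimal sketch of the damped
> Levenberg-Marquardt step; the actual MINPACK code uses a scaled variant,
> so treat this as illustration only:)
>
> import numpy as np
>
> def lm_step(J, r, lam):
>     # Solve (J^T J + lam*I) delta = -J^T r; the damping lam keeps the
>     # matrix positive definite even when J^T J is nearly singular.
>     A = J.T @ J + lam * np.eye(J.shape[1])
>     return np.linalg.solve(A, -J.T @ r)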
>
> Cheers,
>
> Matthieu
>
>
> 2016-12-27 15:13 GMT+01:00 William Heymann <immudzen at gmail.com>:
>
>> I am using scipy.optimize.root with method='lm' and jac=True, and the
>> descent algorithm looks like it is taking steps that are far too large.
>>
>> I have 10 variables; my objective function returns a vector of 4374
>> outputs and a 4374x10 Jacobian. The initial Jacobian matches the one I
>> generate when I use lsqnonlin in MATLAB, and that works just fine. With
>> SciPy, instead, I get no changes at all for the first two iterations and
>> then suddenly a huge jump to basically the upper end of the double range,
>> which seems pretty extreme.
>>
>> I would really like to make this work with SciPy; I have my function
>> along with the exact derivatives for the Jacobian, computed with AD. I
>> have also looked at the singular values of the Jacobian: they are all
>> positive and non-zero, so J^T J is positive definite and the system
>> should at least be locally convex.
>>
>> This is the call I make to SciPy, and it seems reasonable:
>>
>> sol = scipy.optimize.root(residual, start, args=(large, small, weight,
>> data), method='lm', jac=True, options={'ftol':1e-6, 'xtol':1e-6})
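>>
>> (For what it's worth, method='lm' also accepts MINPACK's 'factor' option,
>> which bounds the size of the first step (default 100); a sketch of the
>> same call with a tighter bound, in case the initial step is the problem:)
>>
>> sol = scipy.optimize.root(residual, start, args=(large, small, weight,
>> data), method='lm', jac=True, options={'ftol': 1e-6, 'xtol': 1e-6,
>> 'factor': 1.0})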
>>
>> Thank you
>>
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://blog.audio-tk.com/
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>