[SciPy-User] SciPy leastsq behavior

Matt Newville newville at cars.uchicago.edu
Mon Oct 28 10:48:50 EDT 2013


Hi Francois-Xavier,

I'm not sure I can answer all your questions correctly but....

On Wed, Oct 16, 2013 at 12:24 PM, François-Xavier Thomas
<francois-xavier.thomas at airinov.fr> wrote:
> Hi all,
>
> I am trying to debug precision errors in an optimization process using
> SciPy's leastsq method.
>
> Attached is a log of the start of the optimization process, showing
> mean errors and differences between consecutive X values for each f(X)
> call.
>
>   * Why is f(X) computed even though X hasn't moved at all? Did I miss
> something obvious?

When no analytic Jacobian is supplied, leastsq estimates one by forward
differences: the objective function is called once per parameter, with
that parameter perturbed by a step on the order of the square root of
machine precision.  Depending on how you print the values, it may look
as if X isn't changing at all.  Providing a small script that
reproduces the problem is always helpful here -- we don't know where
the output you've sent is coming from.
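
To see this for yourself, print X at full precision inside the
objective function.  Here is a minimal sketch with a made-up
exponential model (the model and all names here are hypothetical, not
your actual problem):

import numpy as np
from scipy.optimize import leastsq

# Toy data for y = a*exp(-b*t) + c, just for illustration.
t = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * t) + 0.5

def residuals(x, t, y):
    # During the finite-difference Jacobian you should see one
    # component of x at a time change in roughly the 8th
    # significant digit.
    print(np.array2string(x, precision=17))
    return x[0] * np.exp(-x[1] * t) + x[2] - y

leastsq(residuals, np.array([2.0, 1.0, 0.0]), args=(t, y))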

It appears your initial values are all close to zero -- is that
correct?  Because the finite-difference steps scale with the magnitude
of each parameter, starting everything at (or very near) zero can give
a poorly scaled Jacobian.  What if you change the initial values to
small random numbers?
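
If it helps, a sketch of what I mean (the magnitudes here are just
borrowed from the diag list in your next question):

import numpy as np

np.random.seed(0)
# Rough per-parameter magnitudes, one entry per variable.
scales = np.array([0.1, 0.1, 0.1, 1000, 500, 500,
                   0.1, 0.1, 0.01, 0.01, 0.01, 0.01])
# A small random starting point at roughly the right scale for each.
x0 = 0.1 * scales * np.random.randn(12)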

>   * I set the "diag" parameter to [0.1, 0.1, 0.1, 1000, 500, 500, 0.1,
> 0.1, 0.01, 0.01, 0.01, 0.01], which roughly corresponds to the order
> of magnitude of my X values. Should that help the optimizer?

I think so, but it should not be necessary in most cases: when diag is
omitted, MINPACK rescales the variables itself using the column norms
of the Jacobian.  Have you tried leaving it out?
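
For comparison, the mechanics look like this (a toy two-parameter fit,
only to show the keyword -- I'm not suggesting these particular scale
factors):

import numpy as np
from scipy.optimize import leastsq

t = np.linspace(0.0, 1.0, 20)
y = 3000.0 * t + 0.02

def residuals(x, t, y):
    return x[0] * t + x[1] - y

x0 = [1000.0, 0.01]

# With explicit per-variable scale factors...
fit_scaled, ier = leastsq(residuals, x0, args=(t, y), diag=[1000.0, 0.01])
# ...and with MINPACK's automatic scaling (diag omitted):
fit_auto, ier = leastsq(residuals, x0, args=(t, y))
print(fit_scaled, fit_auto)

Running your real problem both ways is the easiest test of whether
diag is actually helping.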

>   * Can leastsq work well on higher floating point precisions (i.e.
> numpy.float128)? What is the underlying precision of the MINPACK
> libraries?

I believe the answer is no: the underlying MINPACK routines (lmdif and
lmder) are compiled in double precision, and leastsq converts its
inputs to float64 before handing them to the Fortran code, so passing
numpy.float128 values won't buy you any extra precision.
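
If you want to check, look at the dtype your objective function
actually receives -- if I understand the wrappers correctly, the calls
made during the optimization itself arrive as float64 (this also
assumes your platform provides numpy.float128 at all):

import numpy as np
from scipy.optimize import leastsq

def residuals(x):
    # The MINPACK wrapper hands the objective double-precision
    # arrays, even with a float128 starting point.
    print(x.dtype)
    return x - np.array([1.0, 2.0])

x0 = np.array([0.0, 0.0], dtype=np.float128)
leastsq(residuals, x0)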

Cheers,

--Matt Newville


