[SciPy-User] SciPy leastsq behavior

François-Xavier Thomas francois-xavier.thomas at airinov.fr
Wed Oct 16 13:24:42 EDT 2013


Hi all,

I am trying to debug precision errors in an optimization process using
SciPy's leastsq function.

Attached is a log of the start of the optimization process, showing
mean errors and differences between consecutive X values for each f(X)
call.
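
For reference, the instrumentation that produces that log is roughly
equivalent to the sketch below; residuals and x0 are simplified
placeholders standing in for my actual model and starting point:

import numpy as np
from scipy.optimize import leastsq

def residuals(x):
    # Placeholder: replace with the real residual computation.
    return x - np.arange(1.0, 13.0)

x0 = np.zeros(12)  # placeholder starting point (12 parameters, as in my case)

_last_x = [None]

def logged_residuals(x):
    # Record the mean error and the step taken since the previous call.
    r = residuals(x)
    if _last_x[0] is not None:
        dx = np.abs(x - _last_x[0])
        print("mean |error| = %g, max |dX| = %g" % (np.mean(np.abs(r)), dx.max()))
    _last_x[0] = np.array(x, copy=True)
    return r

x_opt, ier = leastsq(logged_residuals, x0)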

  * Why is f(X) evaluated again even though X hasn't moved at all? Did
I miss something obvious?
  * I set the "diag" parameter to [0.1, 0.1, 0.1, 1000, 500, 500, 0.1,
0.1, 0.01, 0.01, 0.01, 0.01], which roughly corresponds to the order
of magnitude of my X values (see the call sketch after this list).
Should that help the optimizer?
  * Can leastsq work with higher floating-point precision (e.g.
numpy.float128)? What is the underlying precision of the MINPACK
routines?
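
For context, here is a simplified sketch of the kind of call I mean.
The residual function and starting point below are placeholders, not
my real model; only the diag values are the ones I actually pass:

import numpy as np
from scipy.optimize import leastsq

def residuals(x):
    # Placeholder for the real residual function (12 parameters).
    return x - np.arange(1.0, 13.0)

x0 = np.zeros(12)  # placeholder starting estimate

# Per-variable scale factors, roughly the order of magnitude of each X value.
diag = [0.1, 0.1, 0.1, 1000, 500, 500, 0.1, 0.1, 0.01, 0.01, 0.01, 0.01]

x_opt, cov_x, infodict, mesg, ier = leastsq(
    residuals, x0, diag=diag, full_output=True)

print(mesg)
print("function evaluations:", infodict["nfev"])

full_output=True is just there so I can check infodict["nfev"] against
my own log of f(X) calls.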

I realise you might not know the answers to all of these questions,
but any hint or caveat you might have from your own experience would
be useful!

Thanks!
François-Xavier

-- 
François-Xavier Thomas -- Image Processing

AIRINOV
Incubateur PRINE
110 rue des Poissonniers -  48 voie CI18
75899 Paris Cedex 18
09 72 39 75 82
www.airinov.fr
-------------- next part --------------
Attachment: optimization.log (application/octet-stream, 6970 bytes)
URL: <http://mail.scipy.org/pipermail/scipy-user/attachments/20131016/3ee4d02d/attachment.obj>

