[SciPy-User] Normalization for optimization in python

Daπid davidmenhur at gmail.com
Mon Apr 27 18:17:57 EDT 2015


On 27 April 2015 at 14:40, cjaramillo <cjaramillo at gradcenter.cuny.edu>
wrote:

> Thanks, David. However, you can still run into biased convergences even
> when all variables are in the same units. For example, when you have one
> parameter that translates in the range from 0 to 10 [mm] and others that
> translate in the hundreds of [mm].


That is still not a problem: 10 mm and 500 mm are, more or less, the same
order of magnitude. The whole purpose of scaling is to minimise numerical
inaccuracies when adding or subtracting numbers of very different sizes.
But since the relative resolution of a double is about 1e-16, you need
roughly nine orders of magnitude of difference before the accuracy degrades
to that of a perfectly scaled single-precision float (about 1e-7) (waving
my hands violently here).
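As a rough illustration (a minimal sketch; 1e9 and np.pi are just made-up
values chosen to make the effect visible):

    import numpy as np

    # Relative resolution of doubles vs single-precision floats:
    print(np.finfo(np.float64).eps)  # ~2.2e-16
    print(np.finfo(np.float32).eps)  # ~1.2e-7

    # Mix magnitudes nine orders apart and a double loses the low
    # bits of the smaller quantity, leaving single-precision-like
    # relative error:
    big, small = 1.0e9, np.pi
    recovered = (big + small) - big
    print(abs(recovered - small) / small)  # on the order of 1e-8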

Note that, due to the nature of optimisation problems, the numerical noise
in most "simple" objective functions gets swamped by the procedure itself,
unless you need an extremely accurate result on a finely crafted function.
You can check this in your own case by comparing runs with and without the
scaling; for simplicity, leave out the gradient.
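Something along these lines, say (a sketch with a made-up two-parameter
objective, one parameter near 5 mm and one near 500 mm; the characteristic
scales are rough guesses you would supply yourself):

    import numpy as np
    from scipy.optimize import minimize

    # Made-up objective: minimum at a = 5 (mm), b = 500 (mm),
    # with a mild coupling between the two parameters.
    def f(p):
        a, b = p
        return (a - 5.0)**2 + (b / 100.0 - 5.0)**2 + 0.1 * (a - b / 100.0)**2

    scales = np.array([10.0, 1000.0])  # rough characteristic sizes

    def f_scaled(q):
        # The optimizer works with O(1) variables q = x / scales.
        return f(q * scales)

    x0 = np.array([1.0, 100.0])
    res_raw = minimize(f, x0)                  # no gradient supplied
    res_scl = minimize(f_scaled, x0 / scales)
    print(res_raw.x)           # unscaled result
    print(res_scl.x * scales)  # scaled result, mapped back to mm

On a benign function like this, the two runs should agree to many digits,
which is the point above.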

> I'm searching on the web more about this, but I would appreciate it if
> you could refer me to a source where they employ unit-based scaling
> (normalization) of Jacobian matrices for their use within optimization.

This is usually called "natural units" or "nondimensionalisation"; you will
perhaps find more relevant hits in the context of differential equations. I
can't think of a reference off the top of my head, but this should be
covered in most numerical analysis texts.
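For the Jacobian specifically, the rescaling is just the chain rule: if the
physical parameters are x = S q, with S a diagonal matrix of characteristic
scales, then J_q = J_x S, i.e. each column of the Jacobian is multiplied by
the corresponding scale. A toy sketch with made-up numbers:

    import numpy as np

    scales = np.array([10.0, 1000.0])   # hypothetical characteristic sizes
    J_x = np.array([[2.0, 0.3],
                    [0.1, 4.0]])        # made-up Jacobian in physical units

    # Chain rule: d f_i / d q_j = (d f_i / d x_j) * scales[j],
    # so the scaling multiplies the Jacobian columns.
    J_q = J_x * scales                  # broadcasting over columns
    print(J_q)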


/David.