[Tutor] constrained least square fitting using python

eryksun eryksun at gmail.com
Sat Aug 10 09:18:54 CEST 2013


On Fri, Aug 9, 2013 at 9:12 PM, Oscar Benjamin
<oscar.j.benjamin at gmail.com> wrote:
>
> Which of the two solutions is more accurate in terms of satisfying the
> constraint and minimising the objective function? Do they take similar
> times to run?

In my example, minimize() required 28 function evaluations, so your
solution is more efficient.

In terms of minimizing the objective and satisfying the constraint, the
results are very close:

    >>> objective(pcons, ymeas, X)   # minimize
    2036.6364327061785
    >>> np.sum(pcons)
    -2.0816681711721685e-17

    >>> objective(plsqcon, ymeas, X) # analytic
    2036.6364327060171
    >>> np.sum(plsqcon)
    0.0
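
For anyone following along, here's a rough sketch of the minimize() side of
that comparison. The actual ymeas, X and objective() aren't quoted in this
message, so the data below are made up and the objective is assumed to be a
plain sum of squared residuals, with the constraint sum(p) == 0:

    import numpy as np
    from scipy.optimize import minimize

    # Made-up data standing in for the ymeas and X from earlier in the thread.
    np.random.seed(0)
    X = np.random.randn(100, 4)                      # design matrix
    ymeas = np.dot(X, [1.5, -0.5, 2.0, -3.0]) + np.random.randn(100)

    def objective(p, ymeas, X):
        # Assumed form: sum of squared residuals of the linear model y = X.p
        r = ymeas - np.dot(X, p)
        return np.dot(r, r)

    # Equality constraint: the fitted parameters must sum to zero.
    cons = [{'type': 'eq', 'fun': lambda p: np.sum(p)}]
    res = minimize(objective, np.zeros(X.shape[1]), args=(ymeas, X),
                   method='SLSQP', constraints=cons)
    pcons = res.x
    print(objective(pcons, ymeas, X), np.sum(pcons))
    print(res.nfev)    # SLSQP's count of objective evaluations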

I think SLSQP works by repeatedly minimizing a quadratic approximation of
the objective subject to linearized constraints (sequential quadratic
programming), with each subproblem solved as a least-squares problem. But
it's not something I know a lot about. A one-solve version of that idea for
this particular problem is sketched below, after the links. Here's the
paper and links to the Fortran source code:

Dieter Kraft, "Algorithm 733: TOMP–Fortran modules for optimal control
calculations," ACM Transactions on Mathematical Software, vol. 20, no.
3, pp. 262-281 (1994)
http://doi.acm.org/10.1145/192115.192124

http://www.netlib.org/toms/733
https://github.com/scipy/scipy/blob/master/scipy/optimize/slsqp/slsqp_optmz.f
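
To make the "quadratic subproblem" idea concrete: for a least-squares
objective the quadratic model is exact and the sum-to-zero constraint is
already linear, so a single linear solve of the Lagrange (KKT) system gives
the constrained minimum directly. A sketch, again with made-up data:

    import numpy as np

    np.random.seed(0)                    # same made-up data as in the sketch above
    X = np.random.randn(100, 4)
    ymeas = np.dot(X, [1.5, -0.5, 2.0, -3.0]) + np.random.randn(100)

    # Minimise ||ymeas - X.p||^2 subject to sum(p) == 0 by solving
    #     [2 X'X  1] [p     ]   [2 X'ymeas]
    #     [1'     0] [lambda] = [0        ]
    n = X.shape[1]
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = 2 * np.dot(X.T, X)
    K[:n, n] = 1.0
    K[n, :n] = 1.0
    rhs = np.append(2 * np.dot(X.T, ymeas), 0.0)
    plsqcon = np.linalg.solve(K, rhs)[:n]
    print(np.sum(plsqcon))               # zero up to rounding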

> This probably is a good use for np.matrix (I guess you generally
> dislike it for the same reasons I do: explicit is better than implicit
> etc.).

I'm not against np.matrix in principle. I just didn't want to muddy the water.
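
For what it's worth, the notational difference is just that np.matrix
overloads * as matrix multiplication (and adds .I for the inverse), so
formula-heavy expressions read closer to the maths. Using the plain
unconstrained normal equations as a stand-in:

    import numpy as np

    np.random.seed(0)
    X = np.random.randn(100, 4)
    y = np.dot(X, [1.5, -0.5, 2.0, -3.0]) + np.random.randn(100)

    # With plain ndarrays, matrix products are spelled out with np.dot():
    p_arr = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))

    # With np.matrix, * is matrix multiplication and .I is the inverse, so
    # the code mirrors the textbook formula (X'X)^-1 X'y:
    Xm = np.matrix(X)
    ym = np.matrix(y).T
    p_mat = (Xm.T * Xm).I * Xm.T * ym

    print(np.allclose(p_arr, np.asarray(p_mat).ravel()))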

