[Tutor] constrained least square fitting using python

eryksun eryksun at gmail.com
Sat Aug 10 01:00:23 CEST 2013


On Fri, Aug 9, 2013 at 2:40 PM, Oscar Benjamin
<oscar.j.benjamin at gmail.com> wrote:
>
>     E[y] = X p
>
> is the estimate of y given X and p. We want to choose p_lsq to minimise
>
>     res = sum((E[y] - y)^2) = sum((X p - y)^2)
>
> The optimal solution is given by the exact solution of the linear system
>
>     X' X p_lsq = X' y
>
> where X' means the transpose of X.

The pseudoinverse I mentioned is (X' X)^-1 X'. Oscar's way, using
linalg.solve on the above equation, behaves better numerically because
it factorizes X' X instead of explicitly forming the inverse.
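
To make the comparison concrete, here's a minimal sketch of both
routes on a random stand-in system (X and y below are made up, not
the data from the thread):

    import numpy as np

    np.random.seed(0)
    X = np.random.randn(50, 3)   # stand-in design matrix
    y = np.random.randn(50)      # stand-in measurements

    # Explicit pseudoinverse (X' X)^-1 X': form the inverse, multiply.
    p_pinv = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y)

    # Oscar's route: factorize X' X and solve, no explicit inverse.
    p_solve = np.linalg.solve(np.dot(X.T, X), np.dot(X.T, y))

    print(np.allclose(p_pinv, p_solve))  # True when X' X is well-conditioned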

> The least-squares choice p_lsqcon that satisfies the linear constraint
>
>     A p = a
>
> is given by
>
>     p_lsqcon = p_lsq - Z A' [A Z A']^(-1) (A p_lsq - a)
>
> where Z is the inverse of X' X. In general A is a matrix but in your
> case it is a 1xm vector of all 1s and a is just 0. The expressions
> above may look complicated but when you actually work it out they're
> not so bad.
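
As a sketch, here is that recipe rolled into one self-contained
function (the name and signature are mine; it uses solve for the
[A Z A'] system too, on the same numerical grounds as above):

    import numpy as np

    def constrained_lstsq(X, y, A, a):
        # Unconstrained fit: solve X' X p = X' y.
        Q = np.dot(X.T, X)
        plsq = np.linalg.solve(Q, np.dot(X.T, y))
        # Correction pulling plsq onto the constraint A p = a.
        Z = np.linalg.inv(Q)
        ZAt = np.dot(Z, A.T)                    # Z A'
        lam = np.linalg.solve(np.dot(A, ZAt),   # [A Z A'] lam = A plsq - a
                              np.dot(A, plsq) - a)
        return plsq - np.dot(ZAt, lam)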

The result from the analytic solution is very close to what minimize()
got in my example:

    >>> Q = np.dot(X.T, X)                             # X' X
    >>> plsq = np.linalg.solve(Q, np.dot(X.T, ymeas))  # X' X p = X' y
    >>> plsq
    array([-1.96260467,  2.19944918, -0.75938176])

    >>> Z = np.linalg.inv(Q)    # Z = (X' X)^-1
    >>> A = np.ones((1, 3))     # constraint: coefficients sum to 0

I'm tempted to switch to np.matrix here, which supports infix matrix
products and exponents. But I'll just break this mess up into temp
variables:

    >>> t1 = np.dot(Z, A.T)                            # Z A'
    >>> t2 = np.linalg.inv(np.dot(np.dot(A, Z), A.T))  # [A Z A']^-1
    >>> t3 = np.dot(A, plsq)                           # A p_lsq (a is 0)
    >>> plsqcon = plsq - np.dot(np.dot(t1, t2), t3)

    >>> plsqcon
    array([-1.9447156 ,  1.95757573, -0.01286013])
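
As an aside, newer Python and NumPy (3.5+ and 1.10+) have the infix
@ operator for matrix products, so there's no need for np.matrix and
the mess above collapses to one line:

    plsqcon = plsq - Z @ A.T @ np.linalg.inv(A @ Z @ A.T) @ (A @ plsq)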

My solution from minimize():

    >>> pcons
    array([-1.94471575,  1.95757699, -0.01286124])
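
For reference, the minimize() call behind pcons was along these lines
(a reconstruction, not the exact code from earlier in the thread; X,
ymeas and A as above):

    from scipy.optimize import minimize

    def residual(p):
        err = np.dot(X, p) - ymeas
        return np.dot(err, err)        # sum of squared residuals

    cons = {'type': 'eq', 'fun': lambda p: np.dot(A, p)}  # enforces A p = 0
    pcons = minimize(residual, np.zeros(3), method='SLSQP',
                     constraints=cons).x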

