[Numpy-discussion] polyfit with fixed points

Charles R Harris charlesr.harris at gmail.com
Tue Mar 5 08:41:55 EST 2013


On Tue, Mar 5, 2013 at 6:23 AM, Charles R Harris
<charlesr.harris at gmail.com>wrote:

>
>
> On Tue, Mar 5, 2013 at 12:41 AM, Jaime Fernández del Río <
> jaime.frio at gmail.com> wrote:
>
>> On Mon, Mar 4, 2013 at 8:37 PM, Charles R Harris <
>> charlesr.harris at gmail.com> wrote:
>>
>>>
>>> There are actually seven versions of polynomial fit, two for the usual
>>> polynomial basis, and one each for Legendre, Chebyshev, Hermite, Hermite_e,
>>> and Laguerre ;)
>>>
>>
>> Correct me if I am wrong, but the fitted function is the same regardless
>> of the polynomial basis used. I don't know if there can be numerical
>> stability issues, but chebfit(x, y, n) returns the same as
>> poly2cheb(polyfit(x, y, n)).
>>
>> In any case, with all the already existing support for these special
>> polynomials, it wouldn't be too hard to set the problem up to calculate the
>> right coefficients directly for each case.
>>
>>
>>> How do you propose to implement it? I think Lagrange multipliers are
>>> overkill; I'd rather see either the weights approach (approximate) or a
>>> change of variable -- a permutation in this case -- followed by qr and
>>> lstsq.
>>>
>>
>> The weights method is already in place, but I find it rather inelegant
>> and unsatisfactory as a solution to this problem. But if it is deemed
>> sufficient, then there is of course no need to go any further.
>>
>> I hadn't thought of any other way than using Lagrange multipliers, but
>> looking at it in more detail, I am not sure it will be possible to
>> formulate it in a manner that can be fed to lstsq, as polyfit does today.
>> And if it can't, it probably wouldn't make much sense to have two different
>> methods which cannot produce the same full output running under the same
>> hood.
>>
>> I can't figure out your "change of variable" method from the succinct
>> description, could you elaborate a little more?
>>
>
> I think the place to add this is to lstsq as linear constraints. That is,
> the coefficients must satisfy B * c = y_c for some set of equations B. In
> the polynomial case the rows of B would be the powers of x at the points
> you want to constrain. Then do an svd on B, B = u * d * v. Apply v to the
> design matrix of the unconstrained points, A' = A * v.T, so that B' becomes
> u * d. The coefficients are now replaced by new variables c', with the
> constraints involving only the first few columns. If there are, say, 2
> constraints, the nonzero block of u * d will be 2x2. Solve that equation
> for the first two transformed coefficients, then multiply the first two
> columns of the design matrix A' by the result and move them to the rhs,
> i.e.,
>
>     y = y - A'[:, :2] * c'[:2]
>
> then solve the usual least squares problem with
>
>     A'[:, 2:] * c'[2:] = y
>
> to get the rest of the transformed coefficients c'. Put the coefficients
> all together and multiply by v^T to get
>
>     c = v^T * c'
>
>
To be clear, the key step is the substitution c = v^T * c', which turns the
constraints into conditions on the first few components of c'.
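
In code, the whole recipe might look roughly like this (just a sketch against
current numpy, not a drop-in polyfit replacement, and the helper name is made
up for illustration; it assumes the numpy.polynomial convention of
coefficients ordered from low to high degree, and constraint points that are
distinct and fewer than deg + 1):

    import numpy as np
    from numpy.polynomial import polynomial as P

    def polyfit_constrained(x, y, deg, xc, yc):
        """Least squares fit of a degree-deg polynomial to (x, y) that
        passes exactly through the points (xc, yc)."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        xc = np.atleast_1d(np.asarray(xc, dtype=float))
        yc = np.atleast_1d(np.asarray(yc, dtype=float))
        k = len(xc)

        # Design matrix A for the data and constraint matrix B; the rows
        # are powers of x, so B * c = yc pins the fit at the points xc.
        A = P.polyvander(x, deg)
        B = P.polyvander(xc, deg)

        # B = u * d * vt; numpy's vt plays the role of v above, so the
        # change of variables is c' = vt * c and c = vt.T * c'.
        u, d, vt = np.linalg.svd(B, full_matrices=True)

        # In the new variables only the first k components of c' enter
        # the constraints: (u * diag(d)) * c'[:k] = yc.
        cp = np.empty(deg + 1)
        cp[:k] = np.linalg.solve(u * d, yc)

        # Transform the design matrix, move the fixed columns to the
        # rhs, and solve ordinary least squares for what is left.
        Ap = A @ vt.T
        rhs = y - Ap[:, :k] @ cp[:k]
        cp[k:] = np.linalg.lstsq(Ap[:, k:], rhs, rcond=None)[0]

        # Back to the original coefficients: c = v^T * c'.
        return vt.T @ cp

A quick sanity check is to fit some noisy data and evaluate the result at the
constraint points: P.polyval(xc, polyfit_constrained(x, y, 3, xc, yc)) should
reproduce yc to round-off.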
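
And as for the point earlier in the thread about the fitted function being
the same in any basis, a quick numerical check (again just a sketch, using
the numpy.polynomial versions so the coefficients run from low to high
degree):

    import numpy as np
    from numpy.polynomial import polynomial as P, chebyshev as C

    x = np.linspace(-1, 1, 20)
    y = np.sin(x) + 0.1 * np.random.RandomState(0).randn(20)

    c_cheb = C.chebfit(x, y, 5)   # fit directly in the Chebyshev basis
    c_poly = P.polyfit(x, y, 5)   # fit in the ordinary power basis
    print(np.allclose(C.poly2cheb(c_poly), c_cheb))   # True, up to round-off

For high degrees or badly scaled x the power-basis fit can lose accuracy, and
the two may then start to drift apart.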

Chuck