[SciPy-User] estimation problem

josef.pktd at gmail.com
Fri Jul 15 14:31:42 EDT 2011


On Fri, Jul 15, 2011 at 2:12 PM, Neal Becker <ndbecker2 at gmail.com> wrote:
> josef.pktd at gmail.com wrote:
>
>> On Fri, Jul 15, 2011 at 1:39 PM, Neal Becker <ndbecker2 at gmail.com> wrote:
>>> josef.pktd at gmail.com wrote:
>>>
>>>> On Fri, Jul 15, 2011 at 12:58 PM, Neal Becker <ndbecker2 at gmail.com> wrote:
>>>>> I have a known signal (vector) 'x'.  I receive a vector 'y'
>>>>>
>>>>> y = F(k x) + n
>>>>>
>>>>> where n is Gaussian noise, and k is an unknown gain parameter.
>>>>>
>>>>> I want to estimate k.
>>>>>
>>>>> F is a known function (nonlinear, memoryless).
>>>>>
>>>>> What might be a good approach to try?  I'd like this to be an 'online'
>>>>> approach - that is, I provide batches of training vectors (x, n), and the
>>>>> estimator will improve the estimate (hopefully) as more data is supplied.
>>>>
>>>> scipy.optimize.curve_fit
>>>>
>>>> I would reestimate with the entire sample after a batch arrives using
>>>> the old estimate as a starting value.
>>>>
>>>> There might be shortcuts reusing and updating the Jacobian and
>>>> Hessian, but I don't know of anything that could be used directly. (I
>>>> don't have much idea about non-linear Kalman filters and whether they
>>>> would help in this case.)
>>>>
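
(For concreteness, a rough sketch of that batch-by-batch refit with a warm
start; F, the batches and the noise level here are just made-up stand-ins:)

import numpy as np
from scipy.optimize import curve_fit

def F(z):                          # stand-in for the known memoryless nonlinearity
    return np.tanh(z)

def model(x, k):
    return F(k * x)

np.random.seed(0)
k_true, k_est = 0.7, 1.0           # k_est doubles as the starting value
x_all = np.empty(0)
y_all = np.empty(0)
for _ in range(5):                 # five incoming batches of training data
    x_batch = np.random.randn(100)
    y_batch = model(x_batch, k_true) + 0.05 * np.random.randn(100)
    x_all = np.concatenate([x_all, x_batch])
    y_all = np.concatenate([y_all, y_batch])
    popt, pcov = curve_fit(model, x_all, y_all, p0=[k_est])
    k_est = popt[0]                # warm start for the next refit
    print(k_est)
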
>>> In my case, x, y, n are complex.  I guess I need to handle that myself
>>> (somehow).
>>
>> I guess curve_fit won't help then.
>> optimize.leastsq should still work if the function returns a 1d real array,
>> abs(y - F(x)), so that (abs(y - F(x))**2).sum() is the real-valued loss function.
>>
>> If k is also complex, then I would think that it will have to be
>> separated into real and imaginary parts as separate parameters.
>>
>> If you need the extra results, like covariance matrix of the estimate,
>> then I would just copy the parts from curve_fit.
>>
>> (I don't think I have seen complex Gaussian noise, n, before.)
>>
>> Josef
>>
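
(A rough sketch of what that leastsq version could look like; F, the data and
the noise below are made up just to have something runnable:)

import numpy as np
from scipy.optimize import leastsq

def F(z):                                # stand-in for the known nonlinearity
    return z * np.exp(1j * 0.1 * np.abs(z))

def residuals(params, x, y):
    k = params[0] + 1j * params[1]       # complex gain as two real parameters
    return np.abs(y - F(k * x))          # 1d real array; leastsq minimizes its sum of squares

np.random.seed(0)
x = np.random.randn(200) + 1j * np.random.randn(200)
k_true = 0.8 - 0.3j
noise = 0.05 * (np.random.randn(200) + 1j * np.random.randn(200))
y = F(k_true * x) + noise

popt, ier = leastsq(residuals, [1.0, 0.0], args=(x, y))
print(popt)                              # estimated [real(k), imag(k)]
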
>
> What I tried that seems to work is:
>
> def func (x, k):
>  return complex_to_real (complex_func (real_to_complex (x * k)))
>
> popt, pcov = curve_fit (func, complex_to_real(x), complex_to_real (y))
>
> where complex_to_real: interprets a complex vector as alternating real/imag
> parts, and real_to_complex: interprets alternating entries in a float vector
> as the real/imag parts of a complex vector.
>
> Does this seem like a valid use of curve_fit?

I'm not very good at complex algebra without sitting down and going
through the details.

Your complex_to_real(x) and complex_to_real(y) now have twice the length
of the original x and y.

Does your func also return twice as many values? (After more thought:
yes, it has to, since this is curve_fit and not leastsq.)
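
For concreteness, this is roughly how I read your setup (complex_func and the
data here are just placeholders, and I'm assuming the helpers do simple
interleaving):

import numpy as np
from scipy.optimize import curve_fit

def complex_to_real(z):          # complex vector of length N -> interleaved float vector of length 2N
    return np.column_stack([z.real, z.imag]).ravel()

def real_to_complex(r):          # alternating real/imag entries -> complex vector
    return r[0::2] + 1j * r[1::2]

def complex_func(z):             # placeholder for the known nonlinearity F
    return z * np.exp(1j * 0.1 * np.abs(z))

def func(x, k):                  # x is already the interleaved (length 2N) float vector
    return complex_to_real(complex_func(real_to_complex(x * k)))

np.random.seed(0)
xc = np.random.randn(100) + 1j * np.random.randn(100)
k_true = 0.8                     # real gain, as in your snippet
yc = complex_func(k_true * xc) + 0.05 * (np.random.randn(100) + 1j * np.random.randn(100))

popt, pcov = curve_fit(func, complex_to_real(xc), complex_to_real(yc))
print(popt)                      # xdata, ydata and func's return all have length 2N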

Then you are using a different form of the objective function, the sum of
squared real parts plus squared imaginary parts instead of the squared
modulus of the complex residual:
(real(y) - real(F(x)))**2 + (imag(y) - imag(F(x)))**2, instead of
(y - F(x)) * (y - F(x)).conj()  ?

I don't know if this matters.
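
A quick toy check of the two, with random numbers standing in for the
residual y - F(x):

import numpy as np

np.random.seed(0)
r = np.random.randn(10) + 1j * np.random.randn(10)   # toy complex residual y - F(x)

split_sum = np.sum(r.real**2 + r.imag**2)            # squared real parts + squared imaginary parts
complex_sum = np.sum(r * r.conj()).real              # sum of (y - F(x)) * (y - F(x)).conj()
print(split_sum, complex_sum)                        # agree, since abs(z)**2 == z.real**2 + z.imag**2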

Josef



