[SciPy-Dev] Accuracy of single-precision FFT

Pauli Virtanen pav at iki.fi
Thu Jun 24 12:22:13 EDT 2010


Thu, 24 Jun 2010 22:31:52 +0800, Ralf Gommers wrote:
[clip: significant errors in float32 fft in Scipy]
>>    1 0.0
>>   17 4.76837e-07
>>   37 2.98023e-06
>>   97 0.000104427
>>  313 0.000443935
>>  701 0.00112867
>> 1447 0.00620008
>> 2011 0.0138307
>> 3469 0.16958
>>
>> So even decimal=4 would fail for 97 already. For larger primes the FFT
>> should be slower but not less accurate, right?
>
> Any opinion on this? Is it easily fixable? This is the last thing
> holding up 0.8.0 I think, can we mark it knownfail for that or does
> anyone think it's important enough to delay the release for?

IIRC, single precision (float32) FFT is a new feature in Scipy 0.8 and 
was not present in earlier releases. I think Numpy and previous versions 
of Scipy always did the FFT in double precision (check this!).
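
For concreteness, a comparison of the kind Ralf quotes above could be 
done along these lines (just a rough sketch, not the actual test code; 
the random input and the double-precision reference are my choice):

    import numpy as np
    from scipy import fftpack

    for n in (17, 97, 701, 3469):
        x = np.random.rand(n).astype(np.float32)
        y32 = fftpack.fft(x)                    # new in 0.8: stays single precision
        y64 = np.fft.fft(x.astype(np.float64))  # double-precision reference
        err = np.max(np.abs(y32 - y64)) / np.max(np.abs(y64))
        print(n, y32.dtype, err)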

There are now two possibilities:

1) the single precision FFT in Scipy works incorrectly,

2) the single precision FFT in Scipy works correctly, but the precision 
   unavoidably sucks for large arrays.

I guess (2) is more likely here.

Moreover, I would guess the error is largest in the high-frequency 
components, but remains low at low frequencies.
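
This guess could be checked directly from the per-bin error, e.g. 
(again only a sketch, with an arbitrary length and random input):

    import numpy as np
    from scipy import fftpack

    n = 3469
    x = np.random.rand(n).astype(np.float32)
    err = np.abs(fftpack.fft(x) - np.fft.fft(x.astype(np.float64)))
    # err[k] is the absolute error in bin k; if the guess holds, it stays
    # small near k = 0 and grows towards the middle of the spectrum (the
    # highest frequencies for real input).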

    ***

The main question for the release, IMO, is:

	Should the single-precision FFT implementation be used
	by default if input data is single precision? 

I'm not completely convinced it should be: if (2) is true, this may 
surprise people who expect more accuracy.

Perhaps it should instead be exposed as a keyword option, or the 
functionality split into separate functions (sketched below). I think we 
should maybe delay the release until we are sure which way is best here.
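
Just to illustrate what I mean by the two options (the keyword and the 
function name below are made up, neither exists in fftpack):

    import numpy as np
    from scipy import fftpack

    x = np.random.rand(1024).astype(np.float32)

    # (a) hypothetical keyword on the existing function, defaulting to
    #     double precision:
    #         y = fftpack.fft(x, single_precision=True)
    # (b) hypothetical separate function with an explicit name:
    #         y = fftpack.fft_single(x)
    #
    # Either way, users who want full accuracy for float32 input can
    # already cast explicitly:
    y = fftpack.fft(x.astype(np.float64))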

    ***

David, any comments?

-- 
Pauli Virtanen



