[SciPy-User] Single precision FFT insufficiently accurate.

Anne Archibald aarchiba at physics.mcgill.ca
Sun Jun 27 23:16:31 EDT 2010


On 27 June 2010 22:30, David Cournapeau <cournape at gmail.com> wrote:
> On Mon, Jun 28, 2010 at 8:29 AM, Pauli Virtanen <pav at iki.fi> wrote:
>> Sun, 27 Jun 2010 23:13:44 +0200, Sebastian Haase wrote:
>>> this workaround seems unacceptable if single precision was used (by the
>>> user of SciPy) because of memory constraints!
>>
>> Suggest a better one, then.
>>
>> We cannot just return incorrect results, which FFTPACK seems to produce,
>> especially as this easily occurs in the size range where memory
>> constraints would be important.
>>
>> Moreover, single-precision FFT is a new feature in 0.8.x, so it probably
>> does not have an exceedingly large number of users yet who rely on it.
>
> Nevertheless, Sebastian's remark just made me realize that falling back
> to double instead of single in input-dependent cases is not that
> great. It means that a program may work for some sizes but fail for
> others (because of memory issues).

I think falling back to double in this case is perfectly acceptable -
after all, any user of the FFT has to know that its behaviour is
severely data-dependent. In fact, since our FFT for those sizes seems
to be O(n**2), speed will almost certainly force them to switch long
before memory becomes an issue: the smallest array where I can imagine
a user caring about a temporary double array has tens of millions of
elements, and with our current FFTPACK implementation a transform that
size will be so slow as to be unusable - unless the length is a
product of powers of 2, 3, and 5, in which case they get an
O(n log n) algorithm and direct single-precision computation.
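For concreteness, the fallback under discussion amounts to something
like the sketch below - a hypothetical wrapper (the name and the
"which sizes are unsafe" test are my own illustration, not SciPy's
actual implementation), which transforms single-precision input in
double and casts the result back:

```python
import numpy as np

def fft_single_safe(x):
    # Sketch of the discussed workaround: when the input is single
    # precision, do the transform in double and cast the result back
    # to complex64. This preserves accuracy at the cost of a temporary
    # double-precision array.
    x = np.asarray(x)
    if x.dtype in (np.float32, np.complex64):
        return np.fft.fft(x.astype(np.complex128)).astype(np.complex64)
    return np.fft.fft(x)
```

The output dtype still matches what the caller asked for; only the
intermediate work array is double.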

> Maybe it is better to remove it altogether in 0.8.0 - I will try to
> implement Bluestein algo during euroscipy,

This would definitely be a big improvement - I care less about the
precision issue, important as it is, than about having a decently
fast algorithm for arbitrary sizes.
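For reference, Bluestein's (chirp-z) idea is to rewrite a length-n DFT
as a convolution, which can then be evaluated with FFTs of a
convenient padded size - giving O(n log n) for any n, primes included.
A minimal NumPy sketch (my own illustration, not David's planned
implementation):

```python
import numpy as np

def bluestein_fft(x):
    # Bluestein's algorithm: using jk = (j^2 + k^2 - (k-j)^2) / 2, the DFT
    # X[k] = sum_j x[j] exp(-2*pi*i*j*k/n) becomes a circular convolution
    # with the chirp w[k] = exp(-i*pi*k^2/n), evaluated via FFTs of a
    # power-of-two size m >= 2n-1.
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    # Reduce k^2 mod 2n before dividing to avoid large-argument phase error.
    w = np.exp(-1j * np.pi * (k * k % (2 * n)) / n)
    m = 1 << (2 * n - 2).bit_length()   # padded size, power of two >= 2n-1
    a = np.zeros(m, dtype=complex)
    a[:n] = x * w
    b = np.zeros(m, dtype=complex)
    b[:n] = np.conj(w)
    b[m - n + 1:] = np.conj(w[1:])[::-1]  # wrap negative indices (b is even)
    # Circular convolution via FFTs of the "nice" size m.
    conv = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
    return conv[:n] * w
```

The three inner transforms all have power-of-two length, so awkward or
prime n never hits the O(n**2) path.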

That said, even with FFTW3, which is pretty good about using the best
algorithm for your particular case, it often pays to pad rather than
use an awkward size (though the best padding is not necessarily
power-of-two, according to my time trials):
http://lighthouseinthesky.blogspot.com/2010/03/flops-and-fft.html
So weird-size FFTs don't completely take the burden of padding off the
user (though I suspect that there are some oddball applications where
the exact FFT size matters).
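When padding is an option, the usual recipe is to round the length up
to the next size whose factors FFTPACK handles in O(n log n). A sketch,
with a hypothetical helper (`next_fast_size` is my own name, not an
existing SciPy function) that finds the next 5-smooth size:

```python
import numpy as np

def next_fast_size(n):
    # Hypothetical helper: smallest m >= n whose prime factors are all
    # 2, 3, or 5 - the radices for which FFTPACK is O(n log n).
    m = n
    while True:
        k = m
        for p in (2, 3, 5):
            while k % p == 0:
                k //= p
        if k == 1:
            return m
        m += 1

x = np.random.rand(10007)            # awkward (prime) length
m = next_fast_size(len(x))           # e.g. 10125 = 3**4 * 5**3
X = np.fft.fft(x, n=m)               # zero-pad up to the fast size
```

Note that the padded transform is not the length-n DFT of the original
data - which is exactly why padding only helps in applications (e.g.
convolution) where the exact FFT size doesn't matter.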

Anne


