[SciPy-User] Single precision FFT insufficiently accurate.

David Cournapeau cournape at gmail.com
Mon Jun 28 00:40:44 EDT 2010


On Mon, Jun 28, 2010 at 12:16 PM, Anne Archibald
<aarchiba at physics.mcgill.ca> wrote:

>
> I think falling back to double in this case is perfectly acceptable -
> after all, any user of the FFT in general has to know that the
> behaviour is severely data-dependent. In fact, since our FFT for those
> sizes seems to be O(n**2), they will almost certainly find that speed
> impels them to switch long before memory becomes an issue: the
> smallest array where I can imagine a user caring about the usage of a
> temporary double array is in the tens of millions of elements.

Or if you run many 1-d FFTs on a 2-d array - a typical example is audio
processing, where each row is a window of a few hundred samples and you
have as many rows as memory allows.
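To make the memory pressure concrete, here is a minimal sketch of that
kind of workload (the shapes and the scipy.fftpack call are illustrative
assumptions, not taken from this thread):

import numpy as np
from scipy import fftpack

# Hypothetical sizes: many short analysis windows stacked as rows.
n_frames, frame_len = 100000, 512            # ~200 MB as float32
frames = np.zeros((n_frames, frame_len), dtype=np.float32)

# One FFT per row; axis=-1 transforms each window independently.
spectra = fftpack.fft(frames, axis=-1)

# If the backend falls back to double precision for this size, the
# result is complex128 and the output/temporary memory doubles.
print(spectra.dtype, spectra.nbytes / 1e6, "MB")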

> That said, even with FFTW3, which is pretty good about using the best
> algorithm for your particular case, it often pays to pad rather than
> use an awkward size (though the best padding is not necessarily
> power-of-two, according to my time trials):
> http://lighthouseinthesky.blogspot.com/2010/03/flops-and-fft.html
> So weird-size FFTs don't completely take the burden of padding off the
> user.

Oh, definitely. Any FFT user should know that power-of-two sizes should
be used whenever possible/feasible. But if you care about memory,
padding has a significant cost in some cases (like the one I mentioned
above).
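A rough sketch of that padding trade-off (the frame length, the row
count and the next_pow2 helper are made-up illustrations):

import numpy as np
from scipy import fftpack

def next_pow2(n):
    # Smallest power of two >= n.
    return 1 << (n - 1).bit_length()

frames = np.zeros((100000, 600), dtype=np.float32)   # awkward length 600
nfft = next_pow2(frames.shape[-1])                    # pads each row to 1024

# n=nfft zero-pads each row before transforming: faster per transform,
# but the output (and any internal copy) grows from 600 to 1024 points.
spectra = fftpack.fft(frames, n=nfft, axis=-1)
print(frames.nbytes / 1e6, "MB in,", spectra.nbytes / 1e6, "MB out")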

David


