[SciPy-Dev] Accuracy of single-precision FFT

Anne Archibald aarchiba at physics.mcgill.ca
Fri Jun 25 12:38:44 EDT 2010


Let's drop the MKL discussion completely. It is not a solution to the
current problem, since some versions of numpy must be compiled without
it. If you want to discuss the implications of the GPL, please start
another thread so I can killfile it.

On 25 June 2010 11:35, Sturla Molden <sturla at molden.no> wrote:

> I also wonder if this inaccuracy is due to FFTPACK's algorithms or the
> Fortran compiler? NumPy uses fftpack_lite, which is a C library. It is
> easy to compile fftpack_lite for single precision as well (basically
> just typedef'ing Treal to float). Is that inaccurate too? If not, we
> have a Fortran compiler problem.

It seems clear from the performance that FFTPACK is falling back to a
direct O(n**2) algorithm for large prime sizes. That also explains the
much worse numerical behaviour we see: direct summation accumulates
roundoff roughly linearly in n, while a true FFT's error grows only
logarithmically. A quick check is to time a large prime size against a
neighbouring power of two, as in the sketch below.
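(A rough check, assuming scipy.fftpack keeps complex64 input in single
precision, which is the behaviour under discussion; the sizes 4096 and
4099 are just illustrative:)

import timeit
import numpy as np
from scipy import fftpack

# Compare a large prime size (4099) with a neighbouring power of two
# (4096).  A code that falls back to the direct O(n**2) DFT for prime
# n will be drastically slower there, and noticeably less accurate.
for n in (4096, 4099):
    x = (np.random.randn(n) + 1j * np.random.randn(n)).astype(np.complex64)
    t = timeit.timeit(lambda: fftpack.fft(x), number=10) / 10
    err = abs(fftpack.ifft(fftpack.fft(x)) - x).max()
    print("n = %d: %.2e s per transform, round-trip error %.2e"
          % (n, t, err))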

It seems to me that the right short-term solution is to switch to
double precision for difficult sizes (treating any non-power-of-two
size as difficult is probably fine) and add a warning to the docstring
(mentioning in particular the horrible performance as well as the
roundoff error, which can be an issue for doubles too). This we can do
for 0.8; a rough sketch follows.
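A minimal sketch of that stopgap (the function names here are made up,
not actual scipy API):

import numpy as np
from scipy import fftpack

def _is_power_of_two(n):
    return n > 0 and n & (n - 1) == 0

def fft_stopgap(x):
    # Hypothetical wrapper: for single-precision input of a difficult
    # (non-power-of-two) length, do the transform in double precision
    # and cast back, sidestepping the inaccurate single-precision
    # fallback at the cost of a copy.
    x = np.asarray(x)
    if (x.dtype in (np.float32, np.complex64)
            and not _is_power_of_two(x.shape[-1])):
        return fftpack.fft(x.astype(np.complex128)).astype(np.complex64)
    return fftpack.fft(x)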

In the medium term, I think a Python implementation of Bluestein's
algorithm is the way to go; it's not so difficult, given that we
already have efficient FFTs, but it's a bit too bug-prone for 0.8.
(And there's some tuning to be done - for example, do we pad up to the
next power of two, or to the next number whose prime factors are all
in {2, 3, 5, 7}? The latter might actually be faster.) A sketch of the
algorithm is below.
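For the curious, here is roughly what Bluestein looks like on top of
the FFTs we already have (a sketch only - untuned, always pads to a
power of two, and the function name is made up):

import numpy as np

def bluestein_fft(x):
    # Bluestein's algorithm: the length-n DFT becomes a linear
    # convolution with a chirp, evaluated via zero-padded FFTs, so an
    # arbitrary n costs O(n log n) instead of O(n**2).
    x = np.asarray(x, dtype=complex)
    n = len(x)
    k = np.arange(n)
    # Chirp c[k] = exp(-pi*i*k**2/n); reducing k**2 mod 2n keeps the
    # argument small, which matters for accuracy at large n.
    c = np.exp(-1j * np.pi * (k * k % (2 * n)) / n)
    a = x * c
    # Pad to a power of two >= 2n - 1 so the circular convolution
    # below equals the linear one.  (This is the spot where padding
    # to a 2,3,5,7-smooth length could be tried instead.)
    m = 1 << max(2 * n - 2, 1).bit_length()
    b = np.zeros(m, dtype=complex)
    b[:n] = c.conj()
    if n > 1:
        b[-(n - 1):] = c[1:].conj()[::-1]  # wrap negative indices
    conv = np.fft.ifft(np.fft.fft(a, m) * np.fft.fft(b))
    return c * conv[:n]

Checking it against np.fft.fft on a few awkward lengths is the obvious
sanity test.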

In the long term, if numpy is to become an array library plus a Python
wrapper, the FFT code must migrate to C. I don't anticipate this being
too bad either, but it amounts to extending FFTPACK.


Anne


