[Numpy-discussion] Out-of-RAM FFTs

Greg Novak novak at ucolick.org
Wed Apr 1 11:26:45 EDT 2009


Hello,
I'd like to do an FFT of a moderately large 3D cube, 1024^3.  Judging
from the run times on smaller arrays, compute time is not a problem,
but the array doesn't fit in memory: 1024^3 complex128 values at 16
bytes each come to 16 GB.  So, several questions:

1) Numerical Recipes has an out-of-core FFT algorithm, but looking
through the numpy and scipy docs and modules, I didn't find a function
that does the same thing.  Did I miss it?  Should I get to work typing
it in?  (A rough sketch of the kind of thing I have in mind follows
this list.)
2) I had high hopes for simply memory-mapping the large array and
passing it to the standard fft function.  However, the memory-mapped
region must fit into the process's address space, and I don't seem to
be able to use more than 2 GB at a time, so memory mapping doesn't
seem to help me at all.

This last issue leads to another series of things that puzzle me.  I
have an iMac running OS X 10.5 with an Intel Core 2 Duo processor and
4 GB of memory.  As far as I can tell, the processor is 64-bit and the
operating system is 64-bit, so I should be able to happily memory-map
my entire disk if I want.  However, Python runs out of steam once it
has used about 2 GB.  This is true of both 2.5 and 2.6.  What gives?
Is this a Python issue?
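
As a sanity check (sketch only), something like this should reveal
whether the interpreter itself is a 32-bit build, which would explain
the 2 GB ceiling no matter how wide the CPU and kernel are:

    import sys, platform

    # A 32-bit interpreter reports sys.maxint == 2**31 - 1 regardless
    # of the hardware; a 64-bit build reports 2**63 - 1.  (sys.maxint
    # is Python 2; 2.6 also offers sys.maxsize.)
    print sys.maxint == 2**63 - 1
    print platform.architecture()[0]   # '32bit' or '64bit'

If both 2.5 and 2.6 turn out to be 32-bit builds, that would explain
everything: a 32-bit process has at most 4 GB of address space, and in
practice quite a bit less is usable for one contiguous mapping.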

Thanks,
Greg


