[Numpy-discussion] Changing FFT cache to a bounded LRU cache

Marten van Kerkwijk m.h.vankerkwijk at gmail.com
Sun May 29 13:12:55 EDT 2016


Hi,

I did a few simple timing tests (see comment in the PR), which suggest it is
hardly worth having the cache at all. Indeed, if one really worries about
speed, one should probably use pyFFTW (scipy's FFT is a bit faster too, but at
least for me the way real FFT values are stored is just too inconvenient). So,
my suggestion would be to do away with the cache altogether.

If we do keep it, I think the approach in the PR is nice, but I would
advocate setting both a size and number limit (e.g., by default no more
than 8 entries or so, which should cover most repetitive use cases).
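The dual-limit idea (cap both the number of entries and the total bytes held) could be sketched roughly as below. This is a hypothetical illustration of the concept, not the code in the PR; the class name, limits, and explicit `size` argument are my own invention.

```python
from collections import OrderedDict


class BoundedLRUCache:
    """LRU cache bounded by both entry count and total stored size.

    Hypothetical sketch of the dual-limit suggestion; not the actual
    implementation proposed in the PR.
    """

    def __init__(self, max_entries=8, max_size_bytes=1 << 20):
        self.max_entries = max_entries
        self.max_size_bytes = max_size_bytes
        self._data = OrderedDict()  # key -> (value, size); oldest first
        self._total = 0

    def put(self, key, value, size):
        if key in self._data:
            _, old_size = self._data.pop(key)
            self._total -= old_size
        self._data[key] = (value, size)
        self._total += size
        # Evict least-recently-used entries until both limits hold.
        while (len(self._data) > self.max_entries
               or self._total > self.max_size_bytes):
            _, (_, evicted_size) = self._data.popitem(last=False)
            self._total -= evicted_size

    def get(self, key):
        value, size = self._data.pop(key)  # raises KeyError if absent
        self._data[key] = (value, size)    # re-insert as most recently used
        return value
```

With an 8-entry default, repetitive use cases (the same transform length called in a loop) stay cached, while a stream of many distinct lengths cannot grow the cache without bound.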

All the best,

Marten

p.s. I do like having a quick fft routine in numpy. My main gripe is that
it always casts to float64/complex128 rather than sticking with the input
dtype. Hope to get around to making a PR for that...
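The casting behaviour complained about in the p.s. is easy to observe: `numpy.fft` upcasts lower-precision input rather than computing in the input precision.

```python
import numpy as np

# A single-precision input...
x = np.ones(8, dtype=np.float32)

# ...is upcast: the result comes back as complex128, not complex64.
y = np.fft.fft(x)
print(y.dtype)  # complex128
```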
