[SciPy-dev] Old numarray convolve2d fft=1 option

Anne Archibald peridot.faceted at gmail.com
Wed May 2 13:32:09 EDT 2007


On 02/05/07, David O'Sullivan <dosu004 at sges.auckland.ac.nz> wrote:

> I'm relatively new to this, and certainly no expert on the underlying
> math of all this stuff, but I'm trying to do relatively large kernel
> convolutions on large 2d matrices.  The kernels might be 160x160 while
> the matrices themselves might be 2500x2500.  They're also likely to be
> sparse matrices, but I'll worry about that later - for now I'm just
> scoping things out.

Sparse matrices are going to want a different algorithm - if they're
sparse enough, the direct approach to convolution (implemented so that
it only touches the nonzero entries) will almost certainly be more
efficient than an FFT, which has to treat the arrays as dense.
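To make that concrete, here's a minimal sketch of such a direct sparse convolution (the function name `sparse_convolve2d` is mine, not anything in scipy): for each nonzero entry of the sparse matrix, add a shifted, scaled copy of the kernel into the output. The cost is O(nnz * kernel size) rather than O(matrix size * kernel size).

```python
import numpy as np
from scipy import sparse, signal

def sparse_convolve2d(a_coo, kernel):
    """Direct 'full' 2-D convolution that exploits sparsity of `a_coo`.

    `a_coo` is a scipy.sparse COO matrix; `kernel` is a dense 2-D array.
    Loops only over the nonzero entries of `a_coo`, adding a shifted,
    scaled copy of the kernel for each one.
    """
    kh, kw = kernel.shape
    ah, aw = a_coo.shape
    out = np.zeros((ah + kh - 1, aw + kw - 1))
    for i, j, v in zip(a_coo.row, a_coo.col, a_coo.data):
        out[i:i + kh, j:j + kw] += v * kernel
    return out

# Check against the dense reference implementation.
a = sparse.random(100, 100, density=0.01, format='coo')
k = np.ones((5, 5))
ref = signal.convolve2d(a.toarray(), k, mode='full')
assert np.allclose(sparse_convolve2d(a, k), ref)
```

At 1% density this does about 1/100th of the arithmetic of the dense direct method; whether it beats the FFT approach depends on how sparse the input really is.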

> This option doesn't seem to be in the current scipy.signal.convolve2d
> function.  Presumably it would speed 2d convolutions up a lot?

Basically, yes. FFTs cost only O(n log n), and by the convolution
theorem they turn convolution into elementwise multiplication in the
frequency domain, so for kernels this large the FFT route is much
faster than the direct method.

It would not be terribly difficult to implement from scratch: you take
a two-dimensional real FFT of each array, padding appropriately,
multiply the FFTs, and take an inverse two-dimensional real FFT. You
may have to fiddle with normalizations a bit.
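The recipe above can be sketched in a few lines of numpy (a minimal version, not the numarray code being discussed; numpy's real FFTs handle the normalization, so no extra scaling is needed here):

```python
import numpy as np
from scipy import signal

def fft_convolve2d(a, b):
    """2-D 'full' convolution via real FFTs.

    Both arrays are zero-padded (by the FFT routines) to the full
    output shape, sa + sb - 1 in each dimension, so that the circular
    convolution computed by the FFT equals the linear convolution.
    """
    shape = tuple(np.array(a.shape) + np.array(b.shape) - 1)
    fa = np.fft.rfft2(a, shape)   # real FFT of each array, padded
    fb = np.fft.rfft2(b, shape)
    return np.fft.irfft2(fa * fb, shape)  # inverse real FFT

# Check against the direct implementation.
a = np.random.rand(50, 50)
k = np.random.rand(7, 7)
ref = signal.convolve2d(a, k, mode='full')
assert np.allclose(fft_convolve2d(a, k), ref)
```

For a 2500x2500 matrix and a 160x160 kernel this replaces roughly 160*160 multiply-adds per output element with a handful of FFTs of the padded size.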

But, as always, I would not spend much effort optimizing this step
until you're sure it will be slow enough to matter. Though as you're
convolving a matrix with over six million elements (2500x2500), it
probably will...

If you're *really* in a hurry, and the input matrices are sparse
enough but the output matrix is dense enough (quite possible with a
convolution, which smears nonzeros out over the kernel's footprint),
it might be reasonable to use a sparse DFT to construct the (dense)
Fourier transforms, taking advantage of the sparseness, and then an
inverse FFT to reconstruct the convolution. I doubt it'll be worth
the trouble. I've done this sort of thing, but only to avoid having
to bin event data.

Anne



More information about the SciPy-Dev mailing list