[SciPy-User] Rotated, Anisotropic Gaussian Filtering (Kernel Density Estimation)

David Baddeley david_baddeley at yahoo.com.au
Thu Oct 18 16:38:34 EDT 2012


github seems to be down & I couldn't look at your code. Following on from what Aronne said, however: if you can get the desired result by rotating your data and applying ndimage.gaussian_filter, is there anything stopping you from just generating an anisotropic, rotated kernel and then using ndimage.convolve (or signal.convolve2d)?
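Something along these lines, perhaps (a rough, untested sketch; the sigma/theta/size values and the random stand-in data are just placeholders):

import numpy as np
from scipy import ndimage

def rotated_gaussian_kernel(sigma_x, sigma_y, theta, half_size):
    """Gaussian with widths sigma_x, sigma_y along axes rotated by theta (radians)."""
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    # rotate the grid coordinates into the kernel's principal-axis frame
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    kernel = np.exp(-0.5 * ((xr / sigma_x)**2 + (yr / sigma_y)**2))
    return kernel / kernel.sum()   # normalise so the output is a density

data = (np.random.rand(200, 200) < 0.01).astype(float)   # stand-in for your 0/1 grid
kernel = rotated_gaussian_kernel(3.0, 1.0, np.pi / 6, half_size=10)
density = ndimage.convolve(data, kernel, mode='constant')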

cheers,
David


________________________________
 From: Aronne Merrelli <aronne.merrelli at gmail.com>
To: SciPy Users List <scipy-user at scipy.org> 
Sent: Friday, 19 October 2012 1:38 AM
Subject: Re: [SciPy-User] Rotated, Anisotropic Gaussian Filtering (Kernel Density Estimation)
 
On Tue, Oct 16, 2012 at 1:02 PM, Patrick Marsh <patrickmarshwx at gmail.com> wrote:
> Greetings,
>
>
> I know that people on this list are way smarter than I, so hopefully someone
> can help me out here.
>
> I have a gridded dataset of 1s and 0s to which I need to apply a
> rotated, anisotropic Gaussian filter to obtain a kernel density estimate.
> Currently I have some Cython code that I wrote to do this. The code for this
> can be found here: https://gist.github.com/3900591. In essence, I search
> through the grid for a value of 1. When a 1 is found, a weighting function
> is applied to all the surrounding grid points. There are rotation matrices
> at work throughout the code to handle the fact that the axes of the anisotropic
> Gaussian kernel can be off the Cartesian axes. The code works, and is reasonably
> efficient for small domains or large grid spacing. However, as the grid spacing
> decreases, the performance takes a substantial hit.
>

Patrick,

I pulled down the gist code to run Cython's annotation (cython -a) on it,
and found that some declarations were missing. I added these at the
top of the file:

import numpy as np                      # Python-level NumPy, for DTYPE64 below
cimport numpy as np                     # C-level NumPy, needed for the ctypedef
ctypedef np.float64_t DTYPE64_t
DTYPE64 = np.float64                    # np.float is just Python's float; be explicit
from libc.math cimport exp, sin, cos    # the C math functions, not NumPy's ufuncs

With that it compiles and looks OK to me; there isn't anything obvious
that would make it slow. However, it depends on how you defined exp,
sin, and cos in the file you are actually running: if those resolve to
the NumPy versions instead of the C versions, this code would be
pretty slow, because every call then goes through the Python layer.
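For reference, the difference is just which functions get pulled in at the top of the .pyx file (the exact names in your file may differ):

from libc.math cimport exp, sin, cos    # fast: direct C library calls
# versus
# from numpy import exp, sin, cos       # slow: a Python-level ufunc call per element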

Otherwise, just from skimming the Cython code, it looks like the
Gaussian kernel (partweight in the code) is fixed, so this is really
just a convolution. If you compute partweight once, in Python, you can
then use a convolution function from SciPy. That should be as fast as
any simple Cython loop, I'd think, and it is a lot simpler; something
like the sketch below.
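A rough, untested sketch (the array shapes and values here are stand-ins; in your case data would be the 0/1 grid and partweight the precomputed kernel):

import numpy as np
from scipy import ndimage, signal

data = (np.random.rand(400, 400) < 0.01).astype(float)   # stand-in 0/1 grid
partweight = np.ones((25, 25)) / 625.0                   # stand-in for the real kernel

density = ndimage.convolve(data, partweight, mode='constant')   # direct convolution
density = signal.fftconvolve(data, partweight, mode='same')     # FFT-based; usually much faster for large kernels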

If you try that, is it fast enough? If not, can you be more specific
about the cases where the performance is bad? Specifically: what size
is the data array, what size is the kernel, and how many points in the
data array are nonzero?

HTH,
Aronne