[Numpy-discussion] Optimized half-sizing of images?

Sebastian Haase seb.haase at gmail.com
Fri Aug 7 03:23:56 EDT 2009


On Fri, Aug 7, 2009 at 3:46 AM, Zachary Pincus <zachary.pincus at yale.edu> wrote:
>> We have a need to generate half-size versions of RGB images as
>> quickly as possible.
>
> How good do these need to look? You could just throw away every other
> pixel... image[::2, ::2].
>
> Failing that, you could also try using ndimage's convolve routines to
> run a 2x2 box filter over the image, and then throw away half of the
> pixels. But this would be slower than optimal, because the kernel
> would be convolved over every pixel, not just the ones you intend to
> keep.
>
> Really though, I'd just bite the bullet and write a C extension (or
> Cython, whatever; an extension that works on a defined-dimensionality,
> defined-dtype array is pretty simple), or as suggested before, do it
> on the GPU. (Though I find that readback from the GPU can be slow
> enough that C code can beat it in some cases.)
>
> Zach

Chris,
regarding your concern that fancier interpolation costs speed: I would
guess the overall bottleneck is the memory access rather than the extra
CPU cycles needed for the interpolation.
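
For reference, a minimal (untested) sketch of the box-filter-then-subsample
idea Zach describes, assuming an (H, W, 3) image array:

import numpy as np
from scipy import ndimage

def half_size_boxfilter(image):
    # 2x2 box kernel on the two spatial axes; the length-1 color axis
    # keeps the channels independent
    kernel = np.ones((2, 2, 1)) / 4.0
    smoothed = ndimage.convolve(image, kernel)
    # then keep every other pixel in each spatial direction
    return smoothed[::2, ::2]

As Zach notes, this does redundant work: the kernel is applied to every
pixel, including the ones that are discarded right afterwards.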
Regarding ndimage.zoom: it should be able to zoom the two spatial axes
while leaving the color axis untouched, all in one call.
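Something like this (untested); the per-axis zoom factors leave the
color axis as-is, and order=1 requests linear interpolation:

from scipy import ndimage

# halve the two spatial axes, keep the color axis unchanged
half = ndimage.zoom(image, zoom=(0.5, 0.5, 1.0), order=1)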

Cheers,

--
Sebastian Haase


