[Numpy-discussion] optimizing ndarray.__setitem__

Mark Wiebe mwwiebe at gmail.com
Wed May 4 11:57:42 EDT 2011


On Wed, May 4, 2011 at 6:19 AM, Christoph Groth <cwg at falma.de> wrote:

> Dear numpy experts,
>
> I have noticed that with Numpy 1.5.1 the operation
>
> m[::2] += 1.0
>
> takes twice as long as
>
> t = m[::2]
> t += 1.0
>
> where "m" is some large matrix.  This is of course because the first
> snippet is equivalent to
>
> t = m[::2]
> t += 1.0
> m[::2] = t
>
> I wonder whether it would not be a good idea to optimize
> ndarray.__setitem__ to not execute an assignment of a slice onto itself.
> Is there any good reason why this is not being done already?
>
> best,
> Christoph
>

You'd better time this in 1.6 too. ;)
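For anyone who wants to reproduce the measurement, a minimal timing sketch along these lines should show the gap on 1.5.1 and whether 1.6 closes it (the array size and repeat count are arbitrary choices, not taken from the thread):

import timeit
import numpy as np

m = np.zeros((4000, 4000))

def inplace_slice():
    # One-liner: __getitem__ produces a view, += modifies it in place,
    # then __setitem__ assigns the view back onto the same slice.
    m[::2] += 1.0

def explicit_temp():
    # Two-step version: the redundant write-back via __setitem__ is skipped.
    t = m[::2]
    t += 1.0

print("m[::2] += 1.0      :", timeit.timeit(inplace_slice, number=100))
print("t = m[::2]; t += 1 :", timeit.timeit(explicit_temp, number=100))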


https://github.com/numpy/numpy/commit/f60797ba64ccf33597225d23b893b6eb11149860

The case of boolean mask indexing can't benefit so easily from this
optimization, but I think it could see a big performance benefit if combined
__index__ + __i<op>__ operators were added to Python. Something to consider,
anyway.
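To illustrate why the boolean-mask case is harder: boolean (fancy) indexing
in __getitem__ returns a copy rather than a view, so the write-back in
__setitem__ is what actually updates the array and cannot simply be skipped.
A small sketch (variable names are made up for illustration):

import numpy as np

m = np.arange(10.0)
mask = m > 4

# m[mask] += 1.0 expands to the same three steps as the slice case,
# but here the middle step operates on a copy, so the final
# __setitem__ is required for m to change at all.
t = m[mask]      # boolean indexing returns a copy, not a view
t += 1.0         # modifies only the copy
m[mask] = t      # write-back is what updates m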

-Mark