[Numpy-discussion] optimizing ndarray.__setitem__

Robert Kern robert.kern at gmail.com
Wed May 4 11:45:38 EDT 2011


On Wed, May 4, 2011 at 08:19, Christoph Groth <cwg at falma.de> wrote:
> Dear numpy experts,
>
> I have noticed that with Numpy 1.5.1 the operation
>
> m[::2] += 1.0
>
> takes twice as long as
>
> t = m[::2]
> t += 1.0
>
> where "m" is some large matrix.  This is of course because the first
> snippet is equivalent to
>
> t = m[::2]
> t += 1.0
> m[::2] = t
>
> I wonder whether it would not be a good idea to optimize
> ndarray.__setitem__ to not execute an assignment of a slice onto itself.
> Is there any good reason why this is not being done already?

We didn't think of it. If you can write up a patch that works safely
and shows a performance improvement, it's probably worth putting in.
It's probably not *that* common of a bottleneck, though.
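
The equivalence described above can be checked directly: the augmented slice assignment triggers a `__getitem__`, an in-place `__iadd__`, and then a redundant `__setitem__` writing the view back onto the same memory, while the two-step form skips the final write-back. A minimal sketch (the array name `m` and its size are placeholders, and timings will vary by machine):

```python
import timeit

import numpy as np

# Hypothetical large matrix standing in for the poster's "m".
m = np.zeros((2000, 2000))

# One-liner: expands to __getitem__ + __iadd__ + __setitem__,
# so the strided slice is read, incremented, and copied back.
t1 = timeit.timeit("m[::2] += 1.0", globals={"m": m.copy()}, number=50)

# Two-step form: the view is incremented in place; since the view
# shares memory with m, no write-back is needed.
def two_step(a):
    t = a[::2]
    t += 1.0

t2 = timeit.timeit("two_step(m)",
                   globals={"two_step": two_step, "m": m.copy()},
                   number=50)

print(f"augmented slice: {t1:.3f}s, two-step view: {t2:.3f}s")

# Both forms produce identical results; only the final copy differs.
a, b = m.copy(), m.copy()
a[::2] += 1.0
v = b[::2]
v += 1.0
assert np.array_equal(a, b)
```

On a typical build the one-liner shows the roughly 2x slowdown reported, because the final `m[::2] = t` copies every element of the strided view a second time.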

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
  -- Umberto Eco
