[Matrix-SIG] Shifted array ops (was: RE: [Matrix-SIG] QUERY: Array indexing)

Frank Horowitz frank@ned.dem.csiro.au
Fri, 12 Mar 1999 18:25:35 +0800


At 3:29 PM +0800 12/3/99, Tim Peters wrote:
>[Frank Horowitz chews & chews, but is still hungry]
>> My applications require local references in the array (somewhat akin to
>> finite difference/convolution templates) but the operations I need to
>> perform are more general than multiply-and-sum, hence I can't use the
>> recent n-d convolve routines from Travis Oliphant (or was it someone
>> else; senility is obviously setting in...)
>>
>> My approach to date has been to build shifted/rolled versions of the
>> array, and operate with them (code available upon request, if anyone is
>> silly enough to want it :-).  Obviously, this comes at a severe cost in
>> memory ...
>
>I don't know the best way to approach this in NumPy, perhaps because I
>haven't yet installed it <wink>.  In straight Python I do stuff like this by
>writing thin wrapper classes that leave the original matrices alone, merely
>permuting the *indices* in __setitem__/__getitem__  (instead of physically
>permuting the *data*).  For example, if you need to transpose a 2D array,
>just swap the indices in the wrappers before passing them on to the actual
>(still untransposed) array.
>
>Here's an overly simplistic but complete example:

<some stuff deleted>

>class VectorRotate:
>    def __init__(self, data, shift):
>        """Make data act as if it had been rotated left by shift slots"""
>        self.x = data
>        self.n = len(data)
>        self.shift = shift
>
>    def __getitem__(self, i):
>        return self.x[(i + self.shift) % self.n]
>
>    def __setitem__(self, i, val):
>        self.x[(i + self.shift) % self.n] = val
>
>    def __len__(self):
>        return self.n
>
>    def getdata(self):
>    def getdata(self):
>        return self.x
>
>If the above was suggestive enough, I expect you'll find this kind of thing
>easier than gather/scatter.  Yes?

Err, I'm still not sure I quite understand (more chewing sounds evident in
the background; do I feel like a cow, or what? :-).

If I understand your suggestion correctly, you're doing some trickery with
indexing *for an individual element* (i.e. your __getitem__/__setitem__
routines remap the index of one specific element at a time).  Does the fact
that they're overriding the operator class methods imply that indexing the
array returned by getdata runs at C speed (courtesy of the NumPy __getitem__
support code)?  Or does each element access still go through the array at
interpreter speed?
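
To be sure I'm reading the wrapper right, here's your class restated so it
runs standalone, exercised on a toy list (plain Python, no NumPy needed; the
element values are just made up for illustration):

```python
class VectorRotate:
    """Tim's wrapper: remap indices on access; the data itself never moves."""
    def __init__(self, data, shift):
        self.x = data
        self.n = len(data)
        self.shift = shift

    def __getitem__(self, i):
        return self.x[(i + self.shift) % self.n]

    def __setitem__(self, i, val):
        self.x[(i + self.shift) % self.n] = val

    def __len__(self):
        return self.n

    def getdata(self):
        return self.x


data = [10, 20, 30, 40, 50]
v = VectorRotate(data, 2)
print([v[i] for i in range(len(v))])  # acts rotated left by 2: [30, 40, 50, 10, 20]
v[0] = 99                             # writes through to data[2]
print(data)                           # [10, 20, 99, 40, 50]
```

But note that every v[i] there is a full Python-level method call, which is
exactly what worries me.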

Remember, I'm contemplating applying this stuff (over and over :-) to
arrays with ~12 million elements.  If each access runs at interpreter
speed, I'll have to look for a different answer.
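
For what it's worth, the whole-array version I keep coming back to is to
precompute the shifted index vector once and gather with take(), so the
remapping itself runs at C speed.  A quick sketch (written against modern
numpy for concreteness -- Numeric's take() is analogous -- with the array
size and values made up for illustration):

```python
import numpy as np

n = 12
a = np.arange(n, dtype=float)
shift = 3

# Precompute the gather indices once; reuse them for every shifted operation.
idx = (np.arange(n) + shift) % n

shifted = a.take(idx)      # whole-array gather, done in C, one temporary copy

# Any elementwise op between the array and its shifted self is then vectorized,
# e.g. something more general than multiply-and-sum:
result = np.maximum(a, shifted)
```

The memory cost of the shifted temporary is still there, of course, but at
least the index arithmetic never touches the interpreter loop.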

>
>> ...
>> Wisdom anyone? (Tim? :-)
>
>Sorry, you didn't suck up enough to get a wise answer!
>
OK, then thanks for the half-wise answer! (And I grovel in your general
direction, which I believe must more or less be straight down from here in
Perth :-)

	Cheers,
		Frank Horowitz


--
Frank Horowitz                                      frank@ned.dem.csiro.au
Australian Geodynamics Cooperative Research Centre, and
CSIRO-Exploration & Mining, PO Box 437, Nedlands, WA 6009,       AUSTRALIA
Direct: +61 8 9284 8431;  FAX: +61 8 9389 1906; Reception: +61 8 9389 8421