[Matrix-SIG] matrix transformations on vector graphics

David Ascher da@skivs.ski.org
Wed, 15 Apr 1998 13:07:26 -0700 (PDT)


On Wed, 15 Apr 1998, Warren Focke wrote:

> On Wed, 15 Apr 1998, David Ascher wrote:
> 
> > On Wed, 15 Apr 1998, Just van Rossum wrote:
> > > 
> > > Is there a convenient way to apply the matrix to the vector array?
> > 
> ...
> > And then, you can just use the 'dot()' function:
> > 
> > 	>>> Numeric.dot(v, tm)
> > 	array([[   0.,    0.],
> > 	       [  10.,    0.],
> > 	       [ 100.,  -75.]])
> > 
> ...
> > > If I turn the x and y values into separate arrays, I could probably do
> > > something like this:
> > >
> > > [x, y] = Numeric.transpose(v)
> > > xnew = tm[0][0] * x + tm[0][1] * y + tm[2][0]
> > > ynew = tm[1][0] * x + tm[1][1] * y + tm[2][1]
> > >
> > > But if I in general would prefer xy pairs, I would have to do
> > > Numeric.transpose() before and after I do this. Or is transpose()
> > > relatively cheap? Did I just answer my own question?
> > 
> > transpose is very cheap, since it doesn't move any of the data, just the
> > description of the data.
> 
> But PyArray_ContiguousFromObject eventually gets called on both arguments
> to Numeric.dot.  Does this not copy the data of transposed arrays,
> negating the ``cheapness'' in this case (on the input side, at least)?

Good point, but note that I didn't say 'dot()' was cheap, I said
'transpose()' was cheap.  Whenever you're doing an 'en masse' computation,
I believe that before the operation (be it 'innerproduct()', elementwise
multiplication, addition, etc.), a contiguous version of the array is
obtained, copying only when necessary -- the argument being that the cost
of the occasional copy is offset by the speedup of an inner loop that
accesses sequential memory locations.  A quick glance at the code seems to
confirm that this happens.  I'd guess that Jim did some testing to verify
that it's a net win.
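For what it's worth, here's a quick, untested sketch of both points in one
place.  The v and tm below are made up (the original arrays aren't quoted
here), picked so that dot(v, tm) matches the output above, and I'm going
from memory on the iscontiguous() method:

    import Numeric

    # made-up inputs, chosen so dot(v, tm) reproduces the quoted output
    v  = Numeric.array([[ 0.,   0.],
                        [10.,   0.],
                        [50., 100.]])
    tm = Numeric.array([[1.0,  0.  ],
                        [0.5, -0.75]])

    print Numeric.dot(v, tm)    # transforms all the points at once

    # transpose() just swaps the shape/strides description; the data
    # buffer is shared, which is why it's so cheap:
    t = Numeric.transpose(v)
    v[0, 0] = 99.
    print t[0, 0]               # 99.0 -- same underlying data

    # ...but the transposed view is no longer contiguous, so dot() has
    # to make a contiguous copy of it internally (that's the
    # PyArray_ContiguousFromObject call Warren mentioned):
    print v.iscontiguous()      # 1
    print t.iscontiguous()      # 0

So transpose() itself costs essentially nothing; you only pay for a copy
when a non-contiguous array is fed to something like dot().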

--david 'the group is alive! it's alive!' ascher