[Numpy-discussion] efficient usage of tensordot

Nadav Horesh nadavh at visionsense.com
Sun Jan 18 03:38:31 EST 2009


This is not the first time this issue has been raised here.
You may try this piece of code, which may take less memory:

(A*x).sum(axis=1).T
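
(As a quick sketch of what that one-liner does, with made-up sizes purely
for illustration: x broadcasts against A and the sum over axis 1 performs
the contraction.  Note that the trailing .T only transposes the (n, l)
result to (l, n).)

import numpy as np

n, m, l = 4, 5, 6                    # made-up sizes, for illustration only
A = np.random.rand(n, m, l)
x = np.random.rand(m, l)

# x (shape (m, l)) broadcasts against A (shape (n, m, l)); summing over
# axis 1 contracts the m index and leaves shape (n, l).
result = (A * x).sum(axis=1)

# Reference: explicit per-k matrix-vector products.
expected = np.empty((n, l))
for k in range(l):
    expected[:, k] = np.dot(A[:, :, k], x[:, k])

print(np.allclose(result, expected))   # True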

  Nadav



-----Original Message-----
From: numpy-discussion-bounces at scipy.org on behalf of Gideon Simpson
Sent: Sun 18-January-09 07:30
To: Discussion of Numerical Python
Subject: [Numpy-discussion] efficient usage of tensordot
 
This is related to a question I posted earlier.

Suppose I have an array A with dimensions n x m x l and an array x with
dimensions m x l.  Interpret this as an array of l n x m matrices and
an array of l m-dimensional vectors.  I wish to compute the matrix-
vector product A[:,:,k] x[:,k] for each k = 0, ..., l-1.  I discovered
that I could accomplish this with the command

np.diagonal(np.tensordot(A, x, axes=(1, 0)), axis1=1, axis2=2)

The tensordot command gives me

A_{ijk} x_{jl} = C_{ikl}

And the diagonal command grabs the entries in array C where k=l.
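
(For reference, a sketch of that recipe checked against an explicit loop,
again with made-up sizes; the comments spell out the shapes involved.)

import numpy as np

n, m, l = 4, 5, 6                    # made-up sizes, for illustration only
A = np.random.rand(n, m, l)
x = np.random.rand(m, l)

# Contract A's axis 1 with x's axis 0: C has shape (n, l, l), with
# C[i, k, l'] = sum_j A[i, j, k] * x[j, l'].
C = np.tensordot(A, x, axes=(1, 0))

# Keep only the entries where the two trailing indices agree: shape (n, l).
D = np.diagonal(C, axis1=1, axis2=2)

# Reference: loop over k.
ref = np.array([np.dot(A[:, :, k], x[:, k]) for k in range(l)]).T
print(np.allclose(D, ref))   # True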

Is this the "optimal" way to make this calculation in numpy?  It  
certainly makes for nice, clean code, but is it the fastest I can get?


-gideon

