[Numpy-discussion] Tensordot and memory consumption

Nicolas Rougier Nicolas.Rougier at loria.fr
Wed Feb 18 02:23:44 EST 2009


Hello,

I'm using tensordot in some computations and, while I've been amazed by
the speed, I'm now trying to reduce memory consumption in some very
particular cases:

Let S be a 2-D array of shape (s1, s2)
Let D be a 2-D array of shape (d1, d2)
Let W be a 4-D array of shape (d1, d2, s1, s2)

Currently I compute D as tensordot(W, S, 2), and it works fine and is
really fast.
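
For reference, tensordot(W, S, 2) contracts the last two axes of W with
the two axes of S, so D[i,j] = sum over (k,l) of W[i,j,k,l] * S[k,l].
A small self-contained check (shapes chosen arbitrarily), which also
shows why W dominates the memory footprint: it holds d1*d2*s1*s2
elements:

import numpy

d1, d2, s1, s2 = 4, 5, 2, 3
W = numpy.random.random((d1, d2, s1, s2))
S = numpy.random.random((s1, s2))

# Contract the last two axes of W against S:
# D[i, j] = sum over (k, l) of W[i, j, k, l] * S[k, l]
D = numpy.tensordot(W, S, 2)

# Equivalent matrix-vector product on flattened axes
D2 = W.reshape(d1 * d2, s1 * s2).dot(S.ravel()).reshape(d1, d2)
assert numpy.allclose(D, D2)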

However, in some cases W is based solely on a single 2-D array K, and
each W[i,j] is a slice of K. I would like to know if there is a way to
build W so that its memory footprint is reduced, or whether there are
other ways to perform the same computation. For example, I know that
when S and D have the same shape I can use a 2-D convolution, but I
need the general case.
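
One idea I have been looking at, sketched below for the regular case
where each W[i,j] is the (s1, s2) window of K whose top-left corner is
exactly (i, j) (my real offsets are decimated, as in the example at the
end): numpy.lib.stride_tricks.as_strided can expose all those windows
as a view of K, so no element of K is duplicated:

import numpy
from numpy.lib.stride_tricks import as_strided

s1, s2 = 3, 3
K = numpy.random.random((10, 10))

# All (s1, s2) windows of K as a 4-D view:
# V[i, j] == K[i:i+s1, j:j+s2], and V shares K's memory.
d1 = K.shape[0] - s1 + 1
d2 = K.shape[1] - s2 + 1
V = as_strided(K, shape=(d1, d2, s1, s2), strides=K.strides * 2)

Note that tensordot reshapes its operands internally, so the
contraction itself may still create a temporary copy of the view; the
trick only avoids storing W as a separate array.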

Since I imagine my explanations are not so clear, I have attached a
small example.


Nicolas

---

import numpy

n = 3

# Source
S = numpy.random.random((n,n))

# Destination (at this stage, only the shape of D matters)
D = numpy.zeros((2*n,2*n))

# Kernel: a single unit impulse at (S.shape[0], S.shape[1])
K = numpy.zeros((S.shape[0]*n+1,S.shape[1]*n+1))
K[S.shape[0],S.shape[1]] = 1

# Kernel decomposition for computing tensordot
W = numpy.zeros(D.shape + S.shape)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        # Source index (x, y) that destination cell (i, j) maps onto
        x = i * S.shape[0] // W.shape[0]
        y = j * S.shape[1] // W.shape[1]
        # Each W[i, j] is an (s1, s2) window of the kernel K
        W[i, j] = K[n-x:n-x+S.shape[0], n-y:n-y+S.shape[1]]

D = numpy.tensordot(W,S,2)
print(S)
print()
print(D)
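
Continuing the example above, here is a sketch of a variant that never
materializes W at all: take each slice of K on the fly and reduce it
against S. It is slower (pure Python loops), but the only extra memory
is one S-sized temporary per step:

# Same result without building W: slice K on the fly
D2 = numpy.empty(D.shape)
for i in range(D.shape[0]):
    for j in range(D.shape[1]):
        x = i * S.shape[0] // D.shape[0]
        y = j * S.shape[1] // D.shape[1]
        D2[i, j] = (K[n-x:n-x+S.shape[0], n-y:n-y+S.shape[1]] * S).sum()
assert numpy.allclose(D, D2)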




