[SciPy-user] Sparse with fast element-wise multiply?
David Warde-Farley
dwf at cs.toronto.edu
Thu Dec 20 03:56:27 EST 2007
On 17-Dec-07, at 11:41 PM, Nathan Bell wrote:
> Currently elementwise multiplication is exposed through A**B where A
> and B are csr_matrix or csc_matrix objects. You can expect similar
> performance to A+B.
Whoa, you're not kidding:
In [19]: time multiply_coo(k1,k2)
CPU times: user 0.77 s, sys: 0.08 s, total: 0.85 s
Wall time: 0.86
Out[19]:
<7083x7083 sparse matrix of type '<type 'numpy.float64'>'
with 24226 stored elements in COOrdinate format>
In [20]: time k1csr ** k2csr
CPU times: user 0.02 s, sys: 0.00 s, total: 0.02 s
Wall time: 0.02
Out[20]:
<7083x7083 sparse matrix of type '<type 'numpy.float64'>'
with 24226 stored elements in Compressed Sparse Row format>
Actually it's about five times faster than adding them. Probably
because addition has to produce the union of the two nonzero patterns,
whereas multiplication only produces the (typically sparser)
intersection.
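To see why the intersection is cheaper, here's a toy dict-of-keys model of sparse storage (just a sketch for intuition, not the actual CSR implementation):

```python
def sparse_add(a, b):
    """Addition touches the UNION of the two nonzero patterns."""
    keys = set(a) | set(b)
    return {k: a.get(k, 0.0) + b.get(k, 0.0) for k in keys}

def sparse_multiply(a, b):
    """Element-wise multiply only touches the INTERSECTION:
    an entry missing from either operand is zero in the result."""
    keys = set(a) & set(b)
    return {k: a[k] * b[k] for k in keys}

# Two sparse matrices as {(row, col): value} dicts.
a = {(0, 0): 1.0, (1, 2): 2.0, (3, 3): 4.0}
b = {(1, 2): 3.0, (2, 2): 5.0}

added = sparse_add(a, b)       # 4 entries: the union of patterns
product = sparse_multiply(a, b)  # 1 entry: only (1, 2) overlaps
```

When the two patterns barely overlap, the multiply loop does far less work than the add loop.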
> I don't know why ** was chosen, it was that way before I started
> working on scipy.sparse.
It seems sensible enough to me; I can't recall ever needing to
exponentiate a matrix (much less a sparse one), and if you want to do
any serious exponentiation it's usually cheaper (and, I'd expect, more
numerically stable) to diagonalize it once and exponentiate the
eigenvalues.
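For a symmetric matrix, the diagonalize-once approach looks roughly like this with dense NumPy arrays (a sketch for the dense case, not a sparse-aware implementation):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, so eigh applies

# Diagonalize once: A = V @ diag(w) @ V.T with orthonormal V.
w, V = np.linalg.eigh(A)

# Any power is now cheap: exponentiate the eigenvalues only.
A5 = V @ np.diag(w ** 5) @ V.T
```

Each additional power costs only another eigenvalue exponentiation plus two matrix products, instead of repeated full multiplications.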
Is this ** behaviour documented anywhere?
> I've added a .multiply() method to the sparse matrix base class
> that goes through csr_matrix:
> http://projects.scipy.org/scipy/scipy/changeset/3682
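For reference, usage of the .multiply() method looks like this (a sketch assuming two same-shaped CSR matrices; the matrices here are made up):

```python
import numpy as np
from scipy.sparse import csr_matrix

A = csr_matrix(np.array([[1.0, 0.0],
                         [2.0, 3.0]]))
B = csr_matrix(np.array([[4.0, 5.0],
                         [0.0, 6.0]]))

# Element-wise (Hadamard) product; only overlapping nonzeros survive.
C = A.multiply(B)
```

Entries where either operand is zero vanish from the result, so the product is at least as sparse as either input.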
Many thanks.
DWF