[SciPy-dev] Scipy-dev Digest, Vol 49, Issue 14

Nathan Bell wnbell at gmail.com
Tue Nov 27 16:52:33 EST 2007


On Nov 27, 2007 2:02 PM, Anand Patil <anand.prabhakar.patil at gmail.com> wrote:
> Of course, and it's great for general sparse matrices but I didn't see
> anything in there for triangular matrices. I didn't see anything in linsolve
> either, though quite a bit could no doubt be extracted from SuperLU.

Do you just want to backsolve with a triangular matrix?  If so, that
could be added to sparsetools without much trouble.  You'd still need
to decide where to expose the functionality, and how to handle
potential errors (e.g. zero diagonals), but the backend would be
simple.
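
For instance, a lower-triangular forward solve over CSR storage might
look something like the sketch below (the names are made up, not the
actual sparsetools API, and it assumes the diagonal entries are stored
and nonzero):

    // Sketch of a lower-triangular solve L*x = b over CSR arrays
    // (Ap = row pointers, Aj = column indices, Ax = values).
    // Assumes every diagonal entry is present and nonzero.
    template <class I, class T>
    void csr_lower_solve(const I n_row,
                         const I Ap[], const I Aj[], const T Ax[],
                         const T b[], T x[])
    {
        for (I i = 0; i < n_row; i++) {
            T sum  = b[i];
            T diag = 0;
            for (I jj = Ap[i]; jj < Ap[i+1]; jj++) {
                const I j = Aj[jj];
                if (j < i)
                    sum -= Ax[jj] * x[j];   // subtract already-solved terms
                else if (j == i)
                    diag = Ax[jj];          // remember the diagonal
            }
            x[i] = sum / diag;   // a zero diagonal is the error case above
        }
    }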

> Also, I just like working with the BLAS rather than higher-level interfaces
> when it's code that I'm going to reuse, so as to have more control over when
> things get overwritten as opposed to copied, etc.

Funny you should mention that :)

I was planning to change sparsetools to be more BLAS-like where
possible.  Currently the library returns newly allocated memory for
most operations instead of accepting a preallocated array.  As you
suggest, mimicking BLAS's y = a*A*x + b*y instead of the current y =
A*x is helpful in some situations.
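
A BLAS-like CSR matvec with a caller-allocated output could be as
simple as this (again just a sketch, not the current interface):

    // Sketch: y <- a*A*x + b*y over CSR arrays, writing into a
    // preallocated y instead of returning newly allocated memory.
    template <class I, class T>
    void csr_matvec(const I n_row,
                    const I Ap[], const I Aj[], const T Ax[],
                    const T a, const T x[], const T b, T y[])
    {
        for (I i = 0; i < n_row; i++) {
            T sum = 0;
            for (I jj = Ap[i]; jj < Ap[i+1]; jj++)
                sum += Ax[jj] * x[Aj[jj]];
            y[i] = a * sum + b * y[i];   // no allocation, no copy
        }
    }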
 The "level 3" operations such as matrix matrix products are trickier
since you can't anticipate the memory cost of the output in advance.
I'm going to see if there's any benefit to breaking these operations
up into two pass algorithms (like the SMMP algorithm does).
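
The idea would be a first pass that only counts the nonzeros of each
output row, so the result arrays can be allocated exactly once before
a second pass fills them in.  Roughly (illustrative names, SMMP-style):

    #include <vector>

    // Pass 1 of C = A*B: compute the row pointer array Cp so that
    // Cj/Cx can be sized exactly before any values are computed.
    template <class I>
    void csr_matmat_pass1(const I n_row, const I n_col,
                          const I Ap[], const I Aj[],
                          const I Bp[], const I Bj[],
                          I Cp[])
    {
        std::vector<I> mask(n_col, static_cast<I>(-1)); // last row to touch column k
        Cp[0] = 0;
        for (I i = 0; i < n_row; i++) {
            I row_nnz = 0;
            for (I jj = Ap[i]; jj < Ap[i+1]; jj++) {
                const I j = Aj[jj];
                for (I kk = Bp[j]; kk < Bp[j+1]; kk++) {
                    const I k = Bj[kk];
                    if (mask[k] != i) {   // first time column k appears in row i
                        mask[k] = i;
                        row_nnz++;
                    }
                }
            }
            Cp[i+1] = Cp[i] + row_nnz;
        }
    }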

Now that most compilers support OpenMP, I'd also like to parallelize
matrix-vector multiplication and perhaps other operations.  Block
CSR/CSC matrices, diagonal matrices, and some simple benchmarking are
other items on the todo list.
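
Since the rows of a CSR matvec are independent, the parallel version
should amount to little more than a pragma on the outer loop; a rough
sketch:

    // Sketch of an OpenMP-parallel CSR matvec, y <- A*x.  Each row is
    // independent, so the outer loop splits cleanly across threads.
    template <class I, class T>
    void csr_matvec_omp(const I n_row,
                        const I Ap[], const I Aj[], const T Ax[],
                        const T x[], T y[])
    {
        #pragma omp parallel for
        for (I i = 0; i < n_row; i++) {
            T sum = 0;
            for (I jj = Ap[i]; jj < Ap[i+1]; jj++)
                sum += Ax[jj] * x[Aj[jj]];
            y[i] = sum;
        }
    }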

Anyway, if you'd like to improve sparsetools I'd greatly appreciate
your help and advice.

-- 
Nathan Bell wnbell at gmail.com
