[SciPy-user] can NumPy use parallel linear algebra libraries?

Robert Kern robert.kern at gmail.com
Wed May 14 17:52:36 EDT 2008


On Wed, May 14, 2008 at 4:41 PM, Camillo Lafleche
<camillo.lafleche at yahoo.com> wrote:
> Hi!
> Besides some discussions about mpi4py for SciPy, I couldn't find much
> information about whether SciPy is ready to use parallel numerical algorithms.

Nothing in scipy has been particularly tailored for parallel computation, no.

> With pBLAS (http://www.netlib.org/scalapack/pblas_qref.html) a parallel
> linear algebra library is available. Because NumPy is built on top of BLAS,
> I wonder whether you could accelerate cluster computations by using pBLAS?
> Linear algebra routines at a higher level than (p)BLAS should give the same
> advantages for NumPy functions like solve() and svd().

Unfortunately, pBLAS is not an implementation of the BLAS interfaces
which we use. Rather, it is a different set of interfaces covering the
same functionality, but with the obvious additions to the subroutine
signatures to describe the distributed matrices.
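To make the mismatch concrete, here is a minimal sketch (the routine
names below are taken from the BLAS and pBLAS quick references; the
distribution details are only summarized in the comments):

import numpy as np

a = np.ones((500, 500))
b = np.ones((500, 500))

# When NumPy is linked against a BLAS, this call ends up in the serial
# routine
#     DGEMM(TRANSA, TRANSB, M, N, K, ALPHA, A, LDA, B, LDB, BETA, C, LDC)
# The pBLAS counterpart, PDGEMM, instead takes index offsets (IA, JA, ...)
# and descriptor arrays (DESCA, DESCB, DESCC) describing how each matrix
# is laid out over the process grid, so it cannot simply be dropped in
# behind the existing BLAS calls.
c = np.dot(a, b)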

> Am I naive if I assume that small changes in the libraries used for the
> NumPy compilation can enable parallel computing? Or is this option already
> available?

Not across processes or machines, no. ATLAS can be compiled to use
threads, though.
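
If you want to see which BLAS/LAPACK your NumPy build is actually
using, a quick check (the exact output depends on how NumPy was built):

import numpy as np

# Prints the BLAS/LAPACK libraries NumPy was compiled against; an ATLAS
# build will list the atlas libraries here. Any threading comes from how
# that library itself was built, not from NumPy.
np.show_config()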

> I'd be grateful for any information about how to perform efficient
> linear algebra with NumPy on an MPI cluster.

Brian Granger is working on a distributed array type which can sit on
top of MPI. You may also want to look at petsc4py and slepc4py (which
uses petsc4py). PETSc's high-level focus is parallel PDEs, but it
includes many lower-level tools for parallel linear algebra.
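
As a very rough sketch of what the petsc4py style looks like (untested
here, written from memory of the PETSc KSP examples; you would run it
under mpiexec so that each process owns a contiguous block of rows):

from petsc4py import PETSc

n = 100  # global problem size

# Distributed sparse (AIJ) matrix; PETSc chooses the row distribution.
A = PETSc.Mat().createAIJ([n, n])
A.setUp()

rstart, rend = A.getOwnershipRange()
for i in range(rstart, rend):
    # Simple 1-D Laplacian stencil, assembled row by row on each process.
    if i > 0:
        A.setValue(i, i - 1, -1.0)
    if i < n - 1:
        A.setValue(i, i + 1, -1.0)
    A.setValue(i, i, 2.0)
A.assemblyBegin()
A.assemblyEnd()

# Distributed vectors for the right-hand side and the solution.
x, b = A.getVecs()
b.set(1.0)

# Krylov solver; defaults can be overridden from the command line.
ksp = PETSc.KSP().create()
ksp.setOperators(A)
ksp.setFromOptions()
ksp.solve(b, x)

Something like "mpiexec -n 4 python script.py" then runs the assembly
and the solve in parallel across four processes.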

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth."
 -- Umberto Eco
