Python/SWIG: Namespace collisions in multiple modules.

Jakob Schiotz schiotz1 at yahoo.com
Mon May 22 11:57:16 EDT 2000


--- Konrad Hinsen <hinsen at cnrs-orleans.fr> wrote:
	[ .... ]
> In my experience, the only portable solution is to have all modules
> that directly call MPI routines linked statically with the modified
> Python executable. This is the solution I have adopted for the MPI
> module in ScientificPython
> (ftp://dirac.cnrs-orleans.fr/pub/ScientificPython/ScientificPython-2.1.1.tar.gz).
> To make life easier for MPI application programs, ScientificPython
> provides a complete C API to its functions which includes more or less
> direct MPI calls as well as higher-level functions. This means that
> although ScientificPython requires a special Python executable (which
> is produced automatically during installation), application modules
> can use MPI from shared libraries without any problems; they needn't
> even be linked with the MPI library.

Dear Konrad Hinsen,

I have been looking at the code in Scientific.MPI, and it looks like
you have thought carefully about the design.  :-)

I am now considering three options:
1) Proceed with my own MPI module, using tricks from Scientific.MPI.
Undoubtedly I'll learn the most by reinventing the wheel :-)
2) Use Scientific.MPI as it is.  I am afraid I would miss some
functionality, such as nonblocking send/receive and reduce operations.
3) Extend Scientific.MPI with those functions.  In that case I will, of
course, send whatever I write back to you.

Before I proceed, I have a few questions for you:

The functions in the C API are called PyMPI_XXXX; some of them are just
wrappers around the usual MPI_XXXX functions (but with some reordering
of the argument list).  If I make direct calls to MPI from a compiled
library, do I have to go through these wrappers, or can I call MPI_XXXX
directly?  The problem is that I am SWIGging a large application, and I
do not want to rewrite every call to MPI (the reordering of the
arguments in the PyMPI_XXXX functions makes it harder to get away with
simple macros).  I understand that the C API functionality is necessary
if the MPI module is dynamically loaded, but if I build a static
executable with Scientific.MPI as you recommend, can I then use the
"ordinary" MPI calls?
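To make the wrapping problem concrete, here is a small plain-Python
sketch (no MPI needed).  PyMPI_send below is a hypothetical stand-in
for a wrapper that moved the communicator to the front of the argument
list; it is not the real Scientific.MPI API.

```python
def PyMPI_send(comm, buf, count, dest):
    """Hypothetical wrapper whose argument order differs from the
    original: the communicator has been moved to the front."""
    return (comm, buf, count, dest)

# A bare alias keeps the *new* argument order, so existing call sites
# written against the old order would pass arguments in the wrong
# positions:
MPI_send_alias = PyMPI_send

# A small adapter restores the original order, which is what every
# existing call site in a large application expects:
def MPI_send(buf, count, dest, comm):
    return PyMPI_send(comm, buf, count, dest)

# The adapter delivers the arguments to the wrapper in its new order.
assert MPI_send("data", 3, 1, "world") == ("world", "data", 3, 1)
```

In C, the same adapter would have to be a macro or an inline shim per
wrapped function, which is exactly the rewriting burden described above.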

You have chosen to make all MPI functions methods of the communicator
object in Python.  Is there a compelling reason for this, apart from
being object-oriented?  Not that I have anything against it; I was just
wondering.
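For readers following along, here is a sketch of the two styles being
contrasted.  The class and method names are invented for illustration
and are not the actual Scientific.MPI API.

```python
class Communicator:
    """Mock communicator: MPI operations exposed as methods."""
    def __init__(self, rank, size):
        self.rank = rank
        self.size = size

    def send(self, data, dest, tag=0):
        # A real binding would call MPI_Send here; we just record the call.
        return ("send", dest, tag, data)

# Object-oriented style: the communicator carries its own state, so
# every operation is implicitly tied to the right communicator.
world = Communicator(rank=0, size=4)
result = world.send([1, 2, 3], dest=1)
assert result == ("send", 1, 0, [1, 2, 3])

# Module-function style: the communicator is an explicit argument,
# closer to the C-level MPI_Send(..., comm) convention.
def send(comm, data, dest, tag=0):
    return comm.send(data, dest, tag)

assert send(world, [1, 2, 3], dest=1) == result
```

One practical advantage of the method style is that code written
against one communicator can be reused with another simply by passing
a different object.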

Best regards,

Jakob



=====
Jakob Schiotz, CAMP and Department of Physics, Tech. Univ. of Denmark,
DK-2800 Lyngby, Denmark.  http://www.fysik.dtu.dk/~schiotz/
This email address is used for newsgroups and mailing lists
(spam protection).  Official email: schiotz @ fysik . dtu . dk
When spammed too much, I'll move on to schiotz2 at yahoo.com




