Parallel Python

Nick Maclaren nmm1 at cus.cam.ac.uk
Thu Jan 11 11:58:19 EST 2007


In article <eo5gtj$6bs$1 at news.albasani.net>,
robert <no-spam at no-spam-no-spam.invalid> writes:
|> 
|> Thus there are different levels of parallelization:
|> 
|> 1 file/database based; multiple batch jobs
|> 2 Message Passing, IPC, RPC, ...
|> 3 Object Sharing 
|> 4 Sharing of global data space (Threads)
|> 5 Local parallelism / Vector computing, MMX, 3DNow,...
|> 
|> There are good reasons for all of these levels.

Well, yes, but calling them "levels" is misleading; they are really
different communication methods at much the same level.
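
As a rough side-by-side illustration of two of those methods (a sketch
added here, not part of the original exchange, and using the
multiprocessing module, which only entered the standard library in
Python 2.6): level 2 passes explicit messages between separate
processes, while level 4 has threads reading and writing the same
objects in one address space.

    import multiprocessing
    import threading

    def mp_worker(q):
        # Level 2: no shared state; the result travels as a message.
        q.put(sum(range(1000)))

    results = []
    def thread_worker():
        # Level 4: the threads share 'results' in one address space.
        results.append(sum(range(1000)))

    if __name__ == "__main__":
        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=mp_worker, args=(q,))
        p.start()
        print("via message passing:", q.get())
        p.join()

        t = threading.Thread(target=thread_worker)
        t.start()
        t.join()
        print("via shared data:    ", results[0])

The programming models differ far more than the line count suggests:
the process version has to serialise everything it communicates, while
the thread version has to worry about which accesses need locking.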

|> > This does not mean that MPI is inherently slower than threads, however,
|> > as there is overhead associated with thread synchronization as well.
|> 
|> Level 2 communication is slower; it is just that for selected applications it won't matter much.

That is false.  It used to be true, but that was a long time ago.  The
reasons why what seems to be a more heavyweight mechanism (message
passing) can be faster than an apparently lightweight one (data sharing)
are both subtle and complicated.
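
One concrete (and CPython-specific) illustration, again a sketch added
here rather than anything from the original thread: for CPU-bound work,
threads sharing the interpreter's global data space are serialised by
the global interpreter lock, whereas separate processes run genuinely
in parallel and pay communication costs only when they actually
exchange data.  The deeper, language-independent reasons include
cache-coherence traffic, false sharing and lock contention on shared
data, which message passing largely avoids.

    import time
    import threading
    import multiprocessing

    def burn(n=5000000):
        # Purely CPU-bound work, no I/O.
        s = 0
        for i in range(n):
            s += i
        return s

    def run(worker_type, label):
        # Start four CPU-bound workers and wait for all of them.
        start = time.time()
        workers = [worker_type(target=burn) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(label, round(time.time() - start, 2), "seconds")

    if __name__ == "__main__":
        run(threading.Thread, "threads (shared data):  ")
        run(multiprocessing.Process, "processes (no sharing): ")

On a multi-core machine the process version is typically several times
faster for this kind of work, despite being the apparently heavyweight
mechanism.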


Regards,
Nick Maclaren.


