Parallel Python

Nick Maclaren nmm1 at cus.cam.ac.uk
Thu Jan 11 05:44:12 EST 2007


In article <eo528r$jq3$1 at news.albasani.net>,
robert <no-spam at no-spam-no-spam.invalid> writes:
|> 
|> Most threads on this planet are not used for number crunching jobs,
|> but for "organization of execution".

That is true, and it is effectively what POSIX and Microsoft threads
are suitable for, though even there with reservations.

|> Things like MPI, IPC are just for the area of "small message, big job"
|> - typically sci number crunching, where you collect the results "at
|> the end of day". It's more a slow-network technique.

That is completely false.  Most dedicated HPC systems use MPI for high
levels of message passing over high-speed networks.
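
To give a concrete picture, here is a minimal point-to-point sketch
using mpi4py (the library choice is my assumption, not something the
original poster mentioned); the message here is small, but nothing
stops such exchanges from happening millions of times over the
interconnect:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # The Python object is pickled (serialised) and sent to rank 1.
        comm.send({'step': 1, 'values': [1.0, 2.0, 3.0]}, dest=1, tag=0)
    elif rank == 1:
        msg = comm.recv(source=0, tag=0)
        print('rank 1 received', msg)

Run it with something like "mpiexec -n 2 python example.py"; on a
dedicated HPC system the same code runs over the high-speed network.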

|> > They use it for the communication, but don't expose it to the
|> > programmer.  It is therefore easy to put the processes on different
|> > CPUs, and get the memory consistency right.
|> 
|> Thus communicated data is "serialized" - not directly used as with
|> threads or with custom shared memory techniques like POSH object
|> sharing.

Communicated data is not used as directly with threads as you might
think.  Even POSIX and Microsoft threads require synchronisation
primitives, and threading models like OpenMP and BSP have explicit
control.
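
Even a trivial shared counter in Python needs such a primitive; a
minimal sketch with the standard threading module:

    import threading

    counter = 0
    lock = threading.Lock()

    def worker(iterations):
        global counter
        for _ in range(iterations):
            # Explicit synchronisation primitive protecting the update.
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker, args=(100000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)   # 400000, but only because every update is locked

Without the lock the increments race and the final value is
unpredictable.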

Also, MPI has asynchronous (non-blocking) communication.
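
A sketch of that too, again assuming mpi4py: isend/irecv return
request objects, so communication can overlap with computation until
wait() is called:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        req = comm.isend(list(range(100)), dest=1, tag=7)
        # ... do other work while the send is in flight ...
        req.wait()              # complete the non-blocking send
    elif rank == 1:
        req = comm.irecv(source=0, tag=7)
        # ... do other work ...
        data = req.wait()       # completes and returns the message
        print('rank 1 received', len(data), 'items')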


Regards,
Nick Maclaren.
