multithreading concept

sturlamolden sturlamolden at yahoo.no
Fri Feb 9 17:36:22 EST 2007


On Feb 9, 4:00 pm, "S.Mohideen" <m... at blackhole.labs.rootshell.ws>
wrote:

> I am sorry if I sound foolish.
> Suppose I split my Net application code using Parallel Python into several
> processes based upon the number of CPUs available. That means a single socket
> descriptor is shared across all the processes. Can parallelism be achieved by
> having the processes send/recv on that single socket, multiplexed across all
> of them? I haven't tried it yet - I would like to hear about any past
> experience related to this.

Is CPU or network the limiting factor in your application? There are
two kinds of problems: you have a 'CPU-bound' problem if you need to
worry about flops, and an 'I/O-bound' problem if you worry about bits
per second.

If your application is I/O bound, adding more CPUs to the task will
not help. The network connection does not become any faster just
because two CPUs share the few computations that need to be performed.
Python releases the GIL around all I/O operations in the standard
library, such as reading from or writing to a socket. If this is what
you need to 'parallelize', you can just use threads and ignore the
GIL. Python's threads can handle concurrent I/O perfectly well.
Remember that Google and YouTube use Python, and the GIL is not a show
stopper for them.
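
Here is a minimal sketch of what that looks like: each thread blocks on
its own network call, and because the GIL is released during connect,
sendall and recv, the threads overlap their waiting. The host list is
made up for illustration only.

    import socket
    import threading

    # Hypothetical hosts, just for illustration.
    HOSTS = ['www.python.org', 'www.example.com', 'www.wikipedia.org']

    def fetch(host):
        # The blocking calls below release the GIL, so the other
        # threads keep running while this one waits on the network.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect((host, 80))
        s.sendall(b'HEAD / HTTP/1.0\r\nHost: ' + host.encode() + b'\r\n\r\n')
        data = s.recv(4096)
        s.close()
        print('%s: %d bytes received' % (host, len(data)))

    threads = [threading.Thread(target=fetch, args=(h,)) for h in HOSTS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The threads spend almost all of their time waiting, not computing, so
one CPU is plenty.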

The GIL lets only one thread execute Python bytecode at a time, so a
single Python process cannot spread pure-Python computation across
more than one CPU. You need to get around this if the power of one CPU
or CPU core limits the speed of the application. This can be the case
in e.g. digital image processing, certain computer games, and
scientific programming. I have yet to see a CPU-bound 'Net
application', though.
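
The usual way around it is multiple processes, each with its own
interpreter and its own GIL. A minimal sketch, assuming the
multiprocessing module (standard library as of Python 2.6; Parallel
Python offers a similar pool-style API), with a deliberately CPU-heavy
toy function:

    import multiprocessing

    def count_primes(limit):
        # Naive trial division - pure CPU work, no I/O.
        count = 0
        for n in range(2, limit):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == '__main__':
        # Each chunk runs in a separate process with its own GIL,
        # so the work can spread across CPU cores.
        pool = multiprocessing.Pool()
        results = pool.map(count_primes, [50000] * 4)
        pool.close()
        pool.join()
        print(sum(results))

But again, this only pays off when the bottleneck really is the CPU.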

If you are running Windows: take a look at the CPU usage in the task
manager. Does it say that one of the CPUs is running at full speed for
longer periods of time? If not, there is nothing to be gained from
using multiple CPUs.
