Python threading (was: Re: global interpreter lock not working as it should)

Bengt Richter bokr at oz.net
Thu Aug 8 12:23:29 EDT 2002


On 08 Aug 2002 09:19:26 +0200, martin at v.loewis.de (Martin v. Loewis) wrote:

>bokr at oz.net (Bengt Richter) writes:
>
>> If you're talking about the compute-bound situation, as we have been,
>> yes, but typically all is not computation. I don't expect you mean that
>> in general multithreading always slows down a *system* ;-)
>
>Compared to what? A single-threaded solution? I do think that
Yes, compared to a single-threaded system, when the programmer does not
have the patience or skill to satisfy all the if's you mention below ;-)

>multi-threading creates a higher CPU load, and if you manage not to
>block in system calls when there is work to do, and to avoid
Don't forget that a disk controller is effectively blocked, waiting for
attention, if you don't give it work when there is disk work to be done
(although that can be mitigated with OS/file-system readahead for sequential
access, etc.). So part of managing "not to block in system calls" may be getting
the disk controller started filling a new buffer in parallel with your single
thread as soon as it's ready, so that by the time you need the data, you won't block.
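
A rough double-buffering sketch of that idea (written for a modern Python;
the buffer size, queue depth, and names are only illustrative): a reader
thread keeps the next buffer filling while the main thread works on the
current one.

import queue
import threading

BUF_SIZE = 64 * 1024

def reader(path, buf_queue):
    # Background thread: keep handing buffers to the consumer so the
    # disk always has a read outstanding.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BUF_SIZE)
            buf_queue.put(chunk)        # blocks only if the queue is full
            if not chunk:               # empty bytes object marks EOF
                break

def consume(path):
    buf_queue = queue.Queue(maxsize=2)  # double buffering
    t = threading.Thread(target=reader, args=(path, buf_queue), daemon=True)
    t.start()
    total = 0
    while True:
        chunk = buf_queue.get()         # usually already filled by the reader
        if not chunk:
            break
        total += len(chunk)             # stand-in for the real computation
    t.join()
    return total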

In a single thread, the code to do that will likely be ugly and/or inefficient.
Polling is effectively a time-distributed busy wait, so if you need to poll
in order to keep I/O going, you are not really avoiding busy waiting, you are
just diluting it with added latency. And worse, if you do it by writing Python
code to poll, you will be hugely less efficient than letting ceval.c do it
in the byte-code loop, even if the latter is not as optimal as it could be.
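
For illustration only (a sketch in modern Python; handle() and
do_other_work() are hypothetical stand-ins, not anything from this thread),
a Python-level polling loop looks something like the following, and every
idle pass costs interpreter time plus up to the poll interval in latency
before new data is noticed:

import select
import socket

POLL_INTERVAL = 0.05    # seconds of added latency per idle pass

def handle(data):
    pass                # placeholder for real processing

def do_other_work():
    pass                # placeholder for the compute being interleaved

def poll_loop(sock):
    # Each pass spends interpreter time checking readiness -- a busy
    # wait spread out over time, as described above.
    sock.setblocking(False)
    while True:
        ready, _, _ = select.select([sock], [], [], POLL_INTERVAL)
        if ready:
            data = sock.recv(4096)
            if not data:
                break
            handle(data)
        else:
            do_other_work()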

>busy-waits, that a single-threaded application will have higher
>performance than an equivalent multi-threaded one.
Yes, but again, to avoid blocking you need pretty much vanilla sequential I/O,
so that the OS can anticipate your needs with readahead, and you need to be
compute-bound otherwise.
>
>Threads are for convenience, not for performance.
Yes, but in many situations convenience is crucial to getting programmers
to deal with the problem of managing parallel system activity at all, so that
at least one unblocked thread is available most of the time to keep the CPU busy.
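
The convenient version of the same thing is roughly one thread per blocking
source rather than a hand-rolled poll loop (again only a sketch with
illustrative names): CPython releases the GIL around blocking system calls
such as recv(), so whichever source has data ready gives you an unblocked
thread that can run while the others wait.

import threading
import socket

def serve_source(sock, results):
    # Each worker blocks in recv(); the GIL is released for the duration
    # of the blocking call, so other threads can use the CPU meanwhile.
    while True:
        data = sock.recv(4096)
        if not data:
            break
        results.append(len(data))   # stand-in for per-message handling

def run(sockets):
    results = []
    workers = [threading.Thread(target=serve_source, args=(s, results))
               for s in sockets]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results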

Of course you are right that there is nothing to be gained from chopping up
what would otherwise be an unbroken stream of computation ;-)

Regards,
Bengt Richter


