Python threading (was: Re: global interpreter lock not working as it should)

Bengt Richter bokr at oz.net
Sat Aug 3 14:39:41 EDT 2002


On Sat, 03 Aug 2002 09:58:51 +0100, Jonathan Hogg <jonathan at onegoodidea.com> wrote:

>On 2/8/2002 20:38, in article
>ddc19db7.0208021138.5e3d1a61 at posting.google.com, "Armin Steinhoff"
><a-steinhoff at web.de> wrote:
>
>> If you run the Python interpreter with FIFO or RR scheduling ... the
>> following code from ceval.c make no sense, because the priority of the
>> interpreter is static.
>
>Now I'm confused as to what you mean. The interpreter in Python is not a
I think he is implicitly referring to whatever OS thread is executing the
interpreter, but since that can change, the priority can change, so the
"priority of [the thread executing] the intepreter" can't be assumed
to be "static." So I don't know what he's thinking either.

>thread and thus has no priority of any kind. The interpreter in Python is
>effectively a critical region of code. Sections of the interpreter that
It's critical and protected by the interpreter lock, but (as in your description
below of a medium-priority thread preempting a lower one) if a clock/timer interrupt
fires and says a time slice has expired, I presume the currently running (OS)
thread may be suspended even though it is inside the critical section and holding
the interpreter lock. That would let other non-blocked threads run up to a time
slice apiece, and then execution would resume within the critical section. But
there is apparently nothing saying "hurry up, get out of the way, and release
the lock in deference to waiting higher-priority threads."

The cpu-bound byte codes will continue to be executed until ten[1] of them have
been done, and only then will the lock be released for a possible context switch,
even if there are higher-priority threads waiting for the lock. Maybe the current
checkinterval count should be slammed to zero when a higher-priority thread blocks
on the interpreter lock. OTOH, for threads of equal priority, ten ordinary byte codes
get executed in a hurry, so the periodic release might be real overhead, unless
releasing and reacquiring the lock (and context switches when other threads are
ready) are very efficient. Seems like one could peek at whether any threads were
waiting, and just skip the release/reacquire if there were none. Maybe that's a
reason to wrap that lock mechanism in a custom object with a little convenient
state and some methods. That could handle zapping the countdown also.
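The custom wrapper suggested above might look something like the following
pure-Python sketch. To be clear, CPython's real GIL lives in C inside the
interpreter; the class name, the waiter count, and the maybe_yield method here
are all hypothetical, just to illustrate skipping the release/reacquire when
nobody is waiting:

```python
import threading

class YieldingLock:
    """Hypothetical sketch of a GIL-like lock that tracks waiters, so the
    holder can skip a pointless release/reacquire when nobody is waiting."""

    def __init__(self, checkinterval=10):
        self._lock = threading.Lock()
        self._meta = threading.Lock()        # protects the waiter count
        self._waiters = 0
        self.checkinterval = checkinterval
        self.countdown = checkinterval

    def acquire(self):
        with self._meta:
            self._waiters += 1
        self._lock.acquire()                 # may block here
        with self._meta:
            self._waiters -= 1
        self.countdown = self.checkinterval  # fresh countdown for new holder

    def release(self):
        self._lock.release()

    def maybe_yield(self):
        """Called once per 'byte code' by the holder; when the countdown
        expires, yield the lock only if someone is actually waiting."""
        self.countdown -= 1
        if self.countdown > 0:
            return False
        self.countdown = self.checkinterval
        with self._meta:
            anyone_waiting = self._waiters > 0
        if anyone_waiting:
            self._lock.release()             # give waiters a chance to run
            self.acquire()
            return True
        return False                         # skip the release/reacquire
```

With no waiters, the countdown expires and resets without ever touching the
underlying lock, which is the saving being speculated about.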

>might block are placed outside of the region so that other threads can enter
>it while that thread is blocked. Similarly, every thread is forced
>periodically to leave the region and re-enter it in order to allow the
>thread scheduler to re-schedule as necessary (the piece you posted).
[1] That periodic yield is the key to switching away from a cpu-bound thread
to another blocking on the GIL. It looks like it happens every 10 byte-code
instructions, unless set to something else by sys.setcheckinterval(something_else).
Ideally it seems it should depend on actual time and the relative priority of
waiting threads, not just an absolute countdown.
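For what it's worth, here is the knob in question. In the CPython of this era
the setting is a byte-code count via sys.setcheckinterval(); as it happens,
much later CPython releases (3.2+) replaced it with a time-based
sys.setswitchinterval(seconds), defaulting to 5 ms, which is close to the
"dependent on actual time" behaviour suggested here (the hasattr check below
just makes the snippet run on either vintage):

```python
import sys

# Inspect/set the GIL yield policy, whichever API this interpreter has.
if hasattr(sys, "getcheckinterval"):
    # Older CPython: countdown measured in byte-code instructions.
    print("check interval (byte codes):", sys.getcheckinterval())
else:
    # CPython 3.2+: time-based switch interval in seconds.
    sys.setswitchinterval(0.005)      # the documented 5 ms default
    print("switch interval (seconds):", sys.getswitchinterval())
```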
>
>With FIFO or RR scheduling, and assuming no blocking I/O, all threads will
>be available to run when the GIL is released. If the thread previously
>running before the GIL was released still has timeslice left it will be
>allowed to continue. If it has run out of timeslice then the next thread in
>line will be switched to. It will acquire the GIL and begin executing.
>
>There is nothing wrong with the code in ceval.c in this regard.
>
>> That means, if a Python thread with a higher priority acquires the
>> GIL, the running thread with the lower priority will inherit the
>> priority of the other thread (if priority inheritance is supported, if
>> not you will have the case of priority inversion ...) in order to
>> complete its task as soon as possible.
>> In such a case we will see a context switch between the release_lock
>> and the acquire_lock calls.
>
>Priority inversion is actually extremely unlikely in Python because the main
>shared resources is the GIL. There is only one of them so all threads that
>are not waiting on I/O require it. Therefore a medium priority thread will
>be unable to pre-empt the lower priority thread (or more accurately it will
>pre-empt the lower priority thread, immediately attempt to obtain the GIL,
>and block allowing the lower-priority thread which holds the GIL to
>continue) until the GIL is released, at which point the highest priority
Yes, but if a higher-priority thread is waiting for the GIL, IWT that ideally
the lower-priority thread should not complete a full interval countdown before
it releases the GIL. Whether some OS might adjust its priority to that of the
highest-priority thread blocking on its held lock(s), to help it finish, is
another matter; it would still be good to cut the countdown short, IMO. Ideally
always a bit before time-slice expiration (to avoid, most of the time, a full
round of scheduling before yielding), and immediately when a waiter appears
with, or becomes, higher priority.
>thread will be scheduled.
>
>> Hope it's now clear how strong the thread behavior depends on the OS
>> and the used scheduling strategies.
>
>But that's pretty much what everyone was trying to say all along. Python's
>threads are just like any other thread on the system and rely on the native
>thread scheduler to do what it thinks is best. Because of this, there is
>largely nothing that Python can (or indeed should) do to affect this
>scheduling.
>
Seems like the countdown and voluntary yield could be modified, though.
The OS can't know about that, so it's up to Python to define policy there.
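The policy change being suggested (zap the countdown the moment a higher-priority
waiter appears) could be sketched in pure Python. Note the priority numbers here
are entirely hypothetical, supplied by the caller, since Python's threading module
exposes no thread priorities; this is an illustration of the idea, not anything
CPython actually does:

```python
import threading

class PriorityAwareLock:
    """Hypothetical sketch: a GIL-like lock where a waiter registers its
    priority, and the holder's countdown is zapped to zero as soon as a
    higher-priority waiter shows up."""

    def __init__(self, checkinterval=10):
        self._lock = threading.Lock()
        self._meta = threading.Lock()        # protects the bookkeeping below
        self._waiting = []                   # priorities of blocked threads
        self.checkinterval = checkinterval
        self.countdown = checkinterval
        self.holder_priority = 0

    def acquire(self, priority=0):
        with self._meta:
            self._waiting.append(priority)
            if priority > self.holder_priority:
                self.countdown = 0           # zap the holder's countdown
        self._lock.acquire()                 # may block here
        with self._meta:
            self._waiting.remove(priority)
            self.holder_priority = priority
            self.countdown = self.checkinterval

    def release(self):
        self._lock.release()
```

A holder that checks self.countdown once per byte code (as ceval.c does with
its own counter) would then yield immediately when outranked, instead of
finishing the full interval.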

Regards,
Bengt Richter
