Critical sections and mutexes

brueckd at tbye.com
Wed Oct 24 16:06:50 EDT 2001


On Wed, 24 Oct 2001, Cliff Wells wrote:

> True, I may have overstated it a bit; however, I would expect that the
> typical multi-threaded application is going to do things that require more
> than a single line of Python to manipulate data

You might *expect* that, but it often turns out not to be the case. :) It
all depends on the type of program you're writing, but I've found that
very often (maybe even 50% of the time or more) multithreaded apps get by
just fine without additional work. Even a few really large programs boiled
down to unidirectional work queues. In those cases I often use a semaphore
as a "work ready" signal, but that has nothing to do with maintaing data
integrity and everything to do with performance (avoiding busy waits).
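
Roughly what I mean by that, as an untested sketch (work_list, work_ready,
producer, consumer, and handle_item are all made-up names):

    import threading

    work_list = []                       # shared, unidirectional work queue
    work_ready = threading.Semaphore(0)  # counts items available to consumers

    def producer(item):
        work_list.append(item)    # append is atomic in CPython, no lock needed
        work_ready.release()      # signal "work ready" -- wakes one consumer

    def consumer():
        while 1:
            work_ready.acquire()  # sleep here instead of busy-waiting
            item = work_list.pop(0)
            handle_item(item)     # whatever the app actually does with it

The semaphore is there purely so consumers block cheaply when the list is
empty; the list operations themselves need no extra locking.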

>, i.e. len() prior to append()
> or pop() or whatever.

Hence the try..pop..except construct (sketched below). As for checking the
length before appending, it's often the case that there is no firm
requirement on max list size, only that you don't want it to be too big
(i.e., it's a fuzzy requirement). That being true, the worst case for a
non-locking implementation is a list that has (n-1) too many objects in it,
where n is the number of producers.
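
In code, the try..pop..except construct is just something like (again a
sketch; get_work is an invented name):

    def get_work(work_list):
        try:
            return work_list.pop(0)   # pop itself is atomic
        except IndexError:
            return None               # list was empty -- no work right now

No len() check beforehand, so there's no window for another thread to empty
the list between the check and the pop.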

For example, if you have a work queue that you don't want to grow to some
extreme length, you might decide to cap its size at 1000 elements. With 10
threads adding work to the list, occasionally they'll drop or hold a
packet of work too long, or occasionally your list will grow to 1009
elements. No big deal: the drop/hold problem exists regardless of any sort
of locking, and you don't care that your list is 9 elements too big,
because your requirement was simply "don't let it grow without bounds".
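
Concretely, the unlocked check could look like this (a sketch; MAX_SIZE and
add_work are invented names):

    MAX_SIZE = 1000

    def add_work(work_list, item):
        if len(work_list) < MAX_SIZE:   # unlocked check: may race, that's fine
            work_list.append(item)
            return 1
        return 0    # "full enough" -- caller drops or holds the item for now

If all 10 producers see len() == 999 at the same instant and all append, the
list peaks at 1009 elements -- the (n-1) overshoot above, still well within
the fuzzy requirement.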

-Dave




