Avoiding deadlocks in concurrent programming

Steve Horsley steve.horsley at gmail.com
Wed Jun 22 19:01:55 EDT 2005


Eloff wrote:
> Hi Paul,
> 
>>If the 100 threads are blocked waiting for the lock, they shouldn't
>>get awakened until the lock is released.  So this approach is
>>reasonable if you can minimize the lock time for each transaction.
> 
> 
> Now that is interesting, because if 100 clients have to go through the
> system in a second, the server clearly is capable of sending 100
> clients through in a second, and it doesn't matter if they all go
> through "at once" or one at a time so long as nobody gets stuck waiting
> for much longer than a few seconds. It would be very simple and
> painless for me to send them all through one at a time. It is also
> possible that certain objects are never accessed in the same action,
> and those could have separate locks as an optimization (this would
> require careful analysis of the different actions.)
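
If you do go the separate-locks route, the usual way to keep it
deadlock-free is to always acquire the locks in one fixed global
order. A rough sketch (the object names and the run_action helper
are just made up for illustration, not taken from your code):

import threading

# One lock per shared object, always taken in sorted-name order so
# two actions touching the same pair of objects cannot deadlock.
locks = {
    "inventory": threading.Lock(),
    "orders": threading.Lock(),
}

def run_action(object_names, action):
    needed = sorted(object_names)   # the fixed ordering is the key point
    for name in needed:
        locks[name].acquire()
    try:
        action()
    finally:
        for name in reversed(needed):
            locks[name].release()

# e.g. run_action(["orders", "inventory"], do_the_transfer)

Any two actions then always lock their common objects in the same
order, so neither can end up waiting on the other.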

It is my understanding that Python's multithreading is done at
the interpreter level: the global interpreter lock (GIL) lets
only one thread execute Python bytecode at a time. In that case
you cannot have multiple threads running truly concurrently even
on a multi-CPU machine, so as long as you avoid doing I/O while
holding the lock, I don't think a single lock should cost you
any throughput. The backup thread may be an issue, though.
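
For example, a rough sketch of the single-lock pattern (the
handle_client and apply_transaction names here are hypothetical,
just to show where the lock would be held):

import threading

state_lock = threading.Lock()
shared_state = {}

def handle_client(conn):
    request = conn.recv(4096)        # blocking I/O, no lock held
    with state_lock:                 # short, in-memory critical section
        reply = apply_transaction(shared_state, request)
    conn.sendall(reply)              # more I/O, again without the lock

def apply_transaction(state, request):
    # Stand-in for the real transaction logic.
    state["last_request"] = request
    return b"OK\n"

The lock is only held for the in-memory update, so even with 100
threads queued behind it each one gets through almost immediately.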

Steve


