Avoiding deadlocks in concurrent programming

Eloff eloff777 at yahoo.com
Wed Jun 22 17:09:42 EDT 2005


Hi Paul,

>Do you mean a few records of 20+ MB each, or millions of records of a
>few dozen bytes, or what?

Well, they're objects with lists, dictionaries, data members, and other
objects inside them. Some are very large, maybe bigger than 20MB, while
others are small and very numerous (a hundred thousand tiny objects in
a large list). Some of the large objects could have many locks inside
them to allow simultaneous access to different parts, while others
would need to be accessed as a unit.
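
To make that concrete (the class and attribute names below are just an
illustration, not taken from the actual system), a large object might
carry one lock per independently-updated part, while a small object
simply locks itself as a whole:

    import threading

    class LargeRecord:
        """Hypothetical large object: separate locks guard parts that
        can be updated independently of each other."""
        def __init__(self):
            self.header_lock = threading.Lock()
            self.items_lock = threading.Lock()
            self.header = {}
            self.items = []

        def set_header(self, key, value):
            with self.header_lock:   # other threads may still touch items
                self.header[key] = value

        def append_item(self, item):
            with self.items_lock:
                self.items.append(item)

    class SmallRecord:
        """Hypothetical small object that has to be read and written
        as a single unit."""
        def __init__(self, data):
            self.lock = threading.Lock()
            self.data = dict(data)

        def update(self, **fields):
            with self.lock:          # the whole object is one critical section
                self.data.update(fields)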

>If the 100 threads are blocked waiting for the lock, they shouldn't
>get awakened until the lock is released.  So this approach is
>reasonable if you can minimize the lock time for each transaction.

Now that is interesting, because if 100 clients have to go through the
system in a second, the server is clearly capable of pushing 100
clients through per second, and it doesn't matter whether they all go
through "at once" or one at a time, so long as nobody gets stuck
waiting much longer than a few seconds. It would be very simple and
painless for me to send them all through one at a time. It is also
possible that certain objects are never accessed in the same action,
and those could get separate locks as an optimization (this would
require careful analysis of the different actions).
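
A minimal sketch of that "one at a time" approach, with made-up names
(perform_action and the action callable are assumptions, not the real
server's API): one global lock serializes every action, so the waiting
threads simply drain through it in turn, and deadlock is impossible
because there is only one lock to wait on.

    import threading

    # One global lock for the whole data store: every action is
    # serialized, so there is nothing else to deadlock against, and
    # throughput stays acceptable as long as each action's critical
    # section is short.
    store_lock = threading.Lock()

    def perform_action(action, store):
        with store_lock:
            return action(store)   # the entire action runs under the one lock

If some groups of objects really are never touched by the same action,
each group could get its own lock instead; the price is that any action
which does end up needing two of those groups must acquire their locks
in one fixed global order, or the deadlock problem comes straight back.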

>You're doing what every serious database implementation needs to do ...
>Are you sure you don't want to just use an RDBMS?

It was considered, but we decided that abstracting the data into tables
to be manipulated with SQL queries is substantially more complex for
the programmer, and far too expensive for the system, since the average
action would require 20-100 queries.

Thanks,
-Dan



