Avoiding deadlocks in concurrent programming

Paul Rubin
Wed Jun 22 16:26:51 EDT 2005


"Eloff" <eloff777 at yahoo.com> writes:
> I have a shared series of objects in memory that may be > 100MB. Often
> to perform a task for a client several of these objects must be used.

Do you mean a few records of 20+ MB each, or millions of records of a
few dozen bytes, or what?

> However imagine what would happen if there's 100 clients in 100
> threads waiting for access to that lock. One could be almost finished
> with it, and then 100 threads could get switched in and out, all doing
> nothing since they're all waiting for the lock held by the one thread.

If the 100 threads are blocked waiting for the lock, they shouldn't
get woken up until the lock is released.  So this approach is
reasonable if you can keep the time each transaction holds the lock
short.
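
Concretely, that means holding the lock only while touching the shared
structure and doing the slow part of the work outside the critical
section.  A rough sketch of the idea (shared_objects, handle_request
and expensive_computation are made-up names, not anything from your
code):

    import threading

    shared_objects = {}            # the big shared structure
    lock = threading.Lock()

    def expensive_computation(working_set):
        # placeholder for the real per-client work
        return working_set

    def handle_request(keys):
        # Copy out just what this task needs while holding the lock...
        with lock:
            working_set = dict((k, shared_objects[k]) for k in keys)
        # ...then do the expensive work with no lock held.
        result = expensive_computation(working_set)
        # Re-acquire briefly to write the results back.
        with lock:
            shared_objects.update(result)
        return result

That way the waiting threads only queue up for the short copy-in and
copy-out steps, not for the whole computation.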

> No doubt this is a common problem, how would you people deal with it?

You're doing what every serious database implementation needs to do,
and whole books have been written about the various approaches.  One
approach is to use a transaction-and-rollback scheme, the way a
database does.  A lot depends on the particulars of what you're doing.
Are you sure you don't want to just use an RDBMS?
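
As a very small illustration of the record-and-roll-back idea (just a
sketch, nothing like what a real database does; Transaction and the
store dict are invented for the example):

    import threading

    _MISSING = object()

    class Transaction:
        # Remember each key's old value before changing it; if the task
        # fails partway through, restore everything and release the lock.
        def __init__(self, store, lock):
            self.store, self.lock, self.undo = store, lock, {}
        def __enter__(self):
            self.lock.acquire()
            return self
        def set(self, key, value):
            if key not in self.undo:
                self.undo[key] = self.store.get(key, _MISSING)
            self.store[key] = value
        def __exit__(self, exc_type, exc, tb):
            if exc_type is not None:        # roll back on any error
                for key, old in self.undo.items():
                    if old is _MISSING:
                        self.store.pop(key, None)
                    else:
                        self.store[key] = old
            self.lock.release()
            return False                    # let the exception propagate

    store, lock = {}, threading.Lock()
    with Transaction(store, lock) as t:
        t.set('a', 1)
        t.set('b', 2)   # if this line raised, 'a' would be restored too

That only covers the rollback part; the contention and lock-ordering
questions are where the book-length treatments come in.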


