A lock that prioritizes acquire()s?
MRAB
python at mrabarnett.plus.com
Wed Oct 24 15:41:37 EDT 2012
On 2012-10-24 19:54, David M Chess wrote:
>
> Okay, next silly question. :)
>
> We have a very simple multi-threaded system where a request comes in,
> starts running in a thread, and then (zero, one, or two times per
> request) gets to a serialization point, where the code does:
>
> with lock:
>     do_critical_section_stuff_that_might_take_awhile()
>
> and then continues.
>
> Which is almost the same as:
>
> lock.acquire()
> try:
>     do_critical_section_stuff_that_might_take_awhile()
> finally:
>     lock.release()
>
> Now we discover that It Would Be Nice if some requests got priority over
> others, as in something like:
>
> lock.acquire(importance=request.importance)
> try:
>     do_critical_section_stuff_that_might_take_awhile()
> finally:
>     lock.release()
>
> and when lock.release() occurs, the next thread that gets to run is one
> of the most important ones currently waiting in acquire() (that's the
> exciting new thing).
>
> Other requirements are that the code to do this be as simple as
> possible, and that it not mess anything else up. :)
>
> My first thought was something like a new lock-ish class that would do
> roughly:
>
> class PriorityLock(object):
>
>     def __init__(self):
>         self._lock = threading.Lock()
>         self._waiter_map = {}  # maps TIDs to importance
>
>     def acquire(self, importance=0):
>         this_thread = threading.currentThread()
>         self._waiter_map[this_thread] = importance  # I want in
>         while True:
>             self._lock.acquire()
>             if max(self._waiter_map.values()) <= importance:  # we win
>                 del self._waiter_map[this_thread]  # not waiting anymore
>                 return  # return with lock acquired
>             self._lock.release()  # We are not most impt: release/retry
>
>     def release(self):
>         self._lock.release()
>
> (Hope the mail doesn't garble that too badly.)
>
> Basically the acquire() method just immediately releases and tries again
> if it finds that someone more important is waiting.
>
> I think this is semantically correct, as long as the underlying lock
> implementation doesn't have starvation issues, and it's nice and simple,
> but on the other hand it looks eyerollingly inefficient.
>
> Seeking any thoughts on other/better ways to do this, or whether the
> inefficiency will be too eyerolling if we get say one request per second
> with an average service time a bit under a second but maximum service
> time well over a second, and most of them are importance zero, but every
> (many) seconds there will be one or two with higher importance.
>
Here's my take on it:

import threading
import time

class PriorityLock(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._waiter_queue = []
        self._queue_lock = threading.Lock()

    def acquire(self, importance=0):
        this_thread = threading.currentThread()
        # Add this thread to the queue.
        with self._queue_lock:
            self._waiter_queue.append((importance, this_thread))
            # Move the most important to the start.
            self._waiter_queue.sort(reverse=True, key=lambda pair: pair[0])
        # Acquire and retain the lock when this thread is at the start
        # of the queue.
        while True:
            self._lock.acquire()
            with self._queue_lock:
                if self._waiter_queue[0][1] == this_thread: # We win.
                    del self._waiter_queue[0] # Not waiting anymore.
                    return # Return with lock acquired.
            self._lock.release() # We are not most important: release and retry.
            time.sleep(0.01) # Give the other threads a chance.

    def release(self):
        self._lock.release()
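If the sleep/retry loop bothers you, here's another way you could do it (just a sketch, not tested in anger): block the waiters on a threading.Condition and keep the queue as a heap, so release() wakes everyone and only the most important waiter proceeds. No polling, and ties go to whoever asked first.

```python
import heapq
import itertools
import threading

class PriorityLock(object):
    """Lock whose release() hands over to the highest-importance waiter.

    Waiters sleep on a Condition instead of polling. Not reentrant.
    """

    def __init__(self):
        self._cond = threading.Condition(threading.Lock())
        self._counter = itertools.count()  # FIFO tie-breaker for equal importance
        self._queue = []  # heap of (-importance, seq, thread)
        self._owner = None

    def acquire(self, importance=0):
        this_thread = threading.current_thread()
        with self._cond:
            # Negate importance: heapq is a min-heap, we want the max first.
            heapq.heappush(self._queue,
                           (-importance, next(self._counter), this_thread))
            # Wait until the lock is free AND we're at the head of the heap.
            while self._owner is not None or self._queue[0][2] is not this_thread:
                self._cond.wait()
            heapq.heappop(self._queue)  # not waiting anymore
            self._owner = this_thread

    def release(self):
        with self._cond:
            self._owner = None
            self._cond.notify_all()  # wake waiters; only the heap head proceeds

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()
```

notify_all() wakes every waiter just to let one through, but with a handful of threads per second that's cheap, and it's simpler than tracking exactly which thread to notify().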