[issue4660] multiprocessing.JoinableQueue task_done() issue
Brian
report at bugs.python.org
Tue Dec 23 07:40:36 CET 2008
Brian <brian at merrells.org> added the comment:
Here are a few stabs at how this might be addressed.
1) As originally suggested, allow task_done() to block while waiting to
acquire _unfinished_tasks. This lets the put() process resume and
release _unfinished_tasks, at which point task_done() unblocks. No
harm, no foul, but you do lose some error checking (and maybe some
performance?).
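The blocking behaviour can be sketched with plain threading primitives standing in for the multiprocessing internals (the TaskCounter class itself is a hypothetical toy; only the _unfinished_tasks name follows the stdlib source):

import threading

class TaskCounter:
    def __init__(self):
        self._unfinished_tasks = threading.Semaphore(0)

    def put(self):
        # In the real code this release happens in the producer process
        # and may be delayed relative to the consumer's get()/task_done().
        self._unfinished_tasks.release()

    def task_done(self):
        # Option 1: block until the matching put() has released the
        # semaphore, instead of raising immediately on a failed acquire.
        self._unfinished_tasks.acquire()

counter = TaskCounter()
done = threading.Event()

def consumer():
    counter.task_done()   # arrives "too early": blocks instead of raising
    done.set()

t = threading.Thread(target=consumer)
t.start()
counter.put()             # the late release unblocks task_done()
t.join(timeout=5)
print(done.is_set())      # True

The cost is visible here too: a task_done() with no matching put() would block forever rather than raise, which is the lost error checking.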
2) One can't protect JoinableQueue.put() by simply acquiring _cond
before calling Queue.put(): a fixed-size queue will block when it is
full, causing deadlock because task_done() can't acquire _cond. The
most obvious solution would seem to be reimplementing
JoinableQueue.put() (rather than delegating to Queue.put()) so that self._unfinished_tasks.release() sits inside the protected section. Perhaps:
    def put(self, obj, block=True, timeout=None):
        assert not self._closed
        if not self._sem.acquire(block, timeout):
            raise Full
        self._notempty.acquire()
        self._cond.acquire()
        try:
            if self._thread is None:
                self._start_thread()
            self._buffer.append(obj)
            # Release while the locks are held, so a consumer cannot
            # observe the item before the semaphore has been released.
            self._unfinished_tasks.release()
            self._notempty.notify()
        finally:
            self._cond.release()
            self._notempty.release()
We may be able to get away with not acquiring _cond, as _notempty would
provide some protection. However, its relationship to get() isn't
entirely clear to me, so I am not sure whether this would be sufficient.
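The key property either lock gives us can be modeled with threads standing in for processes (hypothetical names, not the stdlib implementation): because the semaphore is released before the item becomes visible under the same lock the consumer uses, task_done() can never spuriously fail:

import collections
import threading

buffer = collections.deque()
unfinished = threading.Semaphore(0)
notempty = threading.Condition()

def put(obj):
    with notempty:
        unfinished.release()      # released before the item is visible
        buffer.append(obj)
        notempty.notify()

def get():
    with notempty:
        while not buffer:
            notempty.wait()
        return buffer.popleft()

def task_done():
    # Non-blocking acquire keeps the error check: it can only fail if
    # task_done() really is called more often than put().
    if not unfinished.acquire(blocking=False):
        raise ValueError('task_done() called too many times')

results = []

def consumer():
    get()
    task_done()                   # cannot fail: release preceded visibility
    results.append('ok')

t = threading.Thread(target=consumer)
t.start()
put('job')
t.join()
print(results)                    # ['ok']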
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue4660>
_______________________________________