[issue34140] Possible multiprocessing deadlock when placing too many objects in Queue()

Antoine Pitrou report at bugs.python.org
Tue Jul 17 12:51:17 EDT 2018


Antoine Pitrou <pitrou at free.fr> added the comment:

The problem is you're joining the child processes before draining the queue in the parent.
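A minimal sketch of that failure mode and the fix (hypothetical worker and queue names; not the original reporter's code):

    import multiprocessing as mp

    def worker(q):
        # Each child puts many items; the queue's feeder thread must flush
        # them to the underlying pipe before the child process can exit.
        for i in range(10000):
            q.put(i)

    if __name__ == "__main__":
        q = mp.Queue()
        procs = [mp.Process(target=worker, args=(q,)) for _ in range(4)]
        for p in procs:
            p.start()

        # Deadlock-prone order: joining first blocks the parent while the
        # children block trying to flush into the already-full pipe.
        #   for p in procs: p.join()
        #   while not q.empty(): q.get()

        # Safe order: drain the queue first, then join.
        results = [q.get() for _ in range(4 * 10000)]
        for p in procs:
            p.join()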

Generally, instead of building your own synchronization scheme like this, I would recommend using the higher-level abstractions provided by multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor.
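For comparison, a sketch of the same fan-out using concurrent.futures (the task function is a placeholder):

    from concurrent.futures import ProcessPoolExecutor

    def task(i):
        return i * i  # placeholder work

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=4) as pool:
            # The executor manages result transport and worker shutdown,
            # so there is no queue to drain and no join ordering to get wrong.
            results = list(pool.map(task, range(10000)))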

By the way, this exact issue is called out in the documentation:

"""
As mentioned above, if a child process has put items on a queue (and it has not used JoinableQueue.cancel_join_thread), then that process will not terminate until all buffered items have been flushed to the pipe.

This means that if you try joining that process you may get a deadlock unless you are sure that all items which have been put on the queue have been consumed. Similarly, if the child process is non-daemonic then the parent process may hang on exit when it tries to join all its non-daemonic children.
"""

(from https://docs.python.org/3/library/multiprocessing.html#pipes-and-queues)
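The cancel_join_thread() escape hatch mentioned in that passage trades the deadlock for possible data loss; a minimal sketch of what the docs describe:

    import multiprocessing as mp

    def worker(q):
        for i in range(10000):
            q.put(i)
        # Let this process exit without flushing buffered items to the pipe;
        # anything still in the feeder thread's buffer may be lost.
        q.cancel_join_thread()

    if __name__ == "__main__":
        q = mp.Queue()
        p = mp.Process(target=worker, args=(q,))
        p.start()
        p.join()  # no longer deadlocks, but the queue may be incomplete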

----------

_______________________________________
Python tracker <report at bugs.python.org>
<https://bugs.python.org/issue34140>
_______________________________________
