[issue6721] Locks in python standard library should be sanitized on fork

Charles-François Natali report at bugs.python.org
Sun May 15 21:39:56 CEST 2011


Charles-François Natali <neologix at free.fr> added the comment:

> Is it possible the following issue is related to this one?

It's hard to tell, the original report is rather vague.
But the comment about the usage of the maxtasksperchild argument reminds me of issue #10332 "Multiprocessing maxtasksperchild results in hang": basically, there's a race window in the Pool shutdown code where worker threads that have completed their work can exit without being replaced.
So I don't see an obvious connection with the current issue, but you never know (that's the problem with those nasty race conditions ;-).
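For readers landing on this thread without context, the kind of hang this issue is about can be reproduced with a small sketch like the one below (illustrative only, not part of the patch): a lock held by another thread at fork() time is inherited "locked" by the child, where the owning thread no longer exists, so the child blocks forever.

    import os
    import threading
    import time

    lock = threading.Lock()

    def hold_lock():
        with lock:
            time.sleep(10)  # hold the lock while the main thread forks

    threading.Thread(target=hold_lock).start()
    time.sleep(0.5)  # give the thread time to acquire the lock

    pid = os.fork()
    if pid == 0:
        # Only the forking thread survives in the child; the lock is still
        # marked as held by a thread that no longer exists, so this blocks.
        lock.acquire()
        os._exit(0)
    else:
        os.waitpid(pid, 0)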

Concerning this issue, here's an updated patch.
I removed the calls to pthread_mutex_destroy/pthread_cond_destroy/sem_destroy from the reinit functions: the reason is that I experienced a deadlock in test_concurrent_futures with the emulated semaphore code on Linux/NPTL, inside pthread_cond_destroy. The new version strictly mimics what's done in glibc's malloc, and just calls pthread_mutex_init and friends. It's safe, and shouldn't leak resources (and even if it does, it's way better than a deadlock).
I also placed the note on the interaction between locks and fork() at the top of Python/thread_pthread.h.
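To illustrate the approach described above (a minimal sketch, not the actual reinit_locks.diff code; the type and function names here are made up, not those in Python/thread_pthread.h): after fork(), the child simply re-initializes the lock's primitives in place, without destroying them first, the way glibc's malloc does in its own fork handlers.

    #include <pthread.h>

    typedef struct {
        pthread_mutex_t mut;
        pthread_cond_t  cond;
        int             locked;
    } lock_t;

    static void
    lock_reinit(lock_t *lock)
    {
        /* No pthread_mutex_destroy()/pthread_cond_destroy() here: the state
         * inherited from the parent may be inconsistent, and destroying it
         * can itself deadlock. Re-initializing is always safe and at worst
         * leaks a small amount of resources. */
        pthread_mutex_init(&lock->mut, NULL);
        pthread_cond_init(&lock->cond, NULL);
        lock->locked = 0;
    }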

----------
Added file: http://bugs.python.org/file22005/reinit_locks.diff

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue6721>
_______________________________________
