[issue6721] Locks in python standard library should be sanitized on fork

Charles-François Natali report at bugs.python.org
Thu Jun 30 14:55:57 CEST 2011


Charles-François Natali <neologix at free.fr> added the comment:

> The way I see it is that Charles-François' patch trades a possibility of a deadlock for a possibility of a child process running with inconsistent states due to forcibly reinitialized locks.
>

Yeah, that's why I let this go stale: it's really an unsolvable problem
in the general case. Don't mix fork() and threads, that's all there is to it.
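
To make the failure mode concrete, here's a minimal sketch (POSIX-only,
Python 3, purely for illustration): a worker thread holds a lock at the
moment of fork(), the child inherits the lock in its locked state with no
thread left to release it, and any acquire in the child blocks forever.
The timeout is only there so the demo terminates instead of deadlocking.

    import os
    import threading
    import time

    lock = threading.Lock()

    def worker():
        # Hold the lock while the main thread forks.
        with lock:
            time.sleep(2)

    threading.Thread(target=worker).start()
    time.sleep(0.5)                  # make sure the worker owns the lock

    pid = os.fork()
    if pid == 0:
        # Child: only this thread survives the fork, but the lock was
        # copied in its "held" state, so nobody will ever release it.
        print("child got lock:", lock.acquire(timeout=5))   # prints False
        os._exit(0)
    else:
        os.waitpid(pid, 0)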

> I don't like increasing complexity with fine-grained locking either. While the general case is unsolvable, what Giovanni proposed at least solves the specific case where only the basic I/O code is involved after a fork. In hindsight, the only real-life use case I can find that it would solve is doing an exec() right after a fork().
>

Antoine seems to think that you can't release the I/O locks around I/O
syscalls (when the GIL is released). But I'm sure that if you come up
with a working patch it'll get considered for inclusion ;-)
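
For reference, the fork()-then-exec() case quoted above boils down to the
sketch below (POSIX only; the "ls -l" command is just an arbitrary example).
Because the child does nothing but exec() or _exit(), it never tries to
acquire any lock inherited in a locked state, so the copied lock states
don't matter:

    import os

    pid = os.fork()
    if pid == 0:
        # Child: touch as little inherited state as possible; just
        # replace the process image, or bail out without running
        # Python-level cleanup if exec() fails.
        try:
            os.execvp("ls", ["ls", "-l"])
        except OSError:
            os._exit(127)
    else:
        os.waitpid(pid, 0)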

> Since I think we agree you can't just disable fork() when multiple threads are running, how about at least issuing a warning in that case? That would be a two-line change in threading.py.

You mean a runtime warning? That would be ugly and clumsy.
A warning is probably a good idea, but it should be added to the
os.fork() and threading documentation instead.
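
For what it's worth, the kind of check being discussed would look roughly
like this. fork_with_warning() is a hypothetical wrapper used purely for
illustration, not anything that exists in threading.py or os:

    import os
    import threading
    import warnings

    def fork_with_warning():
        # Hypothetical wrapper, for illustration only: warn when fork()
        # is called while more than one thread is running, since locks
        # held by those other threads stay locked forever in the child.
        n = threading.active_count()
        if n > 1:
            warnings.warn(
                "fork() called with %d threads running" % n,
                RuntimeWarning,
                stacklevel=2,
            )
        return os.fork()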

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue6721>
_______________________________________

