[issue21998] asyncio: support fork

Yury Selivanov report at bugs.python.org
Tue May 26 20:40:52 CEST 2015


Yury Selivanov added the comment:

> I would therefore, in the child after a fork, close the loop without 
> breaking the selector state (closing without unregister()'ing fds), unset 
> the default loop so get_event_loop() would create a new loop, then raise 
> RuntimeError. 
>
> I can elaborate on the use case I care about, but in a nutshell, doing so
> would allow spawning worker processes able to create their own loop without
> requiring an idle "blank" child process to serve as a base for the workers.
> It adds the benefit, for instance, of letting the parent and children share
> data by leveraging the OS's copy-on-write.

The only way to support fork safely is to fix loop.close() to check
whether it's being called from a forked process and, if so, to close the
loop in a safe way (one that doesn't break the master process).  In that
case we don't even need to raise a RuntimeError.  But we have no way to
guarantee that all resources will be freed correctly (right?)
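Detecting "called from a forked process" could be as simple as recording
the pid that created the loop and comparing it later; a sketch (the
attribute and method names are made up):

    import os

    class _ForkDetectingMixin:
        # Illustrative only: remember which process created the loop.
        def __init__(self):
            self._owner_pid = os.getpid()

        def _is_forked_child(self):
            # True when running in a process that didn't create us.
            return os.getpid() != self._owner_pid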

So the idea is (I guess it's the 5th option):

1. If the forked child doesn't call loop.close() immediately after
forking, we raise RuntimeError on the first loop operation.

2. If the forked child explicitly calls loop.close(), that's fine: we
just close it and no error is raised.  When closing, we only close the
selector (without unregistering or re-registering any FDs) and we clean
up the callback queues without trying to close anything.  A sketch of
both rules follows below.
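Put together, a rough sketch of both rules on top of a selector-based
loop (method and attribute names are guesses, not the actual asyncio
internals):

    import os
    import selectors

    class ForkAwareLoop:
        """Illustrative sketch, not the real asyncio implementation."""

        def __init__(self):
            self._selector = selectors.DefaultSelector()
            self._ready = []                  # queued callbacks
            self._owner_pid = os.getpid()

        def _check_fork(self):
            # Rule 1: any loop operation in a forked child that didn't
            # close the loop first raises RuntimeError.
            if os.getpid() != self._owner_pid:
                raise RuntimeError(
                    "event loop used in a forked child; call "
                    "loop.close() after fork and create a new loop")

        def call_soon(self, callback, *args):
            self._check_fork()
            self._ready.append((callback, args))

        def close(self):
            # Rule 2: close() is always allowed.  In a forked child we
            # must not unregister FDs -- the epoll/kqueue object is
            # shared with the parent, so unregistering would disturb
            # its state -- we just close the selector and drop queued
            # callbacks without running or closing anything.
            self._selector.close()
            self._ready.clear()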

Guido, do you still think that unavoidably raising a RuntimeError in the
child process is the better option?

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue21998>
_______________________________________