[issue6653] Potential memory leak in multiprocessing

Richard Oudkerk report at bugs.python.org
Sun Mar 31 01:12:37 CET 2013


Richard Oudkerk added the comment:

I don't think this is a bug -- processes started with fork() should nearly always be exited with _exit().  And anyway, using sys.exit() does *not* guarantee that all deallocators will be called.  To be sure of cleanup at exit you could use (the undocumented) multiprocessing.util.Finalize().
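For illustration, here is a minimal sketch of registering a cleanup with multiprocessing.util.Finalize (the helper is real but undocumented; the Resource class and the list-appending callback are invented for this example):

```python
# Finalize registers a callback that fires when the owning object is
# garbage-collected (via a weakref), or at interpreter shutdown if an
# exitpriority is given -- unlike __del__, it also runs at exit.
from multiprocessing.util import Finalize

results = []  # stand-in for "cleanup happened"

class Resource:
    def __init__(self, name):
        self.name = name
        # Hypothetical cleanup: append the name when self is collected.
        self._finalizer = Finalize(self, results.append, args=(name,))

r = Resource("shared-block")
del r            # on CPython the weakref callback fires immediately
print(results)   # -> ['shared-block']
```

Passing exitpriority (an int) would additionally guarantee the callback runs during interpreter shutdown, which is the relevant case for cleanup after fork()/_exit()-style workers.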

Note that Python 3.4 on Unix will probably offer the choice of using os.fork()/os._exit() or _posixsubprocess.fork_exec()/sys.exit() for starting/exiting processes on Unix.
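(For reference, this is roughly the shape the choice took when it shipped in Python 3.4: selectable "start methods" exposed through contexts. A minimal sketch, using the fork method, which is Unix-only:)

```python
import multiprocessing as mp

# Python 3.4+ lets the caller pick how workers are started:
#   "fork"       -> os.fork()-based (Unix only)
#   "spawn"      -> fresh interpreter per worker
#   "forkserver" -> fork from a clean server process (Unix only)
def main():
    ctx = mp.get_context("fork")   # assumption: running on Unix
    q = ctx.Queue()
    p = ctx.Process(target=q.put, args=("done",))
    p.start()
    print(q.get())                 # -> done
    p.join()

if __name__ == "__main__":
    main()
```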

Sturla's scheme for doing reference counting of shared memory is also flawed: a count can fall to zero while the shared memory object is still sitting in a pipe/queue (the sender may drop its last reference before the receiver has unpickled the object), causing the memory to be prematurely deallocated.

I think a more reliable scheme would be to use fds created using shm_open(), immediately unlinking the name with shm_unlink().  Then one could use the existing infrastructure for fd passing and let the operating system handle the reference counting.  This would prevent leaked shared memory (unless the process is killed in between shm_open() and shm_unlink()).  I would like to add something like this to multiprocessing.
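A sketch of the unlink-early idea, assuming Linux, where POSIX shared memory objects live under /dev/shm, so plain open()/unlink() stand in for shm_open()/shm_unlink() (the stdlib has no direct shm_open binding here):

```python
import mmap, os, tempfile

# Assumption: /dev/shm is the tmpfs backing POSIX shared memory on
# Linux; fall back to the default temp dir elsewhere for illustration.
shm_dir = "/dev/shm" if os.path.isdir("/dev/shm") else None
fd, path = tempfile.mkstemp(dir=shm_dir)
os.unlink(path)           # the name is gone: nothing left to leak
os.ftruncate(fd, 4096)    # size the now-anonymous shared segment

# Any duplicate of fd -- including one sent to another process over a
# Unix socket with SCM_RIGHTS (the existing fd-passing infrastructure
# in multiprocessing.reduction) -- keeps the memory alive; the kernel
# reclaims it only when the last descriptor/mapping is closed.
m = mmap.mmap(fd, 4096)
m[:5] = b"hello"
print(bytes(m[:5]))       # -> b'hello'
m.close()
os.close(fd)              # last reference dropped; the OS frees the memory
```

(Python 3.8 eventually added multiprocessing.shared_memory, which does use shm_open() under the hood, paired with a resource tracker rather than immediate unlinking.)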

----------

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue6653>
_______________________________________

