[New-bugs-announce] [issue32937] Multiprocessing worker functions not terminating with a large number of processes and a manager

Eric Gorr report at bugs.python.org
Sat Feb 24 09:17:03 EST 2018


New submission from Eric Gorr <ericgorr at gmail.com>:

I have the following code:

    import multiprocessing
    from multiprocessing import Pool, Manager
    import time
    import random

    def worker_function( index, messages ):

        # Sleep for a random 3-15 second interval, then report back
        # through the manager's queue.
        print( "%d: Entered" % index )
        time.sleep( random.randint( 3, 15 ) )
        messages.put( "From: %d" % index )
        print( "%d: Exited" % index )

    # The Queue is created through a Manager so it can be shared with
    # the pool workers.
    manager = Manager()
    messages = manager.Queue()

    with Pool( processes = None ) as pool:

        for x in range( 30 ):
            pool.apply_async( worker_function, [ x, messages ] )

        pool.close()
        pool.join()

The program does not terminate: all of the "Entered" messages are printed, but not all of the "Exited" messages.

If I remove all of the code related to the Manager and the Queue, the script terminates properly with all messages printed.
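
Roughly, that Manager-free variant looks like this:

    import time
    import random
    from multiprocessing import Pool

    def worker_function( index ):

        # Same worker as above, minus the queue: just the prints and
        # the random sleep.
        print( "%d: Entered" % index )
        time.sleep( random.randint( 3, 15 ) )
        print( "%d: Exited" % index )

    with Pool( processes = None ) as pool:

        for x in range( 30 ):
            pool.apply_async( worker_function, [ x ] )

        pool.close()
        pool.join()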

If I assign processes explicitly instead, the script keeps working as I increase the value until it reaches 20 or 21: with a value above 20 it fails every time, with a value of exactly 20 it fails some of the time, and with a value below 20 it always succeeds.
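
Concretely, the only line changed between runs is the pool construction, e.g.:

    # processes < 20 always succeeds; == 20 fails intermittently;
    # > 20 always fails
    with Pool( processes = 19 ) as pool: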

multiprocessing.cpu_count() returns 24 on my Mac Pro.

----------
components: Library (Lib), macOS
messages: 312718
nosy: Eric Gorr, ned.deily, ronaldoussoren
priority: normal
severity: normal
status: open
title: Multiprocessing worker functions not terminating with a large number of processes and a manager
versions: Python 3.6

_______________________________________
Python tracker <report at bugs.python.org>
<https://bugs.python.org/issue32937>
_______________________________________

