multiprocessing deadlock

Ishwor Gurung ishwor.gurung at gmail.com
Sat Oct 24 03:22:07 EDT 2009


Hi Brian,
I think there could be a slight problem (if I've understood your code).

> import multiprocessing
> import queue
>
> def _process_worker(q):
>    while True:
Do you really want this worker to loop indefinitely here?

>        try:
>            something = q.get(block=True, timeout=0.1)
>        except queue.Empty:
>            return
So whenever the queue stays empty for 0.1s, the worker just returns?
That is a fragile way to detect end-of-work; see the sentinel sketch below.

>        else:
>            print('Grabbed item from queue:', something)
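
If the timeout is only there to detect "no more work", a sentinel is
usually more robust than guessing a duration. A rough sketch in py3k
style (this assumes the producer puts one None per worker when it is
done, and that None is never a real work item):

def _process_worker(q):
    while True:
        something = q.get()        # block until an item arrives
        if something is None:      # sentinel: producer is finished
            return
        print('Grabbed item from queue:', something)

The producer side would then call q.put(None) once per worker, after
all the real items.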
>
>
> def _make_some_processes(q):
>    processes = []
>    for _ in range(10):
This loop starts 10 worker processes, all pulling from the same queue. Do you really want 10 of them?

>        p = multiprocessing.Process(target=_process_worker, args=(q,))
OK.
>        p.start()
>        processes.append(p)
Here you append each process to the processes list. Why do you need to keep them around?

>    return processes
OK.

> def _do(i):
>    print('Run:', i)
>    q = multiprocessing.Queue()
>    for j in range(30):
>        q.put(i*30+j)
That puts 30 items on the queue for each run (values i*30 through i*30 + 29). Fine.

>    processes = _make_some_processes(q)
>
>    while not q.empty():
>        pass
Why are you busy-waiting on q.empty() here? That loop spins a full
core until the workers drain the queue.

> #    The deadlock only occurs on Mac OS X and only when these lines
> #    are commented out:
> #    for p in processes:
> #        p.join()
Why join down here (and only in commented-out lines) rather than in
the loop itself? I tested this on Linux and Win32 and it works fine
for me either way; I can't speak for OS X.
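
For what it's worth, the usual pattern is to join every worker you
started instead of spinning on q.empty(). A sketch against your py3k
code (I have not tried this on OS X):

def _do(i):
    print('Run:', i)
    q = multiprocessing.Queue()
    for j in range(30):
        q.put(i * 30 + j)
    processes = _make_some_processes(q)
    # Wait on the workers themselves rather than polling the queue:
    # q.empty() returning True only means the items have been taken,
    # not that every worker has finished with them.
    for p in processes:
        p.join()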

> for i in range(100):
>    _do(i)
_do(i) is run 100 times. Why? Is that what you want?

> Output (on Mac OS X using the svn version of py3k):
> % ~/bin/python3.2 moprocessmoproblems.py
> Run: 0
> Grabbed item from queue: 0
> Grabbed item from queue: 1
> Grabbed item from queue: 2
> ...
> Grabbed item from queue: 29
> Run: 1

And this is strange. Now, I happened to be hacking away at some
multiprocessing code myself. I saw your thread, so I whipped up
something you can have a look at.

> At this point the script produces no additional output. If I uncomment the
> lines above then the script produces the expected output. I don't see any
> docs that would explain this problem and I don't know what the rule would be
> e.g. you just join every process that uses a queue before  the queue is
> garbage collected.

You join a process when you want the calling process to wait for it
to finish, optionally bounded by the `timeout' parameter. Let me be
more specific.
The doc for Process says:

"join([timeout])
Block the calling thread until the process whose join() method is
called terminates or until the optional timeout occurs."

Right. Now, join() is called from the parent, but it waits on the
child: if the child process never terminates, the parent's join()
will _also_ block indefinitely, __unless__ you pass a timeout value.
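
A minimal illustration of that behaviour (py3k style; the 5s/1s
values are only for the example):

import multiprocessing
import time

def _sleepy():
    time.sleep(5)

if __name__ == '__main__':
    p = multiprocessing.Process(target=_sleepy)
    p.start()
    p.join(1)    # returns after ~1s; the child is still running
    print('alive after timed join?', p.is_alive())   # True
    p.join()     # blocks until the child actually exits
    print('alive after full join?', p.is_alive())    # False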

> Any ideas why this is happening?

Have a look below. If I've understood your code, this should reflect
your situation, but with a different take on the implementation side
(run on Python 2.6):

from multiprocessing import Process, Queue
from Queue import Empty
import sys

def _process_worker(q):
    # Each worker removes exactly one item, without blocking.
    try:
        something = q.get(block=False)
    except Empty:
        print sys.exc_info()
    else:
        print 'Removed %d from the queue' % something

def _make_some_processes():
    q = Queue()

    # Put 9 items on the queue (3 batches of 3).
    for i in range(3):
        for j in range(3):
            q.put(i * 30 + j)

    # One worker per item; join() each before spawning the next,
    # so the queue drains one process at a time.
    while not q.empty():
        p = Process(target=_process_worker, args=(q,))
        p.start()
        p.join()

if __name__ == "__main__":
    _make_some_processes()

'''
    Removed 0 from the queue
    Removed 1 from the queue
    Removed 2 from the queue
    Removed 30 from the queue
    Removed 31 from the queue
    Removed 32 from the queue
    Removed 60 from the queue
    Removed 61 from the queue
    Removed 62 from the queue
'''
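
(One thing to note about the above: because join() is called inside
the while loop, only one worker is ever alive at a time, so the queue
drains strictly sequentially. That gives up parallelism, but the
shutdown order cannot race.)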
-- 
Regards,
Ishwor Gurung


