Multiprocessing.Pipe in a daemon

Falcolas garrickp at gmail.com
Wed Dec 21 13:48:49 EST 2011


So, I'm running into a somewhat crazy bug.

I am running several workers using multiprocessing to handle Gearman
requests. I'm using pipes to funnel log messages from each of the
workers back to the controlling process, which iterates over the
receiving ends of the pipes, checking for messages with poll() and
logging them using the parent's log handler.
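
For reference, here's a stripped-down, self-contained sketch of the
pattern (Gearman omitted, names simplified), which behaves itself as
long as I don't daemonize:

    import logging
    import multiprocessing
    import time

    def worker_main(w_num, snd):
        # Each worker ships (level, message) tuples up its write end.
        for i in range(3):
            snd.send((logging.INFO, "worker {0} msg {1}".format(w_num, i)))
            time.sleep(0.1)
        snd.close()

    if __name__ == "__main__":
        logging.basicConfig(level=logging.DEBUG)
        logger = logging.getLogger("controller")

        workers, log_pipes = [], []
        for w_num in range(5):
            recv, snd = multiprocessing.Pipe(False)  # one-way: worker -> parent
            proc = multiprocessing.Process(target=worker_main,
                                           args=(w_num, snd))
            proc.start()
            workers.append(proc)
            log_pipes.append(recv)

        for proc in workers:
            proc.join()

        # Drain whatever the workers sent, logging it with the
        # parent's handler.
        for recv in log_pipes:
            while recv.poll():
                log_level, log_msg = recv.recv()
                logger.log(log_level, log_msg)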

This works really well when I'm not daemonizing the entire thing.
However, when I daemonize the process (which happens well prior to any
setup of the pipes and multiprocessing.Process objects), a pipe which
has nothing in it returns True from poll(), and the subsequent
pipe.recv() call then blocks. The Gearman workers are still
operational and responsive, and starting only one worker makes the
problem go away.
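
For what it's worth, the daemonization itself is nothing exotic; think
standard Unix double-fork. My actual daemonize code isn't shown here,
so treat this as a rough sketch of the kind of thing I mean:

    import os

    def daemonize():
        # First fork: parent exits, child keeps running.
        if os.fork() > 0:
            os._exit(0)
        os.setsid()  # new session, detach from the controlling terminal
        # Second fork: make sure we can never reacquire a terminal.
        if os.fork() > 0:
            os._exit(0)
        os.chdir("/")
        os.umask(0)
        # Point stdin/stdout/stderr at /dev/null.
        devnull = os.open(os.devnull, os.O_RDWR)
        for fd in (0, 1, 2):
            os.dup2(devnull, fd)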

Has anybody seen anything like this?

    # Trim

    # Create all of the end points
    endpoints = []
    log_pipes = []
    for w_num in range(5):

        (recv, snd) = multiprocessing.Pipe(False)
        # Start the worker
        logger.debug("Creating the worker {0}".format(w_num))
        worker = Worker(w_num, job_name, gm_servers, snd)

        # Add the worker to the list of endpoints so it can be started
        endpoints.append(worker)
        log_pipes.append(recv)

    # Trim starting endpoints

    try:
        while True:
            time.sleep(1)

            pipe_logger(logger, log_pipes)
    except (KeyboardInterrupt, SignalQuit):
        pass

    # Trim cleanup

    def pipe_logger(logger_obj, pipes):
        done = False
        while not done:
            done = True
            for p in pipes:
                if p.poll():  # <-- Returning True after a previous
                              #     pipe actually had data
                    try:
                        log_level, log_msg = p.recv()  # <-- Hanging here
                    except EOFError:
                        continue
                    logger_obj.log(log_level, log_msg)  # parent's handler
                    done = False
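
One thing I notice while writing this up: the parent never closes its
copy of snd after starting each worker (and since all the pipes are
created before any worker is started, each worker also inherits the
ends of the other workers' pipes). I know the usual advice with
multiprocessing.Pipe is to close the ends you aren't using, along the
lines of:

    worker.start()
    snd.close()  # parent drops its copy of the write end, so recv()
                 # can raise EOFError once the worker exits

I haven't verified whether that's related to what I'm seeing, though.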


