How to properly implement worker processes

Dennis Jacobfeuerborn djacobfeuerborn at gmail.com
Wed Aug 22 13:29:49 EDT 2012


Hi,
I'm trying to implement a system for periodically checking URLs and I've run into problems with some of the implementation details. The URLs are supposed to be checked continuously until the config for an URL is explicitly removed.

The plan is to spawn a worker process for each URL that sends the status of the last check to its parent, which keeps track of the state of all URLs. When a URL is no longer supposed to be checked, the parent process should shut down or kill the respective worker process.

What I've been going for so far is that the parent process creates a global queue that is passed to all children upon creation, which they use to send status messages to the parent. Then, for each child process, a dedicated queue is created that the parent uses to issue commands to that child.
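Roughly, the setup on the parent side looks like the following simplified sketch (the worker loop itself is shown further down):

    import multiprocessing

    def worker(url, status_queue, command_queue):
        ...  # the check loop is sketched further down

    def start_workers(urls):
        # One shared queue that every child uses to report status to the parent.
        status_queue = multiprocessing.Queue()
        workers = {}
        for url in urls:
            # A dedicated queue per child for commands from the parent.
            command_queue = multiprocessing.Queue()
            p = multiprocessing.Process(target=worker,
                                        args=(url, status_queue, command_queue))
            p.start()
            workers[url] = (p, command_queue)
        return status_queue, workers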

The issue is that the child processes spend some time in sleep(), so when a command from the parent arrives they cannot respond immediately, which is rather undesirable. What I would rather do is have the parent simply kill the child instead, which is instantaneous and more reliable.
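For illustration, the worker loop currently looks more or less like this (check_url() and the 30-second interval are simplified placeholders), which is why a "stop" command from the parent can sit unread until sleep() returns:

    import queue
    import time

    def check_url(url):
        # Placeholder for the real check (e.g. an HTTP request returning a status code).
        return "ok"

    def worker(url, status_queue, command_queue):
        while True:
            status_queue.put((url, check_url(url)))
            try:
                if command_queue.get_nowait() == "stop":
                    break
            except queue.Empty:
                pass
            # While the child sleeps here it cannot react to a "stop" command,
            # so a clean shutdown can take up to a full check interval.
            time.sleep(30)

    # What I'd prefer on the parent side when a URL is removed:
    #   p, command_queue = workers.pop(url)
    #   p.terminate()   # immediate, but apparently risky while the child uses the queue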

My problem is that, according to the multiprocessing docs, if I kill a child while it is using the queue to send a status to the parent, the queue can become corrupted, and since that queue is shared, the whole thing pretty much stops working.

How can I get around this problem and receive status updates from all children efficiently, without a shared queue, and with the ability to simply kill a child process when it's no longer needed?

Regards,
  Dennis
