[issue38988] Killing asyncio subprocesses on timeout?

Caleb Hattingh report at bugs.python.org
Sat Feb 1 23:54:57 EST 2020


Caleb Hattingh <caleb.hattingh at gmail.com> added the comment:

@dontbugme This is a very old problem with threads and subprocesses. In the general case (cross-platform, etc.) it is difficult to kill threads and subprocesses from the outside. The traditional solution is to send the thread or subprocess a message telling it to finish up, and to write the code running in the thread or subprocess so that it notices such a message and shuts itself down.

With threads, the usual approach is to pass `None` on a queue and have the thread pull data off that queue: when it receives `None`, it knows it is time to shut down, and it terminates itself. The same model also works with the multiprocessing module, because the Queue instance provided there works across the inter-process boundary.
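
A minimal sketch of that sentinel pattern with a thread (the worker function and queue contents here are invented for illustration):

```python
import queue
import threading

def worker(q: queue.Queue) -> None:
    while True:
        item = q.get()
        if item is None:          # sentinel: time to shut down
            break
        print("processing", item)

q = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()
q.put("job 1")
q.put(None)                       # tell the worker to finish up
t.join()                          # the worker terminates itself
```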

Unfortunately, we don't have that feature in the asyncio subprocess machinery yet. For subprocesses, there are three options available:

1) Send a "shutdown" sentinel via stdin (asyncio.subprocess.Process.communicate); see the sketch after this list
2) Send a process signal (via asyncio.subprocess.Process.send_signal)
3) Pass messages between the main process and the child process via socket connections
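
To make (1) concrete, here is a rough sketch, assuming a toy child script that reads lines from stdin and treats the word "shutdown" as its sentinel (the child body and the sentinel word are made up for illustration):

```python
import asyncio
import sys

CHILD = '''
import sys
for line in sys.stdin:
    if line.strip() == "shutdown":   # the agreed-upon sentinel
        break
    print("got:", line.strip())
'''

async def main():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", CHILD,
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    # communicate() writes the sentinel, closes stdin, and waits for exit
    out, _ = await proc.communicate(b"hello\nshutdown\n")
    print(out.decode())
    print("exit code:", proc.returncode)

asyncio.run(main())
```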

My experience has been that (3) is the most practical, especially in a cross-platform sense. The added benefit of (3) is that it also works, unchanged, if the "worker" process is running on a different machine. There are probably things we can do to make (3) easier. Not sure.
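
For what it's worth, here is one rough way (3) can look with asyncio streams: the parent listens on a local TCP port as a control channel, and the child dials back and exits when told to. The port handling, message format, and child body are all invented for illustration:

```python
import asyncio
import sys

# Toy child: connects back to the parent, exits when told to.
CHILD = '''
import socket, sys
port = int(sys.argv[1])
with socket.create_connection(("127.0.0.1", port)) as s:
    for line in s.makefile("r"):
        if line.strip() == "shutdown":
            break                  # clean, self-initiated exit
'''

async def main():
    loop = asyncio.get_running_loop()
    conn_made = loop.create_future()

    async def on_connect(reader, writer):
        conn_made.set_result(writer)

    # Control channel: the parent listens, the child connects back.
    server = await asyncio.start_server(on_connect, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", CHILD, str(port),
    )
    writer = await conn_made       # the child has dialed in
    writer.write(b"shutdown\n")    # ask it to finish up
    await writer.drain()
    writer.close()
    await proc.wait()              # the child exits on its own terms
    server.close()
    await server.wait_closed()

asyncio.run(main())
```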

I don't know if my comment helps, but I feel your pain. You are correct that `wait_for` is not an alternative to `timeout`, because no actual cancellation happens: cancelling the waiting task does not kill the underlying process.
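
For reference, the usual workaround (not a real fix) is to pair `wait_for` with an explicit kill when the timeout fires; the helper name and timeout value here are arbitrary:

```python
import asyncio
import sys

async def run_with_timeout(timeout):
    # A child that ignores us and would happily run for a minute.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "import time; time.sleep(60)",
    )
    try:
        return await asyncio.wait_for(proc.wait(), timeout)
    except asyncio.TimeoutError:
        proc.kill()                # the part wait_for does not do for us
        return await proc.wait()

print(asyncio.run(run_with_timeout(1.0)))
```

The child is only torn down because we call `kill()` ourselves; the timeout alone merely stops waiting.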

----------
nosy: +cjrh

_______________________________________
Python tracker <report at bugs.python.org>
<https://bugs.python.org/issue38988>
_______________________________________

