[Python-ideas] Tulip / PEP 3156 event loop implementation question: CPU vs. I/O starvation

Yuval Greenfield ubershmekel at gmail.com
Sat Jan 12 12:03:26 CET 2013


On Sat, Jan 12, 2013 at 1:41 AM, Guido van Rossum <guido at python.org> wrote:

> [...]

> def hogger():
>     tulip.get_event_loop().call_soon(hogger)
>
> Because call_soon() appends the handler to the _ready queue, the while
> loop will never finish.
>
> [...]
> However this implies that we go back to the I/O polling code more
> frequently. While the I/O polling code sets the timeout to zero when
> there's anything in the _ready queue, so it won't block, it still
> isn't free; it's an expensive system call that we'd like to put off
> until we have nothing better to do.
>
> I can imagine various patterns where handlers append other handlers to
> the _ready queue for immediate execution, and I'd make such patterns
> efficient (i.e. the user shouldn't have to worry about the cost of the
> I/O poll compared to the amount of work appended to the _ready queue).
>
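
To restate the scenario concretely, here's a toy sketch of that structure
(call_soon()/run_once() are stand-in names I made up from your description,
with selectors.DefaultSelector standing in for the pollster; this is not
the actual Tulip code):

import collections
import selectors

_ready = collections.deque()
selector = selectors.DefaultSelector()

def call_soon(callback, *args):
    _ready.append((callback, args))

def run_once():
    # Run only the callbacks that were queued when this pass started, so a
    # callback that keeps rescheduling itself (like hogger) can't pin us
    # here forever.
    for _ in range(len(_ready)):
        callback, args = _ready.popleft()
        callback(*args)
    # Poll for I/O, but don't block if more work is already queued.  Even
    # with timeout 0 this select() is a system call on every pass, which is
    # the cost you'd rather not pay too often.
    timeout = 0 if _ready else None
    for key, mask in selector.select(timeout):
        call_soon(key.data, mask)  # assumes register() put the callback in key.data

def hogger():
    call_soon(hogger)

call_soon(hogger)
for _ in range(3):
    run_once()  # hogger keeps re-queueing itself, but the I/O poll still runs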
>
I read your statements as:
* I don't want the user to cause I/O starvation
* I want the user to cause I/O starvation (cheaply, by filling the _ready
queue)

Which means you have two options:
* Make an opinionated decision that won't be perfect for everyone (not as
bad as it sounds)
* Allow configurability

IMO core event loops need this configurability, but not on a daily basis.
E.g. Windows XP's scheduler gave priority to the foreground process (i.e. UI
events) and Windows Server 2003 gave priority to background processes.

e.g. (warning: unoptimized pseudocode follows)

while True:
    # run up to io_weight callbacks produced by the I/O poll
    for _ in range(io_weight):
        pop_io()
    # then run up to event_weight call_soon() callbacks
    for _ in range(event_weight):
        pop_ready()

# note one of the weights can be zero
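
Fleshed out a little, a runnable sketch of the same idea (WeightedLoop,
add_io() and the deque-backed queues are made-up names for illustration,
not a proposed Tulip API):

import collections

class WeightedLoop:
    def __init__(self, io_weight=1, event_weight=10):
        self.io_weight = io_weight        # max I/O callbacks per pass
        self.event_weight = event_weight  # max call_soon() callbacks per pass
        self._io = collections.deque()    # callbacks produced by the I/O poll
        self._ready = collections.deque() # callbacks scheduled via call_soon()

    def call_soon(self, callback, *args):
        self._ready.append((callback, args))

    def add_io(self, callback, *args):
        # stand-in for "the poller reported this fd as ready"
        self._io.append((callback, args))

    def run_once(self):
        # Each pass runs at most io_weight I/O callbacks and at most
        # event_weight ready callbacks, so neither side can starve the
        # other.  A weight of zero disables that side entirely.
        for _ in range(self.io_weight):
            if not self._io:
                break
            callback, args = self._io.popleft()
            callback(*args)
        for _ in range(self.event_weight):
            if not self._ready:
                break
            callback, args = self._ready.popleft()
            callback(*args)

loop = WeightedLoop(io_weight=1, event_weight=4)
loop.call_soon(print, "ready callback")
loop.add_io(print, "I/O callback")
loop.run_once()

Tuning the two weights (or zeroing one of them) is the configurability knob;
the opinionated default would just be whatever ratio Tulip picks.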

