From guido at python.org Sun Feb 3 20:02:11 2019 From: guido at python.org (Guido van Rossum) Date: Sun, 3 Feb 2019 17:02:11 -0800 Subject: [Async-sig] AI_V4MAPPED on Windows In-Reply-To: <45f87bfbfa9270fc2ba533ef0dbd818a@thenybble.de> References: <45f87bfbfa9270fc2ba533ef0dbd818a@thenybble.de> Message-ID: Hi Jan, Thanks for your feedback. However, this list is not for reporting asyncio bugs. Please use the Python bug tracker (bugs.python.org) to report issues in asyncio. --Guido On Sun, Feb 3, 2019 at 5:02 AM Jan Seeger wrote: > Greetings! > > I'm working with aiocoap, which uses the AI_V4MAPPED flag to use > IPv4-mapped > addresses for dual-stack support. When trying to run on Windows, > creating a > connection fails, because the socket option IPV6_V6ONLY is set to true > by default on Windows, whereas the value is configurable on Linux. > I've attached a file that should reproduce the error. > > A possible fix would be calling socket.setsockopt(socket.IPPROTO_IPV6, > socket.IPV6_V6ONLY, False) when V4-mapped addresses have been requested > (this bug can also appear on Linux when /proc/sys/net/ipv6/bindv6only > contains 1). > > If you require any more information, feel free to contact me! > > Best Regards, > > Jan Seeger > > PS: I am not a subscriber of this list, so please leave my address on > any > replies you send._______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From yusuke at tsutsumi.io Fri Feb 8 17:09:53 2019 From: yusuke at tsutsumi.io (Yusuke Tsutsumi) Date: Fri, 8 Feb 2019 14:09:53 -0800 Subject: [Async-sig] set_blocking_signal_threshold equivalent for asyncio?
Message-ID: Hi, In tornado, there's a really nice feature called set_blocking_signal_threshold, which sets a signal that fires if a coroutine has been running for too long without returning control back to the main loop: https://www.tornadoweb.org/en/stable/ioloop.html?highlight=signal#tornado.ioloop.IOLoop.set_blocking_signal_threshold In tornado this will then log the traceback of the coroutine in question. This has been a very valuable tool when a developer accidentally introduces code that blocks the event loop for way too long of a time. Is there an equivalent in asyncio? I have a sketch in my head of how to implement that, but wanted to see if it existed somewhere first. Thanks! -Yusuke -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at bendarnell.com Fri Feb 8 20:37:22 2019 From: ben at bendarnell.com (Ben Darnell) Date: Fri, 8 Feb 2019 20:37:22 -0500 Subject: [Async-sig] set_blocking_signal_threshold equivalent for asyncio? In-Reply-To: References: Message-ID: Asyncio's debug mode does this (and a few more things). Call `asyncio.get_event_loop().set_debug()` to enable it. https://docs.python.org/3/library/asyncio-dev.html#debug-mode -Ben On Fri, Feb 8, 2019 at 5:10 PM Yusuke Tsutsumi wrote: > Hi, > > In tornado, there's a really nice feature called > set_blocking_signal_threshold, which sets a signal that fires if a > coroutine has been running for too long without returning control back to > the main loop: > > > https://www.tornadoweb.org/en/stable/ioloop.html?highlight=signal#tornado.ioloop.IOLoop.set_blocking_signal_threshold > > In tornado this will then log the traceback of the coroutine in question. > This has been a very valuable tool when a developer accidentally introduces > code that blocks the event loop for way too long of a time. > > Is there an equivalent in asyncio? I have a sketch in my head of how to > implement that, but wanted to see if it existed somewhere first. > > Thanks! 
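[A minimal sketch of the debug-mode behaviour Ben describes, assuming Python 3.7+ (`asyncio.run`, `get_running_loop`). The log-capturing handler exists only to make the slow-callback warning easy to inspect; normally asyncio just logs it.]

```python
import asyncio
import logging
import time

# Capture warnings from the 'asyncio' logger so the slow-callback
# report is easy to inspect in this demo.
records = []
capture = logging.Handler()
capture.emit = lambda record: records.append(record.getMessage())
logging.getLogger('asyncio').addHandler(capture)

async def main():
    loop = asyncio.get_running_loop()
    loop.slow_callback_duration = 0.1  # warn when one step blocks > 100 ms
    await asyncio.sleep(0)             # yield once so the next step is timed
    time.sleep(0.3)                    # a synchronous call that blocks the loop

asyncio.run(main(), debug=True)
print(records)  # e.g. ['Executing <Task ...> took 0.300 seconds']
```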
> -Yusuke > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yusuke at tsutsumi.io Wed Feb 13 02:08:34 2019 From: yusuke at tsutsumi.io (Yusuke Tsutsumi) Date: Tue, 12 Feb 2019 23:08:34 -0800 Subject: [Async-sig] set_blocking_signal_threshold equivalent for asyncio? In-Reply-To: References: Message-ID: Great, thank you! I'll try it out, looks promising. On Fri, Feb 8, 2019 at 5:37 PM Ben Darnell wrote: > Asyncio's debug mode does this (and a few more things). Call > `asyncio.get_event_loop().set_debug()` to enable it. > > https://docs.python.org/3/library/asyncio-dev.html#debug-mode > > -Ben > > On Fri, Feb 8, 2019 at 5:10 PM Yusuke Tsutsumi wrote: > >> Hi, >> >> In tornado, there's a really nice feature called >> set_blocking_signal_threshold, which sets a signal that fires if a >> coroutine has been running for too long without returning control back to >> the main loop: >> >> >> https://www.tornadoweb.org/en/stable/ioloop.html?highlight=signal#tornado.ioloop.IOLoop.set_blocking_signal_threshold >> >> In tornado this will then log the traceback of the coroutine in question. >> This has been a very valuable tool when a developer accidentally introduces >> code that blocks the event loop for way too long of a time. >> >> Is there an equivalent in asyncio? I have a sketch in my head of how to >> implement that, but wanted to see if it existed somewhere first. >> >> Thanks! >> -Yusuke >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yusuke at tsutsumi.io Wed Feb 13 15:48:53 2019 From: yusuke at tsutsumi.io (Yusuke Tsutsumi) Date: Wed, 13 Feb 2019 12:48:53 -0800 Subject: [Async-sig] set_blocking_signal_threshold equivalent for asyncio? In-Reply-To: References: Message-ID: hi, followup question: Is there an impact to performance when using debug mode? On Tue, Feb 12, 2019 at 11:08 PM Yusuke Tsutsumi wrote: > Great, thank you! I'll try it out, looks promising. > > On Fri, Feb 8, 2019 at 5:37 PM Ben Darnell wrote: > >> Asyncio's debug mode does this (and a few more things). Call >> `asyncio.get_event_loop().set_debug()` to enable it. >> >> https://docs.python.org/3/library/asyncio-dev.html#debug-mode >> >> -Ben >> >> On Fri, Feb 8, 2019 at 5:10 PM Yusuke Tsutsumi >> wrote: >> >>> Hi, >>> >>> In tornado, there's a really nice feature called >>> set_blocking_signal_threshold, which sets a signal that fires if a >>> coroutine has been running for too long without returning control back to >>> the main loop: >>> >>> >>> https://www.tornadoweb.org/en/stable/ioloop.html?highlight=signal#tornado.ioloop.IOLoop.set_blocking_signal_threshold >>> >>> In tornado this will then log the traceback of the coroutine in >>> question. This has been a very valuable tool when a developer accidentally >>> introduces code that blocks the event loop for way too long of a time. >>> >>> Is there an equivalent in asyncio? I have a sketch in my head of how to >>> implement that, but wanted to see if it existed somewhere first. >>> >>> Thanks! >>> -Yusuke >>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.jerdonek at gmail.com Tue Feb 19 14:53:44 2019 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 19 Feb 2019 11:53:44 -0800 Subject: [Async-sig] killing tasks that won't cancel Message-ID: I have an asyncio question. In Python 3.7, is there a way to reliably end a task after having already tried calling cancel() on it and waiting for it to end? In Python 3.6, I did this with task.set_exception(), but in 3.7 that method was removed. --Chris From andrew.svetlov at gmail.com Tue Feb 19 15:10:07 2019 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 19 Feb 2019 22:10:07 +0200 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: Message-ID: If the task's function swallows CancelledError exception -- it is a programming error. The same as generator object technically can swallow GeneratorExit (but such code is most likely buggy). On Tue, Feb 19, 2019 at 9:55 PM Chris Jerdonek wrote: > > I have an asyncio question. > > In Python 3.7, is there a way to reliably end a task after having > already tried calling cancel() on it and waiting for it to end? > > In Python 3.6, I did this with task.set_exception(), but in 3.7 that > method was removed. > > --Chris > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ -- Thanks, Andrew Svetlov From chris.jerdonek at gmail.com Tue Feb 19 15:23:10 2019 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 19 Feb 2019 12:23:10 -0800 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: Message-ID: On Tue, Feb 19, 2019 at 12:10 PM Andrew Svetlov wrote: > If the task's function swallows CancelledError exception -- it is a > programming error. I was asking if there is a way to end such a task. Is there? 
The only approach I can think of without having something like set_exception() is to keep calling cancel() in a loop and waiting (but even that can fail under certain code), but I'm not sure off-hand if the API supports calling cancel() more than once. Also, I can see this happening even when there is no bug. Maybe the coroutine was properly written to cancel gracefully, but the caller doesn't want to continue waiting past a certain time. --Chris > The same as generator object technically can swallow GeneratorExit > (but such code is most likely buggy). > > On Tue, Feb 19, 2019 at 9:55 PM Chris Jerdonek wrote: > > > > I have an asyncio question. > > > > In Python 3.7, is there a way to reliably end a task after having > > already tried calling cancel() on it and waiting for it to end? > > > > In Python 3.6, I did this with task.set_exception(), but in 3.7 that > > method was removed. > > > > --Chris > > _______________________________________________ > > Async-sig mailing list > > Async-sig at python.org > > https://mail.python.org/mailman/listinfo/async-sig > > Code of Conduct: https://www.python.org/psf/codeofconduct/ > > > > -- > Thanks, > Andrew Svetlov From andrew.svetlov at gmail.com Tue Feb 19 15:25:26 2019 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 19 Feb 2019 22:25:26 +0200 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: Message-ID: Let's continue discussion on the bug tracker: https://bugs.python.org/issue32363 On Tue, Feb 19, 2019 at 10:23 PM Chris Jerdonek wrote: > > On Tue, Feb 19, 2019 at 12:10 PM Andrew Svetlov > wrote: > > If the task's function swallows CancelledError exception -- it is a > > programming error. > > I was asking if there is a way to end such a task. Is there? 
The only > approach I can think of without having something like set_exception() > is to keep calling cancel() in a loop and waiting (but even that can > fail under certain code), but I'm not sure off-hand if the API > supports calling cancel() more than once. > > Also, I can see this happening even when there is no bug. Maybe the > coroutine was properly written to cancel gracefully, but the caller > doesn't want to continue waiting past a certain time. > > --Chris > > > > The same as generator object technically can swallow GeneratorExit > > (but such code is most likely buggy). > > > > On Tue, Feb 19, 2019 at 9:55 PM Chris Jerdonek wrote: > > > > > > I have an asyncio question. > > > > > > In Python 3.7, is there a way to reliably end a task after having > > > already tried calling cancel() on it and waiting for it to end? > > > > > > In Python 3.6, I did this with task.set_exception(), but in 3.7 that > > > method was removed. > > > > > > --Chris > > > _______________________________________________ > > > Async-sig mailing list > > > Async-sig at python.org > > > https://mail.python.org/mailman/listinfo/async-sig > > > Code of Conduct: https://www.python.org/psf/codeofconduct/ > > > > > > > > -- > > Thanks, > > Andrew Svetlov -- Thanks, Andrew Svetlov From yselivanov at gmail.com Tue Feb 19 15:27:15 2019 From: yselivanov at gmail.com (Yury Selivanov) Date: Tue, 19 Feb 2019 15:27:15 -0500 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: Message-ID: <3F2ABAF3-3F36-47B8-B2C8-FBB2610E85C6@gmail.com> FYI Chris has started a parallel discussion on the same topic here: https://bugs.python.org/issue32363. Chris, let's keep this discussion in one place (now it's this list, I guess). It's hard to handle the same discussion in two different places. Please don't split discussions like this. I'll summarize what I said in the above referenced bpo here: 1. Task.set_result() and Task.set_exception() have never ever worked properly. 
They never actually communicated the set result/exception to the underlying coroutine. The fact that they were exposed at all was a simple oversight. I can guess how Task.set_exception() can be implemented in theory: the exception would be thrown into the wrapped coroutine. But I don't quite understand how Task.set_result() can be implemented at all. 2. Task and coroutine maintain a simple relationship: Task wraps its coroutine. The result of the coroutine is the result of the Task (not the other way around). The Task can request its coroutine to cancel. The coroutine may ignore that request by ignoring the asyncio.CancelledError exception. If the latter happens, the Task cannot terminate the coroutine; this is by design. Moreover, you can always write

    while True:
        try:
            await asyncio.sleep(1)
        except:
            pass

and then nothing can terminate your coroutine. IOW, if your code chooses to ignore CancelledError the Task can do nothing about it. 3. For proper bi-directional communication between coroutines asyncio has queues. One can easily implement a message queue to implement injection of an exception or result into a coroutine. [Chris] > I was asking if there is a way to end such a task. Is there? No, there's no way to end tasks like that. The key question here is: is this a theoretical problem you're concerned with? Or is this something that happens in real-world framework/library/code that you're dealing with? Yury > On Feb 19, 2019, at 2:53 PM, Chris Jerdonek wrote: > > I have an asyncio question. > > In Python 3.7, is there a way to reliably end a task after having > already tried calling cancel() on it and waiting for it to end? > > In Python 3.6, I did this with task.set_exception(), but in 3.7 that > method was removed.
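[Yury's point 3 above, queue-based injection rather than Task.set_exception(), can be sketched like this; `worker` and the control-queue protocol are illustrative conventions, not an asyncio API.]

```python
import asyncio

async def worker(control: asyncio.Queue):
    # The task cooperates: it reads a control queue and raises any
    # exception sent to it, instead of having one forced in from outside.
    while True:
        msg = await control.get()
        if isinstance(msg, BaseException):
            raise msg
        # ... otherwise treat `msg` as a normal work item ...

async def main():
    control = asyncio.Queue()
    task = asyncio.create_task(worker(control))
    await control.put(RuntimeError('shut down now'))
    try:
        await task
    except RuntimeError as exc:
        return str(exc)

result = asyncio.run(main())
print(result)  # shut down now
```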
> > --Chris > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From yselivanov at gmail.com Tue Feb 19 15:33:41 2019 From: yselivanov at gmail.com (Yury Selivanov) Date: Tue, 19 Feb 2019 15:33:41 -0500 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: Message-ID: <73A8E287-45C6-4CC3-BB01-71764F947028@gmail.com> > On Feb 19, 2019, at 3:23 PM, Chris Jerdonek wrote: > > On Tue, Feb 19, 2019 at 12:10 PM Andrew Svetlov > wrote: >> If the task's function swallows CancelledError exception -- it is a >> programming error. > > I was asking if there is a way to end such a task. Is there? The only > approach I can think of without having something like set_exception() > is to keep calling cancel() in a loop and waiting (but even that can > fail under certain code), but I'm not sure off-hand if the API > supports calling cancel() more than once. Unfortunately asyncio isn't super flexible around "cancellation with a timeout" kind of scenarios. The current assumption is that once the cancellation is requested, the Task will start cancelling and will do so in a timely manner. Imposing a second layer of timeouts on the cancellation process itself isn't natively supported. But to properly address this we don't need a very broadly defined Task.set_exception(); we need to rethink the cancellation in asyncio (perhaps draw some inspiration from Trio and other frameworks). 
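[A second layer of timeouts on cancellation can be approximated in user code today. A sketch, assuming Python 3.7+; `stubborn` and `cancel_with_grace` are illustrative names, and `stubborn` simulates the buggy case where code swallows CancelledError.]

```python
import asyncio

async def stubborn(swallow=2):
    # Simulates buggy code that swallows the first `swallow`
    # cancellation requests before finally exiting.
    for _ in range(swallow):
        try:
            await asyncio.sleep(10)
        except asyncio.CancelledError:
            pass

async def cancel_with_grace(task, grace):
    # Request cancellation, then wait at most `grace` seconds for the
    # task to actually finish unwinding.
    task.cancel()
    done, pending = await asyncio.wait({task}, timeout=grace)
    return not pending  # True only if the task really ended in time

async def main():
    task = asyncio.create_task(stubborn())
    await asyncio.sleep(0.1)  # let the task start
    return await cancel_with_grace(task, grace=0.5)

result = asyncio.run(main())
print(result)  # False: the task outlived its grace period
```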
Yury From chris.jerdonek at gmail.com Tue Feb 19 15:36:47 2019 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 19 Feb 2019 12:36:47 -0800 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: <3F2ABAF3-3F36-47B8-B2C8-FBB2610E85C6@gmail.com> References: <3F2ABAF3-3F36-47B8-B2C8-FBB2610E85C6@gmail.com> Message-ID: On Tue, Feb 19, 2019 at 12:27 PM Yury Selivanov wrote: > > FYI Chris has started a parallel discussion on the same topic here: https://bugs.python.org/issue32363. Chris, let's keep this discussion in one place (now it's this list, I guess). It's hard to handle the same discussion in two different places. Please don't split discussions like this. My apologies. My first comment was on the tracker, but then I realized this is a broader discussion, so I moved to this list. For example, it seems related to a discussion happening on the trio tracker re: graceful shutdown and "hard" and "soft" cancellation, etc: https://github.com/python-trio/trio/issues/147 I'm not sure how similar or different cancellation is across various async frameworks. --Chris From yselivanov at gmail.com Tue Feb 19 15:39:19 2019 From: yselivanov at gmail.com (Yury Selivanov) Date: Tue, 19 Feb 2019 15:39:19 -0500 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: <3F2ABAF3-3F36-47B8-B2C8-FBB2610E85C6@gmail.com> Message-ID: <6BE0A148-D258-4CA7-9354-39CED3EBFC08@gmail.com> Thanks for referencing that issue, I'll check it out. I'm also quite curious what Nathaniel thinks about this problem and how he thinks he'll handle it in Trio. Yury > On Feb 19, 2019, at 3:36 PM, Chris Jerdonek wrote: > > On Tue, Feb 19, 2019 at 12:27 PM Yury Selivanov wrote: >> >> FYI Chris has started a parallel discussion on the same topic here: https://bugs.python.org/issue32363. Chris, let's keep this discussion in one place (now it's this list, I guess). It's hard to handle the same discussion in two different places. Please don't split discussions like this. 
> > My apologies. My first comment was on the tracker, but then I realized > this is a broader discussion, so I moved to this list. For example, it > seems related to a discussion happening on the trio tracker re: > graceful shutdown and "hard" and "soft" cancellation, etc: > https://github.com/python-trio/trio/issues/147 > I'm not sure how similar or different cancellation is across various > async frameworks. > > --Chris From chris.jerdonek at gmail.com Tue Feb 19 15:48:28 2019 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 19 Feb 2019 12:48:28 -0800 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: <73A8E287-45C6-4CC3-BB01-71764F947028@gmail.com> References: <73A8E287-45C6-4CC3-BB01-71764F947028@gmail.com> Message-ID: On Tue, Feb 19, 2019 at 12:33 PM Yury Selivanov wrote: > Unfortunately asyncio isn't super flexible around "cancellation with a > timeout" kind of scenarios. The current assumption is that once the > cancellation is requested, the Task will start cancelling and will do so in > a timely manner. Imposing a second layer of timeouts on the cancellation > process itself isn't natively supported. But to properly address this we > don't need a very broadly defined Task.set_exception(); Yes, I agree. I was just using Task.set_exception() because that is all that was available. (And I agree set_result() isn't needed.) --Chris > we need to rethink the cancellation in asyncio (perhaps draw some > inspiration from Trio and other frameworks). > > Yury -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.jerdonek at gmail.com Tue Feb 19 18:53:01 2019 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 19 Feb 2019 15:53:01 -0800 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: <73A8E287-45C6-4CC3-BB01-71764F947028@gmail.com> References: <73A8E287-45C6-4CC3-BB01-71764F947028@gmail.com> Message-ID: On Tue, Feb 19, 2019 at 12:33 PM Yury Selivanov wrote: > Unfortunately asyncio isn't super flexible around "cancellation with a > timeout" kind of scenarios. The current assumption is that once the > cancellation is requested, the Task will start cancelling and will do so in > a timely manner. Imposing a second layer of timeouts on the cancellation > process itself isn't natively supported. But to properly address this we > don't need a very broadly defined Task.set_exception(); we need to rethink > the cancellation in asyncio (perhaps draw some inspiration from Trio and > other frameworks). > What options have you already considered for asyncio's API? A couple naive things that occur to me are adding task.kill() with higher priority than task.cancel() (or equivalently task.graceful_cancel() with lower priority). Was task.cancel() meant more to have the meaning of "kill" or "graceful cancel"? In addition, the graceful version of the two (whichever that may be) could accept a timeout argument -- after which the exception of higher priority is raised. I realize this is a more simplistic model compared to the options trio is considering, but asyncio has already gone down the path of the simpler approach. --Chris > > Yury -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From njs at pobox.com Tue Feb 19 22:41:00 2019 From: njs at pobox.com (Nathaniel Smith) Date: Tue, 19 Feb 2019 19:41:00 -0800 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: <73A8E287-45C6-4CC3-BB01-71764F947028@gmail.com> Message-ID: I'm not sure what a "higher priority" exception is...raising an exception is hard to miss. There are a few things Trio does differently here that might be relevant, but it depends on why Chris is having trouble cancelling tasks. 1. Trio's cancellation exception, trio.Cancelled, inherits from BaseException instead of Exception, like KeyboardInterrupt or StopIteration. So 'except Exception' doesn't catch it by accident. 2. Trio's cancellation is "stateful": if your code is in the cancelled state, then every time you try to do an async operation then it raises trio.Cancelled again. So you avoid the case where a tasks gets stuck, someone forces it to raise CancelledError, and then it has a 'finally' block that tries to do some cleanup... but the 'finally' block also gets stuck. In trio the 'finally' block can't accidentally get stuck. 3. Both of these features are somewhat dependent on trio using "delimited" cancellation. Before you can cancel something, you have to say how far you want to unwind. This means that there's never any reason for anyone to try to catch 'Cancelled' on purpose, because trio will catch it for you at the appropriate moment. And it's hard to do 'stateful' cancellation if you don't know how long the state is supposed to persist. And, you avoid cases where some code thinks it just threw in a CancelledError and is supposed to catch it, but actually it was thrown in from some other stack frame, and it ends up confusedly catching the wrong exception. I'm not sure how much of this could be adapted for asyncio. The obvious change would be to make asyncio.CancelledError a BaseException, though it seems borderline to me from a back-compat perspective. 
I think I remember Yury was thinking about changing it anyway, though? That would definitely help with the 'except Exception' kind of mistake. But the other issues are deeper. If you don't have a solid system for keeping track of what exactly is supposed to be cancelled, then it's easy to accidentally cancel too much, or cancel too little. Solving that requires a systematic approach. And unfortunately, asyncio already has 2 different sets of cancellation semantics (Future.cancel -> takes effect immediately, irrevocable & idempotent, doesn't necessarily cause the underlying machinery to stop processing, just stops it from reporting its result; Task.cancel -> doesn't take effect immediately or necessarily at all, can be called repeatedly and injects one CancelledError per call, tries to stop the underlying machinery, chains to other tasks/futures that the first task is await'ing). So if our goal is to make the system as a whole as reliable and predictable as possible within the constraints of back-compat... I don't know whether adding a third set of semantics would actually help, or make more code confused about what it was supposed to be catching. And I don't know if any of these actually address whatever problem you're having with uncancellable tasks. It's certainly possible to make an uncancellable task in Trio too. We just try to make it hard to do by accident. -n On Tue, Feb 19, 2019 at 3:53 PM Chris Jerdonek wrote: > > On Tue, Feb 19, 2019 at 12:33 PM Yury Selivanov wrote: >> >> Unfortunately asyncio isn't super flexible around "cancellation with a timeout" kind of scenarios. The current assumption is that once the cancellation is requested, the Task will start cancelling and will do so in a timely manner. Imposing a second layer of timeouts on the cancellation process itself isn't natively supported. 
But to properly address this we don't need a very broadly defined Task.set_exception(); we need to rethink the cancellation in asyncio (perhaps draw some inspiration from Trio and other frameworks). > > > What options have you already considered for asyncio's API? A couple naive things that occur to me are adding task.kill() with higher priority than task.cancel() (or equivalently task.graceful_cancel() with lower priority). Was task.cancel() meant more to have the meaning of "kill" or "graceful cancel"? In addition, the graceful version of the two (whichever that may be) could accept a timeout argument -- after which the exception of higher priority is raised. I realize this is a more simplistic model compared to the options trio is considering, but asyncio has already gone down the path of the simpler approach. > > --Chris > > >> >> >> Yury > > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ -- Nathaniel J. Smith -- https://vorpus.org From chris.jerdonek at gmail.com Tue Feb 19 23:03:29 2019 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 19 Feb 2019 20:03:29 -0800 Subject: [Async-sig] killing tasks that won't cancel In-Reply-To: References: <73A8E287-45C6-4CC3-BB01-71764F947028@gmail.com> Message-ID: On Tue, Feb 19, 2019 at 7:41 PM Nathaniel Smith wrote: > I'm not sure what a "higher priority" exception is...raising an > exception is hard to miss. > Just quickly on this one point: that was just my colloquial way of saying superclass (or at least not a subclass), to emphasize that if you're e.g. catching CancelledError, this new exception would bubble up whereas CancelledError wouldn't. A similar example is KeyboardInterrupt not being caught by Exception. In the case of CancelledError, we probably would want the two exceptions to be comparable to one another rather than incomparable. 
I seem to recall for example there being an old discussion as to whether CancelledError should inherit from Exception or not. --Chris > > There are a few things Trio does differently here that might be > relevant, but it depends on why Chris is having trouble cancelling > tasks. > > 1. Trio's cancellation exception, trio.Cancelled, inherits from > BaseException instead of Exception, like KeyboardInterrupt or > StopIteration. So 'except Exception' doesn't catch it by accident. > > 2. Trio's cancellation is "stateful": if your code is in the cancelled > state, then every time you try to do an async operation then it raises > trio.Cancelled again. So you avoid the case where a tasks gets stuck, > someone forces it to raise CancelledError, and then it has a 'finally' > block that tries to do some cleanup... but the 'finally' block also > gets stuck. In trio the 'finally' block can't accidentally get stuck. > > 3. Both of these features are somewhat dependent on trio using > "delimited" cancellation. Before you can cancel something, you have to > say how far you want to unwind. This means that there's never any > reason for anyone to try to catch 'Cancelled' on purpose, because trio > will catch it for you at the appropriate moment. And it's hard to do > 'stateful' cancellation if you don't know how long the state is > supposed to persist. And, you avoid cases where some code thinks it > just threw in a CancelledError and is supposed to catch it, but > actually it was thrown in from some other stack frame, and it ends up > confusedly catching the wrong exception. > > I'm not sure how much of this could be adapted for asyncio. The > obvious change would be to make asyncio.CancelledError a > BaseException, though it seems borderline to me from a back-compat > perspective. I think I remember Yury was thinking about changing it > anyway, though? That would definitely help with the 'except Exception' > kind of mistake. > > But the other issues are deeper. 
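[Nathaniel's point about `except Exception` is easy to demonstrate with a stand-in class; this `Cancelled` is an illustration, not trio's actual class.]

```python
class Cancelled(BaseException):
    # Stand-in for a trio.Cancelled-style exception: deriving from
    # BaseException keeps broad `except Exception` from catching it.
    pass

def careless():
    try:
        raise Cancelled()
    except Exception:
        return 'swallowed'  # never runs: Cancelled is not an Exception

try:
    careless()
    outcome = 'swallowed'
except Cancelled:
    outcome = 'propagated'

print(outcome)  # propagated
```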
If you don't have a solid system for > keeping track of what exactly is supposed to be cancelled, then it's > easy to accidentally cancel too much, or cancel too little. Solving > that requires a systematic approach. And unfortunately, asyncio > already has 2 different sets of cancellation semantics (Future.cancel > -> takes effect immediately, irrevocable & idempotent, doesn't > necessarily cause the underlying machinery to stop processing, just > stops it from reporting its result; Task.cancel -> doesn't take effect > immediately or necessarily at all, can be called repeatedly and > injects one CancelledError per call, tries to stop the underlying > machinery, chains to other tasks/futures that the first task is > await'ing). So if our goal is to make the system as a whole as > reliable and predictable as possible within the constraints of > back-compat... I don't know whether adding a third set of semantics > would actually help, or make more code confused about what it was > supposed to be catching. > > And I don't know if any of these actually address whatever problem > you're having with uncancellable tasks. It's certainly possible to > make an uncancellable task in Trio too. We just try to make it hard to > do by accident. > > -n > > On Tue, Feb 19, 2019 at 3:53 PM Chris Jerdonek > wrote: > > > > On Tue, Feb 19, 2019 at 12:33 PM Yury Selivanov > wrote: > >> > >> Unfortunately asyncio isn't super flexible around "cancellation with a > timeout" kind of scenarios. The current assumption is that once the > cancellation is requested, the Task will start cancelling and will do so in > a timely manner. Imposing a second layer of timeouts on the cancellation > process itself isn't natively supported. But to properly address this we > don't need a very broadly defined Task.set_exception(); we need to rethink > the cancellation in asyncio (perhaps draw some inspiration from Trio and > other frameworks). 
> > > > > > What options have you already considered for asyncio's API? A couple > naive things that occur to me are adding task.kill() with higher priority > than task.cancel() (or equivalently task.graceful_cancel() with lower > priority). Was task.cancel() meant more to have the meaning of "kill" or > "graceful cancel"? In addition, the graceful version of the two (whichever > that may be) could accept a timeout argument -- after which the exception > of higher priority is raised. I realize this is a more simplistic model > compared to the options trio is considering, but asyncio has already gone > down the path of the simpler approach. > > > > --Chris > > > > > >> > >> > >> Yury > > > > _______________________________________________ > > Async-sig mailing list > > Async-sig at python.org > > https://mail.python.org/mailman/listinfo/async-sig > > Code of Conduct: https://www.python.org/psf/codeofconduct/ > > > > -- > Nathaniel J. Smith -- https://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From 0zeroth at gmail.com Mon Feb 25 14:06:20 2019 From: 0zeroth at gmail.com (Josh Quigley) Date: Tue, 26 Feb 2019 06:06:20 +1100 Subject: [Async-sig] Reliably make unhandled exceptions crash the event loop Message-ID: Hi, I have been trying to make unhandled exceptions reliably crash the event loop (eg replicated behaviour of boost::asio, for those familiar with that C++ library). I'm aiming to have any exception bubble up from run_forever or run_until_complete style functions. I had thought I had a perfectly acceptable solution and then hit a strange case that threw my understanding of the way the loop worked. In 'test_b' below, the only difference is we keep a reference to the created task. This test hangs - the exception is raised, but the custom exception handler is never called. I'd be very interested to understand exactly why this happens. 
I'd also appreciate any feedback on the best way to reliably crash the event loop on unhandled exceptions (my next attempt will be to replace AbstractEventLoop.call_exception_handler and see what happens).

# example.py
import asyncio


class CustomException(RuntimeError):
    pass


class UnhandledExceptionError(RuntimeError):
    pass


def run_until_unhandled_exception(*, loop=None):
    """Run the event loop until there is an unhandled error in a callback

    This function sets the exception handler on the loop
    """
    loop = loop if loop is not None else asyncio.get_event_loop()
    ex = []

    def handler(loop, context):
        print('handler')
        loop.default_exception_handler(context)
        loop.stop()
        ex.append(context.get('exception'))

    loop.set_exception_handler(handler)
    loop.run_forever()
    if len(ex) > 0:
        raise UnhandledExceptionError('Unhandled exception in loop') from ex[0]


async def fail_after(delay):
    await asyncio.sleep(delay)
    print('raise CustomException(...)')
    raise CustomException(f'fail_after(delay={delay})')


async def finish_after(delay):
    await asyncio.sleep(delay)
    return delay


def test_a(event_loop):
    event_loop.create_task(fail_after(0.01))
    run_until_unhandled_exception(loop=event_loop)


def test_b(event_loop):
    # Only difference from test_a: a reference to the task is kept
    task = event_loop.create_task(fail_after(0.01))
    run_until_unhandled_exception(loop=event_loop)


def run_test(test):
    try:
        test(asyncio.get_event_loop())
    except Exception as ex:
        print(ex)


if __name__ == '__main__':
    run_test(test_a)
    run_test(test_b)  # This hangs

-------------- next part -------------- An HTML attachment was scrubbed... URL: From 0zeroth at gmail.com Mon Feb 25 19:14:47 2019 From: 0zeroth at gmail.com (Josh Quigley) Date: Tue, 26 Feb 2019 11:14:47 +1100 Subject: [Async-sig] Reliably make unhandled exceptions crash the event loop In-Reply-To: References: Message-ID: I've realised the error of my ways: because Task separates the scheduling from the response handling, you cannot know if an exception is unhandled until the task is deleted.
So in my example the reference means the task is not deleted, so the exception is not yet unhandled. This is in contrast to APIs like call_soon(callable, success_callback, error_callback) where the possibility of delayed error handling is not present. In that case the loop can reliably crash if either callback raises an exception. So, the 'solution' to this use-case is to always attach error handlers to Tasks. A catch-all solution cannot catch every error case. On Tue., 26 Feb. 2019, 6:06 am Josh Quigley, <0zeroth at gmail.com> wrote: > Hi, > > I have been trying to make unhandled exceptions reliably crash the event > loop (e.g. replicating the behaviour of boost::asio, for those familiar with > that C++ library). I'm aiming to have any exception bubble up from run_forever > or run_until_complete style functions. I had thought I had a perfectly > acceptable solution and then hit a strange case that threw my understanding > of the way the loop worked. > > In 'test_b' below, the only difference is we keep a reference to the > created task. This test hangs - the exception is raised, but the custom > exception handler is never called. > > I'd be very interested to understand exactly why this happens. I'd also > appreciate any feedback on the best way to reliably crash the event loop on > unhandled exceptions (my next attempt will be to replace AbstractEventLoop.call_exception_handler > and see what happens).
>
>
> # example.py
> import asyncio
>
> class CustomException(RuntimeError):
>     pass
>
>
> class UnhandledExceptionError(RuntimeError):
>     pass
>
> def run_until_unhandled_exception(*, loop=None):
>     """Run the event loop until there is an unhandled error in a callback
>
>     This function sets the exception handler on the loop
>     """
>     loop = loop if loop is not None else asyncio.get_event_loop()
>     ex = []
>
>     def handler(loop, context):
>         print('handler')
>         loop.default_exception_handler(context)
>         loop.stop()
>         ex.append(context.get('exception'))
>
>     loop.set_exception_handler(handler)
>     loop.run_forever()
>     if len(ex) > 0:
>         raise UnhandledExceptionError('Unhandled exception in loop') from ex[0]
>
> async def fail_after(delay):
>     await asyncio.sleep(delay)
>     print('raise CustomException(...)')
>     raise CustomException(f'fail_after(delay={delay})')
>
> async def finish_after(delay):
>     await asyncio.sleep(delay)
>     return delay
>
>
> def test_a(event_loop):
>     event_loop.create_task(fail_after(0.01))
>     run_until_unhandled_exception(loop=event_loop)
>
> def test_b(event_loop):
>     task = event_loop.create_task(fail_after(0.01))
>     run_until_unhandled_exception(loop=event_loop)
>
> def run_test(test):
>     try:
>         test(asyncio.get_event_loop())
>     except Exception as ex:
>         print(ex)
>
> if __name__ == '__main__':
>     run_test(test_a)
>     run_test(test_b)  # This hangs
>

-------------- next part -------------- An HTML attachment was scrubbed... URL: From njs at pobox.com Mon Feb 25 20:14:42 2019 From: njs at pobox.com (Nathaniel Smith) Date: Mon, 25 Feb 2019 17:14:42 -0800 Subject: [Async-sig] Reliably make unhandled exceptions crash the event loop In-Reply-To: References: Message-ID: On Mon, Feb 25, 2019 at 4:15 PM Josh Quigley <0zeroth at gmail.com> wrote: > > I've realised the error of my ways: because Task separates the scheduling from the response handling, you cannot know if an exception is unhandled until the task is deleted.
So in my example the reference means the task is not deleted, so the exception is not yet unhandled. > > This is in contrast to APIs like call_soon(callable, success_callback, error_callback) where the possibility of delayed error handling is not present. In that case the loop can reliably crash if either callback raises an exception. > > So, the 'solution' to this use-case is to always attach error handlers to Tasks. A catch-all solution cannot catch every error case. That's right. There are other ways to structure async code to avoid running into these cases, that are implemented in Trio, and there are discussions happening (slowly) about adding them into asyncio as well. See: https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/ Also, I could swear I saw some library that tried to implement nurseries on asyncio, but I can't find it now... :-/ maybe someone else here knows? -n -- Nathaniel J. Smith -- https://vorpus.org From contact at ovv.wtf Tue Feb 26 03:21:45 2019 From: contact at ovv.wtf (Ovv) Date: Tue, 26 Feb 2019 09:21:45 +0100 Subject: [Async-sig] Reliably make unhandled exceptions crash the event loop In-Reply-To: References: Message-ID: Maybe this one https://github.com/malinoff/aionursery ? On 26/02/19 02:14, Nathaniel Smith wrote: > On Mon, Feb 25, 2019 at 4:15 PM Josh Quigley <0zeroth at gmail.com> wrote: >> I've realised the error of my ways: because Task separates the scheduling from the response handling, you cannot know if an exception is unhandled until the task is deleted. So in my example the reference means the task is not deleted, so the exception is not yet unhandled. >> >> This is in contrast to APIs like call_soon(callable, success_callback, error_callback) where the possibility of delayed error handling is not present. In that case the loop can reliably crash if either callback raises an exception. >> >> So, the 'solution' to this use-case is to always attach error handlers to Tasks.
A catch-all solution cannot catch every error case. > That's right. There are other ways to structure async code to avoid > running into these cases, that are implemented in Trio, and there are > discussions happening (slowly) about adding them into asyncio as well. > See: > > https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/ > > Also, I could swear I saw some library that tried to implement > nurseries on asyncio, but I can't find it now... :-/ maybe someone > else here knows? > > -n > From 0zeroth at gmail.com Tue Feb 26 18:18:38 2019 From: 0zeroth at gmail.com (Josh Quigley) Date: Wed, 27 Feb 2019 10:18:38 +1100 Subject: [Async-sig] Reliably make unhandled exceptions crash the event loop In-Reply-To: References: Message-ID: That's a great way of thinking about async structure - and perhaps surprisingly (since switching from asyncio or trying fledgling implementations as part of my day job is a no-go ;) immediately useful. A large part of what I do is wrap existing libraries so they can be used with asyncio - the idea being that once wrapped correctly it becomes easy to throw together applications quickly and correctly. For example 'cassandra' or the Python gRPC libraries. These both offer async-ish style APIs, smattered with some callback style stuff and a handful of functions that block but probably shouldn't. There are no standard examples of 'best practice' in how to do this - the asyncio docs focus on using existing asyncio components. As a result I end up with stuff that works until I need to worry about error handling, cancellation, restarting components and then I cry my heart out in the mailing lists and generally make a mess. My key takeaway after skimming your blogs is that implementations should 'respect causality'.
Aim for await my_implementation() to not spawn any anonymous tasks that can't be controlled, to only complete when it really has finished everything it started under the hood, and to correctly respect cancellation. Limit APIs to coroutines only (ie limit yourself to a 'curio' style) to make things simpler to reason about. If you must spawn tasks, keep them in logical groups - eg within a single function (or nursery if you have such an implementation) and make sure they are all finished before the function ends. It seems to me like these are good guiding principles to knock together robust async/await APIs. At any rate, I'll keep them in mind and see if my next attempt ends up with less subtle problems to worry about. Thanks - and I look forward to really getting to grips with the detail of asynchronous design! On Tue, 26 Feb 2019 at 12:14, Nathaniel Smith wrote: > On Mon, Feb 25, 2019 at 4:15 PM Josh Quigley <0zeroth at gmail.com> wrote: > > > > I've realised the error of my ways: because Task separates the > scheduling from the response handling, you cannot know if an exception is > unhandled until the task is deleted. So in my example the reference means > the task is not deleted, so the exception is not yet unhandled. > > > > This is in contrast to APIs like call_soon(callable, success_callback, > error_callback) where the possibility of delayed error handling is > not present. In that case the loop can reliably crash if either callback > raises an exception. > > > > So, the 'solution' to this use-case is to always attach error handlers to > Tasks. A catch-all solution cannot catch every error case. > > That's right. There are other ways to structure async code to avoid > running into these cases, that are implemented in Trio, and there are > discussions happening (slowly) about adding them into asyncio as well.
> See: > > > https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/ > > Also, I could swear I saw some library that tried to implement > nurseries on asyncio, but I can't find it now... :-/ maybe someone > else here knows? > > -n > > -- > Nathaniel J. Smith -- https://vorpus.org > -------------- next part -------------- An HTML attachment was scrubbed... URL:
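The thread's takeaway ('always attach error handlers to Tasks') can be sketched as follows. This is an illustrative pattern, not code from the thread, and the helper name crash_on_unhandled is invented here. Unlike the loop's exception handler, which only fires when the garbage collector notices a task whose exception was never retrieved, a done callback fires as soon as the task finishes, whether or not a reference to the task is still held, so it avoids the test_b hang discussed above.

```python
import asyncio


def crash_on_unhandled(loop, task):
    """Stop `loop` as soon as `task` finishes with an unhandled exception."""
    def _done(t):
        # exception() raises CancelledError on a cancelled task,
        # so guard with cancelled() first.
        if not t.cancelled() and t.exception() is not None:
            loop.stop()
    task.add_done_callback(_done)
    return task


async def fail_after(delay):
    await asyncio.sleep(delay)
    raise RuntimeError(f'fail_after(delay={delay})')


loop = asyncio.new_event_loop()
# Keeping a reference to the task is harmless here, because the done
# callback does not depend on the task being garbage-collected.
task = crash_on_unhandled(loop, loop.create_task(fail_after(0.01)))
loop.run_forever()          # returns once fail_after raises
error = task.exception()    # retrieve it so no warning is logged
loop.close()
print(type(error).__name__)  # RuntimeError
```

The same callback could append the exception to a list and re-raise it after run_forever returns, as in Josh's run_until_unhandled_exception above.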
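Josh's grouping principle (spawn tasks only in logical groups and make sure they are all finished before the function ends) can be sketched with plain asyncio.gather; the coroutine names worker and main are invented for illustration. In later Python versions (3.11+), asyncio.TaskGroup provides this pattern natively and additionally cancels the sibling tasks when one of them fails.

```python
import asyncio


async def worker(name, delay):
    await asyncio.sleep(delay)
    return name


async def main():
    # Every coroutine started here is awaited before main() returns, so
    # `await main()` completes only when all the work it spawned has
    # actually finished -- no anonymous tasks leak out.
    return await asyncio.gather(
        worker('a', 0.01),
        worker('b', 0.02),
    )


results = asyncio.run(main())
print(results)  # ['a', 'b'] -- gather preserves argument order
```

This is the 'respect causality' rule in miniature: the caller can reason about main() as a single unit whose side effects are over when the await completes.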