From chris.jerdonek at gmail.com Sat Jul 1 06:11:07 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sat, 1 Jul 2017 03:11:07 -0700 Subject: [Async-sig] async testing question Message-ID: I have a question about testing async code. Say I have a coroutine: async def do_things(): await do_something() await do_more() await do_even_more() And future: task = ensure_future(do_things()) Is there a way to write a test case to check that task.cancel() would behave correctly if, say, do_things() is waiting at the line do_more()? In real life, this situation can happen if a function like the following is called, and an exception happens in one of the given tasks. One of the tasks in the "pending" list could be at the line do_more(). done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_EXCEPTION) But in a testing situation, you don't necessarily have control over where each task ends up when FIRST_EXCEPTION occurs. Thanks, --Chris From njs at pobox.com Sat Jul 1 06:35:00 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 1 Jul 2017 03:35:00 -0700 Subject: [Async-sig] async documentation methods In-Reply-To: References: Message-ID: If we're citing curio and sphinxcontrib-asyncio I guess I'll also mention sphinxcontrib-trio [1], which was inspired by both of them (and isn't in any way specific to trio). I don't know if the python docs can use third-party sphinx extensions, though, and it is a bit opinionated (in particular it calls async functions async functions instead of coroutines). For the original text, I'd probably write something like:: You acquire a lock by calling ``await lock.acquire()``, and release it with ``lock.release()``. -n [1] https://sphinxcontrib-trio.readthedocs.io/en/latest/ On Fri, Jun 30, 2017 at 8:31 AM, Brett Cannon wrote: > Curio uses `.. asyncfunction:: acquire` and it renders as `await acquire()` > at least in the function definition. 
> > On Fri, 30 Jun 2017 at 03:36 Andrew Svetlov > wrote: >> >> I like "two methods, `async acquire()` and `release()`" >> >> Regarding to extra markups -- I created sphinxcontrib-asyncio [1] library >> for it. Hmm, README is pretty empty but we do use the library for >> documenting aio-libs and aiohttp [2] itself >> >> We use ".. comethod:: connect(request)" for method and "cofunction" for >> top level functions. >> >> Additional markup for methods that could be used as async context >> managers: >> >> .. comethod:: delete(url, **kwargs) >> :async-with: >> :coroutine: >> >> and `:async-for:` for async iterators. >> >> >> 1. https://github.com/aio-libs/sphinxcontrib-asyncio >> 2. https://github.com/aio-libs/aiohttp >> >> On Fri, Jun 30, 2017 at 1:11 PM Dima Tisnek wrote: >>> >>> Hi all, >>> >>> I'm working to improve async docs, and I wonder if/how async methods >>> ought to be marked in the documentation, for example >>> library/async-sync.rst: >>> >>> """ ... It [lock] has two basic methods, `acquire()` and `release()`. ... >>> """ >>> >>> In fact, these methods are not symmetric, the earlier is asynchronous >>> and the latter synchronous: >>> >>> Definitions are `async def acquire()` and `def release()`. >>> Likewise user is expected to call `await .acquire()` and `.release()`. >>> >>> This is user-facing documentation, IMO it should be clearer. >>> Although there are examples for this specific case, I'm concerned with >>> general documentation best practice. >>> >>> Should this example read, e.g.: >>> * two methods, `async acquire()` and `release()` >>> or perhaps >>> * two methods, used `await x.acquire()` and `x.release()` >>> or something else? >>> >>> If there's a good example already Python docs or in some 3rd party >>> docs, please tell. >>> >>> Likewise, should there be marks on iterators? async generators? things >>> that ought to be used as context managers? >>> >>> Cheers, >>> d. 
>>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> >> -- >> Thanks, >> Andrew Svetlov >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > > > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -- Nathaniel J. Smith -- https://vorpus.org From dimaqq at gmail.com Sat Jul 1 06:49:20 2017 From: dimaqq at gmail.com (Dima Tisnek) Date: Sat, 1 Jul 2017 12:49:20 +0200 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: Hi Chris, This specific test is easy to write (mock the first to return a resolved future, the 2nd to block, and the 3rd to assert False). OTOH the complexity of the general case is unbounded and generally exponential. It's akin to testing multithreaded code. (There's an academic publication from Microsoft where they built a runtime that would run each test many times, with the scheduler rigged to order runnable tasks differently on each run. I hope someone rewrites this for asyncio.) Certainly [better] tools are needed, and ultimately it's a tradeoff between sane/understandable/maintainable tests and testing deeper/more corner cases. Just my 2c... On Jul 1, 2017 12:11, "Chris Jerdonek" wrote: > I have a question about testing async code.
> > Say I have a coroutine: > > async def do_things(): > await do_something() > await do_more() > await do_even_more() > > And future: > > task = ensure_future(do_things()) > > Is there a way to write a test case to check that task.cancel() would > behave correctly if, say, do_things() is waiting at the line > do_more()? > > In real life, this situation can happen if a function like the > following is called, and an exception happens in one of the given > tasks. One of the tasks in the "pending" list could be at the line > do_more(). > > done, pending = await asyncio.wait(tasks, > return_when=asyncio.FIRST_EXCEPTION) > > But in a testing situation, you don't necessarily have control over > where each task ends up when FIRST_EXCEPTION occurs. > > Thanks, > --Chris > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > From yselivanov at gmail.com Sat Jul 1 07:15:43 2017 From: yselivanov at gmail.com (Yury Selivanov) Date: Sat, 1 Jul 2017 07:15:43 -0400 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: > On Jul 1, 2017, at 6:49 AM, Dima Tisnek wrote: > > There's an academic publication from Microsoft where they built a runtime that would run each test really many times, where scheduler is rigged to order runnable tasks differently on each run. I hope someone rewrites this for asyncio Do you have a link to the publication?
Yury From dimaqq at gmail.com Sat Jul 1 08:13:08 2017 From: dimaqq at gmail.com (Dima Tisnek) Date: Sat, 1 Jul 2017 14:13:08 +0200 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: GAMBIT: Effective Unit Testing for Concurrency Libraries https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/gambit-ppopp2010.pdf There are related publications, but I'm pretty sure that's the right research group. On 1 July 2017 at 13:15, Yury Selivanov wrote: > >> On Jul 1, 2017, at 6:49 AM, Dima Tisnek wrote: >> >> There's an academic publication from Microsoft where they built a runtime that would run each test really many times, where scheduler is rigged to order runnable tasks differently on each run. I hope someone rewrites this for asyncio > > Do you have a link to the publication? > > Yury From chris.jerdonek at gmail.com Sat Jul 1 16:06:24 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sat, 1 Jul 2017 13:06:24 -0700 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: On Sat, Jul 1, 2017 at 3:49 AM, Dima Tisnek wrote: > Hi Chris, > > This specific test is easy to write (mock first to return a resolved future, > 2nd to block and 3rd to assert False) Saying it's easy doesn't necessarily help the questioner. :) Issues around combinatorics I understand. It's more the mechanics of the basic testing pattern I'd like advice on. For example, if I mock the second function to be blocking, how do I invoke the higher-level function in a way so I can continue at the point where the second function blocks? And without introducing brittleness or relying on implementation details of the event loop? (By the way, it seems you wouldn't want to mock the third function in cases like if the proper handling of task.cancel() depends on the behavior of the third function, for example if CancelledError is being caught.) --Chris > > OTOH complexity of the general case is unbounded and generally exponential. 
> It's akin to testing multithreaded code. > (There's an academic publication from Microsoft where they built a runtime > that would run each test really many times, where scheduler is rigged to > order runnable tasks differently on each run. I hope someone rewrites this > for asyncio) > > Certainty [better] tools are needed, and ultimately it's a tradeoff between > sane/understable/maintainable tests and testing deeper/more corner cases. > > Just my 2c... > > On Jul 1, 2017 12:11, "Chris Jerdonek" wrote: >> >> I have a question about testing async code. >> >> Say I have a coroutine: >> >> async def do_things(): >> await do_something() >> await do_more() >> await do_even_more() >> >> And future: >> >> task = ensure_future(do_things()) >> >> Is there a way to write a test case to check that task.cancel() would >> behave correctly if, say, do_things() is waiting at the line >> do_more()? >> >> In real life, this situation can happen if a function like the >> following is called, and an exception happens in one of the given >> tasks. One of the tasks in the "pending" list could be at the line >> do_more(). >> >> done, pending = await asyncio.wait(tasks, >> return_when=asyncio.FIRST_EXCEPTION) >> >> But in a testing situation, you don't necessarily have control over >> where each task ends up when FIRST_EXCEPTION occurs. >> >> Thanks, >> --Chris >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ From njs at pobox.com Sat Jul 1 16:42:10 2017 From: njs at pobox.com (Nathaniel Smith) Date: Sat, 1 Jul 2017 13:42:10 -0700 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: On Jul 1, 2017 3:11 AM, "Chris Jerdonek" wrote: I have a question about testing async code. 
Say I have a coroutine: async def do_things(): await do_something() await do_more() await do_even_more() And future: task = ensure_future(do_things()) Is there a way to write a test case to check that task.cancel() would behave correctly if, say, do_things() is waiting at the line do_more()? One possibility for handling this case with a minimum of mocking would be to hook do_more so that it calls task.cancel and then calls the regular do_more. Beyond that it depends on what the actual functions are, I guess. If do_more naturally blocks under some conditions then you might be able to set up those conditions and then call cancel. Or you could try experimenting with tests that call sleep(0) a fixed number of times before issuing the cancel, and repeat with different iteration counts to find different cancel points. (This would benefit from some kind of collaboration with the scheduler, but even a simple hack like this will probably get you more coverage than you had before. It does assume that your test never actually sleeps though.) -n From chris.jerdonek at gmail.com Sat Jul 1 17:00:57 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sat, 1 Jul 2017 14:00:57 -0700 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: On Sat, Jul 1, 2017 at 1:42 PM, Nathaniel Smith wrote: > On Jul 1, 2017 3:11 AM, "Chris Jerdonek" wrote: > Is there a way to write a test case to check that task.cancel() would > behave correctly if, say, do_things() is waiting at the line > do_more()? > > One possibility for handling this case with a minimum of mocking would be to > hook do_more so that it calls task.cancel and then calls the regular > do_more. > > Beyond that it depends on what the actual functions are, I guess. If do_more > naturally blocks under some conditions then you might be able to set up > those conditions and then call cancel.
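[Editor's sketch: the hook that Nathaniel describes, where the mocked-in do_more cancels the task and then calls the regular do_more, might look roughly like this. The coroutine bodies, the logging list, and the `second=` injection parameter are all hypothetical stand-ins, since the thread leaves the real functions abstract.]

```python
import asyncio

log = []

# Hypothetical stand-ins for do_something/do_more/do_even_more; the
# thread's example leaves their bodies abstract.
async def do_something():
    log.append("do_something")

async def do_more():
    log.append("do_more")
    await asyncio.sleep(0)  # a real suspension point, so cancellation can land here

async def do_even_more():
    log.append("do_even_more")

async def do_things(second=do_more):
    # `second` is a hypothetical injection point standing in for a mock.
    await do_something()
    await second()
    await do_even_more()

async def main():
    task = None

    async def hooked_do_more():
        # The hook: request cancellation of the outer task, then call the
        # regular do_more; CancelledError is delivered at its next await.
        task.cancel()
        await do_more()

    task = asyncio.ensure_future(do_things(second=hooked_do_more))
    try:
        await task
    except asyncio.CancelledError:
        pass
    return task.cancelled()

loop = asyncio.new_event_loop()
cancelled = loop.run_until_complete(main())
loop.close()
```

Here `cancelled` ends up True and `log` stops at "do_more": the task is cancelled while suspended inside do_more, and do_even_more never runs. Note that if the mocked-in coroutine never actually suspends, the cancellation is only delivered when the task next yields to the loop (or finishes), which is why the sketch gives do_more a real await point.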
Or you could try experimenting with > tests that call sleep(0) a fixed number of times before issuing the cancel, > and repeat with different iteration counts to find different cancel points. Thanks, Nathaniel. The following would be overkill in my case, but your suggestion makes me wonder if it would make sense for there to be testing tools that have functions to do things like "run the event loop until is at ." Do such things exist? This is a little bit related to what Dima was saying about tools. --Chris From dimaqq at gmail.com Mon Jul 3 13:39:07 2017 From: dimaqq at gmail.com (Dima Tisnek) Date: Mon, 3 Jul 2017 19:39:07 +0200 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: I'd say mock the 2nd to `await time.sleep(1); assert False, "should not happen"`; the assert is there as a guard in case the test harness or the code under test is broken. The tricky part is how to cancel your library function at the right time (i.e. not too early). You could, perhaps, mock the 1st call to `ensure_future(async_cancel_task())`, but imagine that the code under test gets changed to:

async def to_be_tested():
    await first()
    logging.debug("...")  # you don't expect event loop interaction here, but what if?
    await second()
    await third()

If it's all right for your test to fail on such a change, then fine :) If you consider that unexpected breakage, then I dunno what you can do :P On 1 July 2017 at 22:06, Chris Jerdonek wrote: > On Sat, Jul 1, 2017 at 3:49 AM, Dima Tisnek wrote: >> Hi Chris, >> >> This specific test is easy to write (mock first to return a resolved future, >> 2nd to block and 3rd to assert False) > > Saying it's easy doesn't necessarily help the questioner. :) > > Issues around combinatorics I understand. It's more the mechanics of > the basic testing pattern I'd like advice on. > > For example, if I mock the second function to be blocking, how do I > invoke the higher-level function in a way so I can continue at the > point where the second function blocks?
And without introducing > brittleness or relying on implementation details of the event loop? > > (By the way, it seems you wouldn't want to mock the third function in > cases like if the proper handling of task.cancel() depends on the > behavior of the third function, for example if CancelledError is being > caught.) > > --Chris > >> >> OTOH complexity of the general case is unbounded and generally exponential. >> It's akin to testing multithreaded code. >> (There's an academic publication from Microsoft where they built a runtime >> that would run each test really many times, where scheduler is rigged to >> order runnable tasks differently on each run. I hope someone rewrites this >> for asyncio) >> >> Certainty [better] tools are needed, and ultimately it's a tradeoff between >> sane/understable/maintainable tests and testing deeper/more corner cases. >> >> Just my 2c... >> >> On Jul 1, 2017 12:11, "Chris Jerdonek" wrote: >>> >>> I have a question about testing async code. >>> >>> Say I have a coroutine: >>> >>> async def do_things(): >>> await do_something() >>> await do_more() >>> await do_even_more() >>> >>> And future: >>> >>> task = ensure_future(do_things()) >>> >>> Is there a way to write a test case to check that task.cancel() would >>> behave correctly if, say, do_things() is waiting at the line >>> do_more()? >>> >>> In real life, this situation can happen if a function like the >>> following is called, and an exception happens in one of the given >>> tasks. One of the tasks in the "pending" list could be at the line >>> do_more(). >>> >>> done, pending = await asyncio.wait(tasks, >>> return_when=asyncio.FIRST_EXCEPTION) >>> >>> But in a testing situation, you don't necessarily have control over >>> where each task ends up when FIRST_EXCEPTION occurs. 
>>> >>> Thanks, >>> --Chris >>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ From chris.jerdonek at gmail.com Mon Jul 3 20:03:31 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Mon, 3 Jul 2017 17:03:31 -0700 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: On Mon, Jul 3, 2017 at 10:39 AM, Dima Tisnek wrote: > I'd say mock 2nd to `await time.sleep(1); assert False, "should not > happen"` with the earlier just in case test harness or code under test > is broken. > > The tricky part is how to cancel your library function at the right > time (i.e. not too early). So I wound up trying a combination of Dima and Nathaniel's suggestion of mocking / hooking the second function do_more() to cancel the "parent" task, and then just waiting for the cancellation to occur. You can see the result of my efforts here (it is to test the fix of a bug in the websockets library): https://github.com/aaugustin/websockets/pull/194 The test is definitely more complicated than I'd like. And it has a couple asyncio.sleep(0.1)'s that would be nice to get rid of to make the test faster and eliminate flakiness. Dima is right that one tricky thing is how not to call cancel() too early. (That is the reason for one of my sleep(0.1)'s.) I could see tools or patterns being useful here. --Chris > > You could, perhaps, mock 1st call to > `ensure_future(async_cancel_task())` but imagine that code under test > gets changed to: > > async to_be_tested(): > await first() > logging.debug("...") # you don't expect event loop interaction > here, but what if? 
> await second() > await third() > > If it's all right for your test to fail on such a change, then fine :) > If you consider that unexpected breakage, then I dunno what you can do :P > > On 1 July 2017 at 22:06, Chris Jerdonek wrote: >> On Sat, Jul 1, 2017 at 3:49 AM, Dima Tisnek wrote: >>> Hi Chris, >>> >>> This specific test is easy to write (mock first to return a resolved future, >>> 2nd to block and 3rd to assert False) >> >> Saying it's easy doesn't necessarily help the questioner. :) >> >> Issues around combinatorics I understand. It's more the mechanics of >> the basic testing pattern I'd like advice on. >> >> For example, if I mock the second function to be blocking, how do I >> invoke the higher-level function in a way so I can continue at the >> point where the second function blocks? And without introducing >> brittleness or relying on implementation details of the event loop? >> >> (By the way, it seems you wouldn't want to mock the third function in >> cases like if the proper handling of task.cancel() depends on the >> behavior of the third function, for example if CancelledError is being >> caught.) >> >> --Chris >> >>> >>> OTOH complexity of the general case is unbounded and generally exponential. >>> It's akin to testing multithreaded code. >>> (There's an academic publication from Microsoft where they built a runtime >>> that would run each test really many times, where scheduler is rigged to >>> order runnable tasks differently on each run. I hope someone rewrites this >>> for asyncio) >>> >>> Certainty [better] tools are needed, and ultimately it's a tradeoff between >>> sane/understable/maintainable tests and testing deeper/more corner cases. >>> >>> Just my 2c... >>> >>> On Jul 1, 2017 12:11, "Chris Jerdonek" wrote: >>>> >>>> I have a question about testing async code. 
>>>> >>>> Say I have a coroutine: >>>> >>>> async def do_things(): >>>> await do_something() >>>> await do_more() >>>> await do_even_more() >>>> >>>> And future: >>>> >>>> task = ensure_future(do_things()) >>>> >>>> Is there a way to write a test case to check that task.cancel() would >>>> behave correctly if, say, do_things() is waiting at the line >>>> do_more()? >>>> >>>> In real life, this situation can happen if a function like the >>>> following is called, and an exception happens in one of the given >>>> tasks. One of the tasks in the "pending" list could be at the line >>>> do_more(). >>>> >>>> done, pending = await asyncio.wait(tasks, >>>> return_when=asyncio.FIRST_EXCEPTION) >>>> >>>> But in a testing situation, you don't necessarily have control over >>>> where each task ends up when FIRST_EXCEPTION occurs. >>>> >>>> Thanks, >>>> --Chris >>>> _______________________________________________ >>>> Async-sig mailing list >>>> Async-sig at python.org >>>> https://mail.python.org/mailman/listinfo/async-sig >>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ From alex.gronholm at nextday.fi Tue Jul 4 02:49:35 2017 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 4 Jul 2017 09:49:35 +0300 Subject: [Async-sig] async documentation methods In-Reply-To: References: Message-ID: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> The real question is: why doesn't vanilla Sphinx have any kind of support for async functions which have been part of the language for quite a while? Nathaniel Smith kirjoitti 01.07.2017 klo 13:35: > If we're citing curio and sphinxcontrib-asyncio I guess I'll also > mention sphinxcontrib-trio [1], which was inspired by both of them > (and isn't in any way specific to trio). I don't know if the python > docs can use third-party sphinx extensions, though, and it is a bit > opinionated (in particular it calls async functions async functions > instead of coroutines). 
> > For the original text, I'd probably write something like:: > > You acquire a lock by calling ``await lock.acquire()``, and release > it with ``lock.release()``. > > -n > > [1] https://sphinxcontrib-trio.readthedocs.io/en/latest/ > > On Fri, Jun 30, 2017 at 8:31 AM, Brett Cannon wrote: >> Curio uses `.. asyncfunction:: acquire` and it renders as `await acquire()` >> at least in the function definition. >> >> On Fri, 30 Jun 2017 at 03:36 Andrew Svetlov >> wrote: >>> I like "two methods, `async acquire()` and `release()`" >>> >>> Regarding to extra markups -- I created sphinxcontrib-asyncio [1] library >>> for it. Hmm, README is pretty empty but we do use the library for >>> documenting aio-libs and aiohttp [2] itself >>> >>> We use ".. comethod:: connect(request)" for method and "cofunction" for >>> top level functions. >>> >>> Additional markup for methods that could be used as async context >>> managers: >>> >>> .. comethod:: delete(url, **kwargs) >>> :async-with: >>> :coroutine: >>> >>> and `:async-for:` for async iterators. >>> >>> >>> 1. https://github.com/aio-libs/sphinxcontrib-asyncio >>> 2. https://github.com/aio-libs/aiohttp >>> >>> On Fri, Jun 30, 2017 at 1:11 PM Dima Tisnek wrote: >>>> Hi all, >>>> >>>> I'm working to improve async docs, and I wonder if/how async methods >>>> ought to be marked in the documentation, for example >>>> library/async-sync.rst: >>>> >>>> """ ... It [lock] has two basic methods, `acquire()` and `release()`. ... >>>> """ >>>> >>>> In fact, these methods are not symmetric, the earlier is asynchronous >>>> and the latter synchronous: >>>> >>>> Definitions are `async def acquire()` and `def release()`. >>>> Likewise user is expected to call `await .acquire()` and `.release()`. >>>> >>>> This is user-facing documentation, IMO it should be clearer. >>>> Although there are examples for this specific case, I'm concerned with >>>> general documentation best practice. 
>>>> >>>> Should this example read, e.g.: >>>> * two methods, `async acquire()` and `release()` >>>> or perhaps >>>> * two methods, used `await x.acquire()` and `x.release()` >>>> or something else? >>>> >>>> If there's a good example already Python docs or in some 3rd party >>>> docs, please tell. >>>> >>>> Likewise, should there be marks on iterators? async generators? things >>>> that ought to be used as context managers? >>>> >>>> Cheers, >>>> d. >>>> _______________________________________________ >>>> Async-sig mailing list >>>> Async-sig at python.org >>>> https://mail.python.org/mailman/listinfo/async-sig >>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>> -- >>> Thanks, >>> Andrew Svetlov >>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > > From chris.jerdonek at gmail.com Tue Jul 4 03:02:53 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 4 Jul 2017 00:02:53 -0700 Subject: [Async-sig] async documentation methods In-Reply-To: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> Message-ID: On Mon, Jul 3, 2017 at 11:49 PM, Alex Gr?nholm wrote: > The real question is: why doesn't vanilla Sphinx have any kind of support > for async functions which have been part of the language for quite a while? It looks like this is the issue (which Brett filed in Nov. 
2015): https://github.com/sphinx-doc/sphinx/issues/2105 --Chris > > > > Nathaniel Smith kirjoitti 01.07.2017 klo 13:35: >> >> If we're citing curio and sphinxcontrib-asyncio I guess I'll also >> mention sphinxcontrib-trio [1], which was inspired by both of them >> (and isn't in any way specific to trio). I don't know if the python >> docs can use third-party sphinx extensions, though, and it is a bit >> opinionated (in particular it calls async functions async functions >> instead of coroutines). >> >> For the original text, I'd probably write something like:: >> >> You acquire a lock by calling ``await lock.acquire()``, and release >> it with ``lock.release()``. >> >> -n >> >> [1] https://sphinxcontrib-trio.readthedocs.io/en/latest/ >> >> On Fri, Jun 30, 2017 at 8:31 AM, Brett Cannon wrote: >>> >>> Curio uses `.. asyncfunction:: acquire` and it renders as `await >>> acquire()` >>> at least in the function definition. >>> >>> On Fri, 30 Jun 2017 at 03:36 Andrew Svetlov >>> wrote: >>>> >>>> I like "two methods, `async acquire()` and `release()`" >>>> >>>> Regarding to extra markups -- I created sphinxcontrib-asyncio [1] >>>> library >>>> for it. Hmm, README is pretty empty but we do use the library for >>>> documenting aio-libs and aiohttp [2] itself >>>> >>>> We use ".. comethod:: connect(request)" for method and "cofunction" for >>>> top level functions. >>>> >>>> Additional markup for methods that could be used as async context >>>> managers: >>>> >>>> .. comethod:: delete(url, **kwargs) >>>> :async-with: >>>> :coroutine: >>>> >>>> and `:async-for:` for async iterators. >>>> >>>> >>>> 1. https://github.com/aio-libs/sphinxcontrib-asyncio >>>> 2. https://github.com/aio-libs/aiohttp >>>> >>>> On Fri, Jun 30, 2017 at 1:11 PM Dima Tisnek wrote: >>>>> >>>>> Hi all, >>>>> >>>>> I'm working to improve async docs, and I wonder if/how async methods >>>>> ought to be marked in the documentation, for example >>>>> library/async-sync.rst: >>>>> >>>>> """ ... 
It [lock] has two basic methods, `acquire()` and `release()`. >>>>> ... >>>>> """ >>>>> >>>>> In fact, these methods are not symmetric, the earlier is asynchronous >>>>> and the latter synchronous: >>>>> >>>>> Definitions are `async def acquire()` and `def release()`. >>>>> Likewise user is expected to call `await .acquire()` and `.release()`. >>>>> >>>>> This is user-facing documentation, IMO it should be clearer. >>>>> Although there are examples for this specific case, I'm concerned with >>>>> general documentation best practice. >>>>> >>>>> Should this example read, e.g.: >>>>> * two methods, `async acquire()` and `release()` >>>>> or perhaps >>>>> * two methods, used `await x.acquire()` and `x.release()` >>>>> or something else? >>>>> >>>>> If there's a good example already Python docs or in some 3rd party >>>>> docs, please tell. >>>>> >>>>> Likewise, should there be marks on iterators? async generators? things >>>>> that ought to be used as context managers? >>>>> >>>>> Cheers, >>>>> d. 
>>>>> _______________________________________________ >>>>> Async-sig mailing list >>>>> Async-sig at python.org >>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>> >>>> -- >>>> Thanks, >>>> Andrew Svetlov >>>> _______________________________________________ >>>> Async-sig mailing list >>>> Async-sig at python.org >>>> https://mail.python.org/mailman/listinfo/async-sig >>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>> >>> >>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>> >> >> > > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ From alex.gronholm at nextday.fi Tue Jul 4 03:33:23 2017 From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=) Date: Tue, 4 Jul 2017 10:33:23 +0300 Subject: [Async-sig] async documentation methods In-Reply-To: References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> Message-ID: Yeah, but that doesn't answer my question :) Chris Jerdonek kirjoitti 04.07.2017 klo 10:02: > On Mon, Jul 3, 2017 at 11:49 PM, Alex Gr?nholm wrote: >> The real question is: why doesn't vanilla Sphinx have any kind of support >> for async functions which have been part of the language for quite a while? > It looks like this is the issue (which Brett filed in Nov. 2015): > https://github.com/sphinx-doc/sphinx/issues/2105 > > --Chris > >> >> >> Nathaniel Smith kirjoitti 01.07.2017 klo 13:35: >>> If we're citing curio and sphinxcontrib-asyncio I guess I'll also >>> mention sphinxcontrib-trio [1], which was inspired by both of them >>> (and isn't in any way specific to trio). 
>>> I don't know if the python
>>> docs can use third-party sphinx extensions, though, and it is a bit
>>> opinionated (in particular it calls async functions async functions
>>> instead of coroutines).
>>>
>>> For the original text, I'd probably write something like::
>>>
>>>     You acquire a lock by calling ``await lock.acquire()``, and release
>>>     it with ``lock.release()``.
>>>
>>> -n
>>>
>>> [1] https://sphinxcontrib-trio.readthedocs.io/en/latest/
>>>
>>> On Fri, Jun 30, 2017 at 8:31 AM, Brett Cannon wrote:
>>>> Curio uses `.. asyncfunction:: acquire` and it renders as `await acquire()`
>>>> at least in the function definition.
>>>>
>>>> On Fri, 30 Jun 2017 at 03:36 Andrew Svetlov wrote:
>>>>> I like "two methods, `async acquire()` and `release()`"
>>>>>
>>>>> Regarding to extra markups -- I created sphinxcontrib-asyncio [1] library
>>>>> for it. Hmm, README is pretty empty but we do use the library for
>>>>> documenting aio-libs and aiohttp [2] itself
>>>>>
>>>>> We use ".. comethod:: connect(request)" for method and "cofunction" for
>>>>> top level functions.
>>>>>
>>>>> Additional markup for methods that could be used as async context
>>>>> managers:
>>>>>
>>>>>     .. comethod:: delete(url, **kwargs)
>>>>>        :async-with:
>>>>>        :coroutine:
>>>>>
>>>>> and `:async-for:` for async iterators.
>>>>>
>>>>> 1. https://github.com/aio-libs/sphinxcontrib-asyncio
>>>>> 2. https://github.com/aio-libs/aiohttp
>>>>>
>>>>> On Fri, Jun 30, 2017 at 1:11 PM Dima Tisnek wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> I'm working to improve async docs, and I wonder if/how async methods
>>>>>> ought to be marked in the documentation, for example
>>>>>> library/async-sync.rst:
>>>>>>
>>>>>> """ ... It [lock] has two basic methods, `acquire()` and `release()`. ... """
>>>>>>
>>>>>> In fact, these methods are not symmetric, the earlier is asynchronous
>>>>>> and the latter synchronous:
>>>>>>
>>>>>> Definitions are `async def acquire()` and `def release()`.
>>>>>> Likewise user is expected to call `await .acquire()` and `.release()`.
>>>>>>
>>>>>> This is user-facing documentation, IMO it should be clearer.
>>>>>> Although there are examples for this specific case, I'm concerned with
>>>>>> general documentation best practice.
>>>>>>
>>>>>> Should this example read, e.g.:
>>>>>> * two methods, `async acquire()` and `release()`
>>>>>> or perhaps
>>>>>> * two methods, used `await x.acquire()` and `x.release()`
>>>>>> or something else?
>>>>>>
>>>>>> If there's a good example already Python docs or in some 3rd party
>>>>>> docs, please tell.
>>>>>>
>>>>>> Likewise, should there be marks on iterators? async generators? things
>>>>>> that ought to be used as context managers?
>>>>>>
>>>>>> Cheers,
>>>>>> d.

From alex.gronholm at nextday.fi Tue Jul 4 03:38:08 2017
From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=)
Date: Tue, 4 Jul 2017 10:38:08 +0300
Subject: [Async-sig] async testing question
In-Reply-To: References: Message-ID:
<77a7d995-72e2-c701-e8c7-c0638200026c@nextday.fi>

For asyncio, you can write your test functions as coroutines if you use
pytest-asyncio. You can even write test fixtures using coroutines. Mocking
coroutine functions can be done using asynctest, although I've found that
library a bit buggy.

Chris Jerdonek wrote on 02.07.2017 at 00:00:
> On Sat, Jul 1, 2017 at 1:42 PM, Nathaniel Smith wrote:
>> On Jul 1, 2017 3:11 AM, "Chris Jerdonek" wrote:
>> Is there a way to write a test case to check that task.cancel() would
>> behave correctly if, say, do_things() is waiting at the line
>> do_more()?
>>
>> One possibility for handling this case with a minimum of mocking would be
>> to hook do_more so that it calls task.cancel and then calls the regular
>> do_more.
>>
>> Beyond that it depends on what the actual functions are, I guess. If
>> do_more naturally blocks under some conditions then you might be able to
>> set up those conditions and then call cancel. Or you could try
>> experimenting with tests that call sleep(0) a fixed number of times before
>> issuing the cancel, and repeat with different iteration counts to find
>> different cancel points.
> Thanks, Nathaniel. The following would be overkill in my case, but
> your suggestion makes me wonder if it would make sense for there to be
> testing tools that have functions to do things like "run the event
> loop until <task> is at <line>". Do such things
> exist? This is a little bit related to what Dima was saying about
> tools.
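[Editor's note: Nathaniel's hook suggestion above can be sketched concretely. A minimal, self-contained illustration, where the `do_*` coroutines are stand-ins for the ones in the original question and `asyncio.sleep(10)` stands in for whatever `do_more()` really awaits:]

```python
import asyncio

async def do_something():
    pass

async def do_more():
    await asyncio.sleep(10)  # stands in for an await that parks the task

async def do_even_more():
    pass

async def do_things():
    await do_something()
    await do_more()
    await do_even_more()

# Hook: rebind do_more to a wrapper that cancels the task and then
# delegates to the real implementation, so CancelledError is delivered
# exactly at do_more's first suspension point.
real_do_more = do_more

async def hooked_do_more():
    task.cancel()
    await real_do_more()

do_more = hooked_do_more  # do_things() looks the name up at call time

loop = asyncio.new_event_loop()
task = loop.create_task(do_things())

cancelled = False
try:
    loop.run_until_complete(task)
except asyncio.CancelledError:
    cancelled = True
finally:
    loop.close()

print(cancelled)  # True: the task was cancelled while inside do_more()
```

[The sleep(0)-stepping variant Nathaniel mentions works the same way: instead of the hook, await `asyncio.sleep(0)` a fixed number of times from the test body before calling `task.cancel()`, varying the count to probe different cancel points.]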
>
> --Chris

From njs at pobox.com Tue Jul 4 03:55:32 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Tue, 4 Jul 2017 00:55:32 -0700
Subject: [Async-sig] async documentation methods
In-Reply-To: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi>
References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi>
Message-ID:

On Mon, Jul 3, 2017 at 11:49 PM, Alex Grönholm wrote:
> The real question is: why doesn't vanilla Sphinx have any kind of support
> for async functions which have been part of the language for quite a while?

Because no-one's sent them a PR, I assume. They're pretty swamped AFAICT.

One of the maintainers has at least expressed interest in integrating
something like sphinxcontrib-trio if someone does the work:
https://github.com/sphinx-doc/sphinx/issues/3743

-n

--
Nathaniel J. Smith -- https://vorpus.org

From alex.gronholm at nextday.fi Tue Jul 4 04:57:03 2017
From: alex.gronholm at nextday.fi (=?UTF-8?Q?Alex_Gr=c3=b6nholm?=)
Date: Tue, 4 Jul 2017 11:57:03 +0300
Subject: [Async-sig] async documentation methods
In-Reply-To: References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi>
Message-ID: <51c815aa-f885-8501-65fd-5427a3710a42@nextday.fi>

I'm somewhat reluctant to send them any PRs anymore, since I sent them a
couple of one-liner fixes (with tests) which took around 5 months to get
merged, in spite of me repeatedly reminding them on the Google group.

Nathaniel Smith wrote on 04.07.2017 at 10:55:
> On Mon, Jul 3, 2017 at 11:49 PM, Alex Grönholm wrote:
>> The real question is: why doesn't vanilla Sphinx have any kind of support
>> for async functions which have been part of the language for quite a while?
> Because no-one's sent them a PR, I assume. They're pretty swamped AFAICT.
>
> One of the maintainers has at least expressed interest in integrating
> something like sphinxcontrib-trio if someone does the work:
> https://github.com/sphinx-doc/sphinx/issues/3743
>
> -n

From dimaqq at gmail.com Tue Jul 4 06:03:58 2017
From: dimaqq at gmail.com (Dima Tisnek)
Date: Tue, 4 Jul 2017 12:03:58 +0200
Subject: [Async-sig] async documentation methods
In-Reply-To: References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi>
Message-ID:

Come to think of it, what sane tests need is a custom event loop or clever
mocks around asyncio.sleep, asyncio.Condition.wait, etc., so that code under
test never sleeps.

In simple cases an actual delay in the event loop would raise an exception.

A full solution would synchronise asyncio.sleep and friends with time.time,
time.monotonic and friends, so that if the loop were to delay, it would
advance global/virtual time instead. I think I saw such a library for
synchronous code, probably with limitations...

In any case you should not have to add delays in your mocks or fixtures to
hack a specific order of task execution by the event loop.

My 2c,
D.

On Jul 4, 2017 9:34 AM, "Alex Grönholm" wrote:
> Yeah, but that doesn't answer my question :)

From andrew.svetlov at gmail.com Tue Jul 4 10:40:54 2017
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Tue, 04 Jul 2017 14:40:54 +0000
Subject: [Async-sig] async documentation methods
In-Reply-To: References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi>
Message-ID:

Did you look on
https://github.com/python/cpython/blob/master/Lib/asyncio/test_utils.py#L265
?

On Tue, Jul 4, 2017 at 1:04 PM Dima Tisnek wrote:
> Come to think of it, what sane tests need is a custom event loop or clever
> mocks around asyncio.sleep, asyncio.Condition.wait, etc. So that code under
> test never sleeps.
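[Editor's note: Dima's "code under test never sleeps" idea can be sketched by monkeypatching `asyncio.sleep` to advance a virtual clock instead of waiting. A minimal illustration only — the `VirtualClock` and `slow_operation` names are invented for the example, and a full solution would also have to virtualise `loop.call_later`, `loop.time`, `time.monotonic` and friends, exactly as Dima notes:]

```python
import asyncio

class VirtualClock:
    """Intercept asyncio.sleep: advance a fake clock instead of waiting."""

    def __init__(self):
        self.time = 0.0
        self._real_sleep = asyncio.sleep  # keep the original for yielding

    async def sleep(self, delay, result=None):
        self.time += delay           # advance virtual time
        await self._real_sleep(0)    # yield to the loop, but don't wait
        return result

clock = VirtualClock()
asyncio.sleep = clock.sleep  # monkeypatch; a test fixture would undo this

async def slow_operation():
    await asyncio.sleep(3600)  # would take an hour in real time
    return "done"

loop = asyncio.new_event_loop()
result = loop.run_until_complete(slow_operation())
loop.close()
asyncio.sleep = clock._real_sleep  # restore the real sleep

print(result, clock.time)
```

[The test finishes instantly while the code under test believes an hour has passed; timeouts implemented via `loop.call_later` are not covered by this trick.]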
> > In simple cases actual delay in the event loop would raise an exception. > > A full solution would synchronise asyncio.sleep and friends with > time.time, time.monotonic and friends, so that a if the loop were to delay, > it would advance global/virtual time instead. I think I saw such library > for synchronous code, probably with limitations... > > > In any case you should not have to add delays in your mocks or fixtures to > hack specific order of task execution by the event loop. > > My 2c, > D. > > On Jul 4, 2017 9:34 AM, "Alex Gr?nholm" wrote: > >> Yeah, but that doesn't answer my question :) >> >> >> Chris Jerdonek kirjoitti 04.07.2017 klo 10:02: >> >>> On Mon, Jul 3, 2017 at 11:49 PM, Alex Gr?nholm >>> wrote: >>> >>>> The real question is: why doesn't vanilla Sphinx have any kind of >>>> support >>>> for async functions which have been part of the language for quite a >>>> while? >>>> >>> It looks like this is the issue (which Brett filed in Nov. 2015): >>> https://github.com/sphinx-doc/sphinx/issues/2105 >>> >>> --Chris >>> >>> >>>> >>>> Nathaniel Smith kirjoitti 01.07.2017 klo 13:35: >>>> >>>>> If we're citing curio and sphinxcontrib-asyncio I guess I'll also >>>>> mention sphinxcontrib-trio [1], which was inspired by both of them >>>>> (and isn't in any way specific to trio). I don't know if the python >>>>> docs can use third-party sphinx extensions, though, and it is a bit >>>>> opinionated (in particular it calls async functions async functions >>>>> instead of coroutines). >>>>> >>>>> For the original text, I'd probably write something like:: >>>>> >>>>> You acquire a lock by calling ``await lock.acquire()``, and >>>>> release >>>>> it with ``lock.release()``. >>>>> >>>>> -n >>>>> >>>>> [1] https://sphinxcontrib-trio.readthedocs.io/en/latest/ >>>>> >>>>> On Fri, Jun 30, 2017 at 8:31 AM, Brett Cannon >>>>> wrote: >>>>> >>>>>> Curio uses `.. 
asyncfunction:: acquire` and it renders as `await >>>>>> acquire()` >>>>>> at least in the function definition. >>>>>> >>>>>> On Fri, 30 Jun 2017 at 03:36 Andrew Svetlov >>>>> > >>>>>> wrote: >>>>>> >>>>>>> I like "two methods, `async acquire()` and `release()`" >>>>>>> >>>>>>> Regarding to extra markups -- I created sphinxcontrib-asyncio [1] >>>>>>> library >>>>>>> for it. Hmm, README is pretty empty but we do use the library for >>>>>>> documenting aio-libs and aiohttp [2] itself >>>>>>> >>>>>>> We use ".. comethod:: connect(request)" for method and "cofunction" >>>>>>> for >>>>>>> top level functions. >>>>>>> >>>>>>> Additional markup for methods that could be used as async context >>>>>>> managers: >>>>>>> >>>>>>> .. comethod:: delete(url, **kwargs) >>>>>>> :async-with: >>>>>>> :coroutine: >>>>>>> >>>>>>> and `:async-for:` for async iterators. >>>>>>> >>>>>>> >>>>>>> 1. https://github.com/aio-libs/sphinxcontrib-asyncio >>>>>>> 2. https://github.com/aio-libs/aiohttp >>>>>>> >>>>>>> On Fri, Jun 30, 2017 at 1:11 PM Dima Tisnek >>>>>>> wrote: >>>>>>> >>>>>>>> Hi all, >>>>>>>> >>>>>>>> I'm working to improve async docs, and I wonder if/how async methods >>>>>>>> ought to be marked in the documentation, for example >>>>>>>> library/async-sync.rst: >>>>>>>> >>>>>>>> """ ... It [lock] has two basic methods, `acquire()` and >>>>>>>> `release()`. >>>>>>>> ... >>>>>>>> """ >>>>>>>> >>>>>>>> In fact, these methods are not symmetric, the earlier is >>>>>>>> asynchronous >>>>>>>> and the latter synchronous: >>>>>>>> >>>>>>>> Definitions are `async def acquire()` and `def release()`. >>>>>>>> Likewise user is expected to call `await .acquire()` and >>>>>>>> `.release()`. >>>>>>>> >>>>>>>> This is user-facing documentation, IMO it should be clearer. >>>>>>>> Although there are examples for this specific case, I'm concerned >>>>>>>> with >>>>>>>> general documentation best practice. 
>>>>>>>> >>>>>>>> Should this example read, e.g.: >>>>>>>> * two methods, `async acquire()` and `release()` >>>>>>>> or perhaps >>>>>>>> * two methods, used `await x.acquire()` and `x.release()` >>>>>>>> or something else? >>>>>>>> >>>>>>>> If there's a good example already Python docs or in some 3rd party >>>>>>>> docs, please tell. >>>>>>>> >>>>>>>> Likewise, should there be marks on iterators? async generators? >>>>>>>> things >>>>>>>> that ought to be used as context managers? >>>>>>>> >>>>>>>> Cheers, >>>>>>>> d. >>>>>>>> _______________________________________________ >>>>>>>> Async-sig mailing list >>>>>>>> Async-sig at python.org >>>>>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>>>>>> >>>>>>> -- >>>>>>> Thanks, >>>>>>> Andrew Svetlov >>>>>>> _______________________________________________ >>>>>>> Async-sig mailing list >>>>>>> Async-sig at python.org >>>>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Async-sig mailing list >>>>>> Async-sig at python.org >>>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>>>> >>>>>> >>>>> _______________________________________________ >>>> Async-sig mailing list >>>> Async-sig at python.org >>>> https://mail.python.org/mailman/listinfo/async-sig >>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>> >>> >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: 
https://www.python.org/psf/codeofconduct/ > -- Thanks, Andrew Svetlov -------------- next part -------------- An HTML attachment was scrubbed... URL: From dimaqq at gmail.com Tue Jul 4 10:50:22 2017 From: dimaqq at gmail.com (Dima Tisnek) Date: Tue, 4 Jul 2017 16:50:22 +0200 Subject: [Async-sig] async documentation methods In-Reply-To: References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> Message-ID: That's good start, looks like it would satisfy asyncio-only code :) I haven't noticed that earlier. On 4 July 2017 at 16:40, Andrew Svetlov wrote: > Did you look on > https://github.com/python/cpython/blob/master/Lib/asyncio/test_utils.py#L265 > ? > > On Tue, Jul 4, 2017 at 1:04 PM Dima Tisnek wrote: >> >> Come to think of it, what sane tests need is a custom event loop or clever >> mocks around asyncio.sleep, asyncio.Condition.wait, etc. So that code under >> test never sleeps. >> >> In simple cases actual delay in the event loop would raise an exception. >> >> A full solution would synchronise asyncio.sleep and friends with >> time.time, time.monotonic and friends, so that a if the loop were to delay, >> it would advance global/virtual time instead. I think I saw such library for >> synchronous code, probably with limitations... >> >> >> In any case you should not have to add delays in your mocks or fixtures to >> hack specific order of task execution by the event loop. >> >> My 2c, >> D. >> >> On Jul 4, 2017 9:34 AM, "Alex Gr?nholm" wrote: >>> >>> Yeah, but that doesn't answer my question :) >>> >>> >>> Chris Jerdonek kirjoitti 04.07.2017 klo 10:02: >>>> >>>> On Mon, Jul 3, 2017 at 11:49 PM, Alex Gr?nholm >>>> wrote: >>>>> >>>>> The real question is: why doesn't vanilla Sphinx have any kind of >>>>> support >>>>> for async functions which have been part of the language for quite a >>>>> while? >>>> >>>> It looks like this is the issue (which Brett filed in Nov. 
2015): >>>> https://github.com/sphinx-doc/sphinx/issues/2105 >>>> >>>> --Chris >>>> >>>>> >>>>> >>>>> Nathaniel Smith kirjoitti 01.07.2017 klo 13:35: >>>>>> >>>>>> If we're citing curio and sphinxcontrib-asyncio I guess I'll also >>>>>> mention sphinxcontrib-trio [1], which was inspired by both of them >>>>>> (and isn't in any way specific to trio). I don't know if the python >>>>>> docs can use third-party sphinx extensions, though, and it is a bit >>>>>> opinionated (in particular it calls async functions async functions >>>>>> instead of coroutines). >>>>>> >>>>>> For the original text, I'd probably write something like:: >>>>>> >>>>>> You acquire a lock by calling ``await lock.acquire()``, and >>>>>> release >>>>>> it with ``lock.release()``. >>>>>> >>>>>> -n >>>>>> >>>>>> [1] https://sphinxcontrib-trio.readthedocs.io/en/latest/ >>>>>> >>>>>> On Fri, Jun 30, 2017 at 8:31 AM, Brett Cannon >>>>>> wrote: >>>>>>> >>>>>>> Curio uses `.. asyncfunction:: acquire` and it renders as `await >>>>>>> acquire()` >>>>>>> at least in the function definition. >>>>>>> >>>>>>> On Fri, 30 Jun 2017 at 03:36 Andrew Svetlov >>>>>>> >>>>>>> wrote: >>>>>>>> >>>>>>>> I like "two methods, `async acquire()` and `release()`" >>>>>>>> >>>>>>>> Regarding to extra markups -- I created sphinxcontrib-asyncio [1] >>>>>>>> library >>>>>>>> for it. Hmm, README is pretty empty but we do use the library for >>>>>>>> documenting aio-libs and aiohttp [2] itself >>>>>>>> >>>>>>>> We use ".. comethod:: connect(request)" for method and "cofunction" >>>>>>>> for >>>>>>>> top level functions. >>>>>>>> >>>>>>>> Additional markup for methods that could be used as async context >>>>>>>> managers: >>>>>>>> >>>>>>>> .. comethod:: delete(url, **kwargs) >>>>>>>> :async-with: >>>>>>>> :coroutine: >>>>>>>> >>>>>>>> and `:async-for:` for async iterators. >>>>>>>> >>>>>>>> >>>>>>>> 1. https://github.com/aio-libs/sphinxcontrib-asyncio >>>>>>>> 2. 
https://github.com/aio-libs/aiohttp >>>>>>>> >>>>>>>> On Fri, Jun 30, 2017 at 1:11 PM Dima Tisnek >>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Hi all, >>>>>>>>> >>>>>>>>> I'm working to improve async docs, and I wonder if/how async >>>>>>>>> methods >>>>>>>>> ought to be marked in the documentation, for example >>>>>>>>> library/async-sync.rst: >>>>>>>>> >>>>>>>>> """ ... It [lock] has two basic methods, `acquire()` and >>>>>>>>> `release()`. >>>>>>>>> ... >>>>>>>>> """ >>>>>>>>> >>>>>>>>> In fact, these methods are not symmetric, the earlier is >>>>>>>>> asynchronous >>>>>>>>> and the latter synchronous: >>>>>>>>> >>>>>>>>> Definitions are `async def acquire()` and `def release()`. >>>>>>>>> Likewise user is expected to call `await .acquire()` and >>>>>>>>> `.release()`. >>>>>>>>> >>>>>>>>> This is user-facing documentation, IMO it should be clearer. >>>>>>>>> Although there are examples for this specific case, I'm concerned >>>>>>>>> with >>>>>>>>> general documentation best practice. >>>>>>>>> >>>>>>>>> Should this example read, e.g.: >>>>>>>>> * two methods, `async acquire()` and `release()` >>>>>>>>> or perhaps >>>>>>>>> * two methods, used `await x.acquire()` and `x.release()` >>>>>>>>> or something else? >>>>>>>>> >>>>>>>>> If there's a good example already Python docs or in some 3rd party >>>>>>>>> docs, please tell. >>>>>>>>> >>>>>>>>> Likewise, should there be marks on iterators? async generators? >>>>>>>>> things >>>>>>>>> that ought to be used as context managers? >>>>>>>>> >>>>>>>>> Cheers, >>>>>>>>> d. 
>>>>>>>>> _______________________________________________ >>>>>>>>> Async-sig mailing list >>>>>>>>> Async-sig at python.org >>>>>>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>>>>>> >>>>>>>> -- >>>>>>>> Thanks, >>>>>>>> Andrew Svetlov >>>>>>>> _______________________________________________ >>>>>>>> Async-sig mailing list >>>>>>>> Async-sig at python.org >>>>>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Async-sig mailing list >>>>>>> Async-sig at python.org >>>>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>>>>> >>>>>> >>>>> _______________________________________________ >>>>> Async-sig mailing list >>>>> Async-sig at python.org >>>>> https://mail.python.org/mailman/listinfo/async-sig >>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>> >>> >>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > > -- > Thanks, > Andrew Svetlov From andrew.svetlov at gmail.com Tue Jul 4 10:55:55 2017 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 04 Jul 2017 14:55:55 +0000 Subject: [Async-sig] async documentation methods In-Reply-To: References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> Message-ID: I'm an author of this code but I can confirm -- usage experience is terrible. 
Mostly because the loop don't advance virtual time only but tries to control every loop sleep. On Tue, Jul 4, 2017 at 5:50 PM Dima Tisnek wrote: > That's good start, looks like it would satisfy asyncio-only code :) > > I haven't noticed that earlier. > > On 4 July 2017 at 16:40, Andrew Svetlov wrote: > > Did you look on > > > https://github.com/python/cpython/blob/master/Lib/asyncio/test_utils.py#L265 > > ? > > > > On Tue, Jul 4, 2017 at 1:04 PM Dima Tisnek wrote: > >> > >> Come to think of it, what sane tests need is a custom event loop or > clever > >> mocks around asyncio.sleep, asyncio.Condition.wait, etc. So that code > under > >> test never sleeps. > >> > >> In simple cases actual delay in the event loop would raise an exception. > >> > >> A full solution would synchronise asyncio.sleep and friends with > >> time.time, time.monotonic and friends, so that a if the loop were to > delay, > >> it would advance global/virtual time instead. I think I saw such > library for > >> synchronous code, probably with limitations... > >> > >> > >> In any case you should not have to add delays in your mocks or fixtures > to > >> hack specific order of task execution by the event loop. > >> > >> My 2c, > >> D. > >> > >> On Jul 4, 2017 9:34 AM, "Alex Gr?nholm" > wrote: > >>> > >>> Yeah, but that doesn't answer my question :) > >>> > >>> > >>> Chris Jerdonek kirjoitti 04.07.2017 klo 10:02: > >>>> > >>>> On Mon, Jul 3, 2017 at 11:49 PM, Alex Gr?nholm > >>>> wrote: > >>>>> > >>>>> The real question is: why doesn't vanilla Sphinx have any kind of > >>>>> support > >>>>> for async functions which have been part of the language for quite a > >>>>> while? > >>>> > >>>> It looks like this is the issue (which Brett filed in Nov. 
2015): > >>>> https://github.com/sphinx-doc/sphinx/issues/2105 > >>>> > >>>> --Chris > >>>> > >>>>> > >>>>> > >>>>> Nathaniel Smith kirjoitti 01.07.2017 klo 13:35: > >>>>>> > >>>>>> If we're citing curio and sphinxcontrib-asyncio I guess I'll also > >>>>>> mention sphinxcontrib-trio [1], which was inspired by both of them > >>>>>> (and isn't in any way specific to trio). I don't know if the python > >>>>>> docs can use third-party sphinx extensions, though, and it is a bit > >>>>>> opinionated (in particular it calls async functions async functions > >>>>>> instead of coroutines). > >>>>>> > >>>>>> For the original text, I'd probably write something like:: > >>>>>> > >>>>>> You acquire a lock by calling ``await lock.acquire()``, and > >>>>>> release > >>>>>> it with ``lock.release()``. > >>>>>> > >>>>>> -n > >>>>>> > >>>>>> [1] https://sphinxcontrib-trio.readthedocs.io/en/latest/ > >>>>>> > >>>>>> On Fri, Jun 30, 2017 at 8:31 AM, Brett Cannon > >>>>>> wrote: > >>>>>>> > >>>>>>> Curio uses `.. asyncfunction:: acquire` and it renders as `await > >>>>>>> acquire()` > >>>>>>> at least in the function definition. > >>>>>>> > >>>>>>> On Fri, 30 Jun 2017 at 03:36 Andrew Svetlov > >>>>>>> > >>>>>>> wrote: > >>>>>>>> > >>>>>>>> I like "two methods, `async acquire()` and `release()`" > >>>>>>>> > >>>>>>>> Regarding to extra markups -- I created sphinxcontrib-asyncio [1] > >>>>>>>> library > >>>>>>>> for it. Hmm, README is pretty empty but we do use the library for > >>>>>>>> documenting aio-libs and aiohttp [2] itself > >>>>>>>> > >>>>>>>> We use ".. comethod:: connect(request)" for method and > "cofunction" > >>>>>>>> for > >>>>>>>> top level functions. > >>>>>>>> > >>>>>>>> Additional markup for methods that could be used as async context > >>>>>>>> managers: > >>>>>>>> > >>>>>>>> .. comethod:: delete(url, **kwargs) > >>>>>>>> :async-with: > >>>>>>>> :coroutine: > >>>>>>>> > >>>>>>>> and `:async-for:` for async iterators. > >>>>>>>> > >>>>>>>> > >>>>>>>> 1. 
https://github.com/aio-libs/sphinxcontrib-asyncio > >>>>>>>> 2. https://github.com/aio-libs/aiohttp > >>>>>>>> > >>>>>>>> On Fri, Jun 30, 2017 at 1:11 PM Dima Tisnek > >>>>>>>> wrote: > >>>>>>>>> > >>>>>>>>> Hi all, > >>>>>>>>> > >>>>>>>>> I'm working to improve async docs, and I wonder if/how async > >>>>>>>>> methods > >>>>>>>>> ought to be marked in the documentation, for example > >>>>>>>>> library/async-sync.rst: > >>>>>>>>> > >>>>>>>>> """ ... It [lock] has two basic methods, `acquire()` and > >>>>>>>>> `release()`. > >>>>>>>>> ... > >>>>>>>>> """ > >>>>>>>>> > >>>>>>>>> In fact, these methods are not symmetric, the earlier is > >>>>>>>>> asynchronous > >>>>>>>>> and the latter synchronous: > >>>>>>>>> > >>>>>>>>> Definitions are `async def acquire()` and `def release()`. > >>>>>>>>> Likewise user is expected to call `await .acquire()` and > >>>>>>>>> `.release()`. > >>>>>>>>> > >>>>>>>>> This is user-facing documentation, IMO it should be clearer. > >>>>>>>>> Although there are examples for this specific case, I'm concerned > >>>>>>>>> with > >>>>>>>>> general documentation best practice. > >>>>>>>>> > >>>>>>>>> Should this example read, e.g.: > >>>>>>>>> * two methods, `async acquire()` and `release()` > >>>>>>>>> or perhaps > >>>>>>>>> * two methods, used `await x.acquire()` and `x.release()` > >>>>>>>>> or something else? > >>>>>>>>> > >>>>>>>>> If there's a good example already Python docs or in some 3rd > party > >>>>>>>>> docs, please tell. > >>>>>>>>> > >>>>>>>>> Likewise, should there be marks on iterators? async generators? > >>>>>>>>> things > >>>>>>>>> that ought to be used as context managers? > >>>>>>>>> > >>>>>>>>> Cheers, > >>>>>>>>> d. 
> >>>>>>>>> _______________________________________________ > >>>>>>>>> Async-sig mailing list > >>>>>>>>> Async-sig at python.org > >>>>>>>>> https://mail.python.org/mailman/listinfo/async-sig > >>>>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ > >>>>>>>> > >>>>>>>> -- > >>>>>>>> Thanks, > >>>>>>>> Andrew Svetlov > >>>>>>>> _______________________________________________ > >>>>>>>> Async-sig mailing list > >>>>>>>> Async-sig at python.org > >>>>>>>> https://mail.python.org/mailman/listinfo/async-sig > >>>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ > >>>>>>> > >>>>>>> > >>>>>>> _______________________________________________ > >>>>>>> Async-sig mailing list > >>>>>>> Async-sig at python.org > >>>>>>> https://mail.python.org/mailman/listinfo/async-sig > >>>>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ > >>>>>>> > >>>>>> > >>>>> _______________________________________________ > >>>>> Async-sig mailing list > >>>>> Async-sig at python.org > >>>>> https://mail.python.org/mailman/listinfo/async-sig > >>>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ > >>> > >>> > >>> _______________________________________________ > >>> Async-sig mailing list > >>> Async-sig at python.org > >>> https://mail.python.org/mailman/listinfo/async-sig > >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ > >> > >> _______________________________________________ > >> Async-sig mailing list > >> Async-sig at python.org > >> https://mail.python.org/mailman/listinfo/async-sig > >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > > > > -- > > Thanks, > > Andrew Svetlov > -- Thanks, Andrew Svetlov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From brett at python.org Tue Jul 4 12:47:44 2017
From: brett at python.org (Brett Cannon)
Date: Tue, 04 Jul 2017 16:47:44 +0000
Subject: [Async-sig] async documentation methods
In-Reply-To: <51c815aa-f885-8501-65fd-5427a3710a42@nextday.fi>
References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> <51c815aa-f885-8501-65fd-5427a3710a42@nextday.fi>
Message-ID: 

If no one is willing to take the time to send them a PR then the situation is simply not going to change until the project maintainers have both the time and inclination to add async/await markup support to Sphinx, and if they personally aren't doing any async coding then that won't change anytime soon.

Probably the best way forward is to reach consensus on the appropriate issue on how the solution should look in a PR they would accept, and then create the PR. But as a project maintainer myself I'm not about to hold it against them for having not taken care of this.

On Tue, 4 Jul 2017 at 01:57 Alex Grönholm wrote:
> I'm somewhat reluctant to send them any PRs anymore since I sent them a
> couple of one-liner fixes (with tests) which took around 5 months to get
> merged in spite of me repeatedly reminding them on the Google group.
>
> Nathaniel Smith kirjoitti 04.07.2017 klo 10:55:
> > On Mon, Jul 3, 2017 at 11:49 PM, Alex Grönholm wrote:
> >> The real question is: why doesn't vanilla Sphinx have any kind of support
> >> for async functions which have been part of the language for quite a while?
> > Because no-one's sent them a PR, I assume. They're pretty swamped AFAICT.
> > One of the maintainers has at least expressed interest in integrating
> > something like sphinxcontrib-trio if someone does the work:
> > https://github.com/sphinx-doc/sphinx/issues/3743
> >
> > -n
>
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From martius at martiusweb.net Tue Jul 4 04:44:10 2017
From: martius at martiusweb.net (Martin Richard)
Date: Tue, 4 Jul 2017 10:44:10 +0200
Subject: [Async-sig] async testing question
In-Reply-To: <77a7d995-72e2-c701-e8c7-c0638200026c@nextday.fi>
References: <77a7d995-72e2-c701-e8c7-c0638200026c@nextday.fi>
Message-ID: 

Hi,

asynctest provides an asynctest.TestCase class, inheriting unittest.TestCase. It supports coroutines as test cases and adds a few other useful features, like checking that there are no scheduled callbacks left (see http://asynctest.readthedocs.io/en/latest/asynctest.case.html#asynctest.case.asynctest.fail_on) or ClockedTestCase, which allows you to control time in the test.

Sorry for the bugs left in asynctest. I try to add as many tests as possible to cover all cases, but I'm having a hard time keeping the library compatible with unittest. A few remarks though:

* keeping the behavior of asynctest in sync with unittest sometimes leads to unexpected behaviors making asynctest look more buggy than it is (at least I hope so...),
* some libraries are hard to mock correctly because they use advanced features; for instance, aiohttp uses its own coroutine type, and I still don't know what can be done with those cases,
* I'm also thinking about removing support for Python 3.4 (@coroutine decorator, etc.) as it's a lot of work.

Unfortunately, I only have a few hours here and there to work on my free time, as I'm not sponsored by my employer anymore.
And, like many other open-source libraries out there, I don't receive a lot of feedback or help :)

Thanks for using (or at least trying to use) asynctest!

Martin

2017-07-04 9:38 GMT+02:00 Alex Grönholm:
> For asyncio, you can write your test functions as coroutines if you use
> pytest-asyncio. You can even write test fixtures using coroutines. Mocking
> coroutine functions can be done using asynctest, although I've found that
> library a bit buggy.
>
> Chris Jerdonek kirjoitti 02.07.2017 klo 00:00:
>> On Sat, Jul 1, 2017 at 1:42 PM, Nathaniel Smith wrote:
>>> On Jul 1, 2017 3:11 AM, "Chris Jerdonek" wrote:
>>> Is there a way to write a test case to check that task.cancel() would
>>> behave correctly if, say, do_things() is waiting at the line
>>> do_more()?
>>>
>>> One possibility for handling this case with a minimum of mocking would
>>> be to hook do_more so that it calls task.cancel and then calls the
>>> regular do_more.
>>>
>>> Beyond that it depends on what the actual functions are, I guess. If
>>> do_more naturally blocks under some conditions then you might be able
>>> to set up those conditions and then call cancel. Or you could try
>>> experimenting with tests that call sleep(0) a fixed number of times
>>> before issuing the cancel, and repeat with different iteration counts
>>> to find different cancel points.
>>>
>> Thanks, Nathaniel. The following would be overkill in my case, but
>> your suggestion makes me wonder if it would make sense for there to be
>> testing tools that have functions to do things like "run the event
>> loop until is at ." Do such things
>> exist? This is a little bit related to what Dima was saying about
>> tools.
>> >> --Chris >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -- Martin Richard www.martiusweb.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Tue Jul 4 15:03:44 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 4 Jul 2017 12:03:44 -0700 Subject: [Async-sig] async testing question In-Reply-To: References: Message-ID: [Pulling in comments that were added to a different thread] On Tue, Jul 4, 2017 at 3:03 AM, Dima Tisnek wrote: > Come to think of it, what sane tests need is a custom event loop or clever > mocks around asyncio.sleep, asyncio.Condition.wait, etc. So that code under > test never sleeps. > ... > In any case you should not have to add delays in your mocks or fixtures to > hack specific order of task execution by the event loop. Regarding guaranteeing a certain execution order, and going back to an earlier question of mine, is there a way to introspect a task to find out the name of the function it is currently waiting on? It seems like such a function could go a long way towards guaranteeing a required ordering, and without having to introduce sleeps, etc. Inside a mock, you would be able to wait exactly until needed conditions are satisfied. I was experimenting with task.get_stack() [1], and it seems you can get the line number of where a task is waiting. But using the line number would be more brittle than using the function name. 
--Chris [1] https://docs.python.org/3/library/asyncio-task.html#asyncio.Task.get_stack From pfreixes at gmail.com Wed Jul 5 17:07:23 2017 From: pfreixes at gmail.com (Pau Freixes) Date: Wed, 5 Jul 2017 23:07:23 +0200 Subject: [Async-sig] Read remaining and ready buffer once it has been closed by your peer Message-ID: Hi guys, The current implementation of StreamReader does not allow to read the remaining buffer if has had an exception and there was no waiters before to read the data. IMHO this is not the best way to handle this situation, the developer should be able to access to the remaining and ready buffer in somehow. For example, this data might bring some important stuff related to the reason of the exception. Ive just opened a bug [1] and made a PR with a simple proposal [2] with a very basic rationale: Meanwhile, there is data in the buffer still to be processed, the StreamReader shouldn't raise the final exception. Thoughts ? [1] http://bugs.python.org/issue30861 [2] https://github.com/python/cpython/pull/2593 -- --pau From vxgmichel at gmail.com Thu Jul 6 08:39:13 2017 From: vxgmichel at gmail.com (Vincent Michel) Date: Thu, 6 Jul 2017 14:39:13 +0200 Subject: [Async-sig] Go-style generators in asyncio Message-ID: Hi all, I've recently been looking into the go concurrency model to see how it compares to asyncio and an interesting concept caught my attention: go generators. It's quite similar to asynchronous generator, with a bit of extra concurrency. You can find a small write-up comparing the two concepts [1] and a possible implementation of go-style generators using asyncio [2]. [1]: https://gist.github.com/vxgmichel/4ea46d3ae5c270260471d304a2c8e97b [2]: https://gist.github.com/vxgmichel/8fc63c02389dc6807206dec7ede9eb99 My conclusion is that go-style generators are quite useful in the context of a pipeline of generators producing and processing values asynchronously. 
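For comparison, a pipeline of plain async generators (stdlib only, without the go-style concurrency — a minimal sketch, not the implementation linked above):

```python
import asyncio

async def produce(n):
    # Source stage: emit values, yielding control between items.
    for i in range(n):
        await asyncio.sleep(0)
        yield i

async def square(source):
    # Processing stage: consume the previous stage, feed the next one.
    async for value in source:
        yield value * value

async def main():
    return [v async for v in square(produce(5))]

result = asyncio.run(main())
print(result)  # [0, 1, 4, 9, 16]
```

In this plain version each stage only advances while its consumer is waiting on it; the go-style variant being described lets stages run ahead concurrently.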
This research was motivated by my work on aiostream [3], an asynchronous version of itertools, which does not benefit from this kind of optimization (yet).

[3]: https://github.com/vxgmichel/aiostream

Hope you'll find this interesting,

Cheers,
/Vincent
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From andrew.svetlov at gmail.com Fri Jul 7 01:13:05 2017
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Fri, 07 Jul 2017 05:13:05 +0000
Subject: [Async-sig] async documentation methods
In-Reply-To: 
References: <1306bfbc-d211-8168-c1b2-d9cff24d1ff2@nextday.fi> <51c815aa-f885-8501-65fd-5427a3710a42@nextday.fi>
Message-ID: 

I think the code itself is not a big deal (I could commit to the task if needed) but we need an agreement about markup and output formatting.

On Tue, Jul 4, 2017 at 7:48 PM Brett Cannon wrote:
> If no one is willing to take the time to send them a PR then the situation
> is simply not going to change until the project maintainers have both the
> time and inclination to add async/await markup support to Sphinx, and if
> they personally aren't doing any async coding then that won't change
> anytime soon.
>
> Probably the best way forward is to reach consensus on the appropriate
> issue on how the solution should look in a PR they would accept, and then
> create the PR. But as project maintainer myself I'm not about to hold it
> against them for having not taken care of this.
>
> On Tue, 4 Jul 2017 at 01:57 Alex Grönholm wrote:
> >> I'm somewhat reluctant to send them any PRs anymore since I sent them a
> >> couple of one liner fixes (with tests) which took around 5 months to get
> >> merged in spite of me repeatedly reminding them on the Google group.
>> Nathaniel Smith kirjoitti 04.07.2017 klo 10:55:
>> > On Mon, Jul 3, 2017 at 11:49 PM, Alex Grönholm <alex.gronholm at nextday.fi> wrote:
>> >> The real question is: why doesn't vanilla Sphinx have any kind of support
>> >> for async functions which have been part of the language for quite a while?
>> > Because no-one's sent them a PR, I assume. They're pretty swamped AFAICT.
>> >
>> > One of the maintainers has at least expressed interest in integrating
>> > something like sphinxcontrib-trio if someone does the work:
>> > https://github.com/sphinx-doc/sphinx/issues/3743
>> >
>> > -n
>>
>> _______________________________________________
>> Async-sig mailing list
>> Async-sig at python.org
>> https://mail.python.org/mailman/listinfo/async-sig
>> Code of Conduct: https://www.python.org/psf/codeofconduct/
>
> --
> Thanks,
> Andrew Svetlov
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From caulagi at gmail.com Fri Jul 7 11:22:32 2017
From: caulagi at gmail.com (Pradip Caulagi)
Date: Fri, 7 Jul 2017 17:22:32 +0200
Subject: [Async-sig] no current event loop in thread 'MainThread'
Message-ID: 

I am trying to write a test for a function that uses async. Can I mix using unittest.TestCase and asyncio.test_utils.TestCase? When I do that, the loop in test_utils.TestCase seems to affect the other code I have.
This is a simplified example -

$ cat foo.py

import asyncio
import unittest
from asyncio import test_utils


async def foo():
    return True


class Foo1Test(test_utils.TestCase):

    def setUp(self):
        super().setUp()
        self.loop = self.new_test_loop()
        self.set_event_loop(self.loop)

    def test_foo(self):
        res = self.loop.run_until_complete(foo())
        assert res is True


class Foo2Test(unittest.TestCase):

    def test_foo(self):
        loop = asyncio.get_event_loop()
        res = loop.run_until_complete(foo())
        assert res is True

$ python3
Python 3.6.1 (default, Apr 4 2017, 09:40:21)
[GCC 4.2.1 Compatible Apple LLVM 8.1.0 (clang-802.0.38)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>

$ python3 -m unittest foo.py
.E
======================================================================
ERROR: test_foo (foo.Foo2Test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/private/tmp/foo.py", line 26, in test_foo
    loop = asyncio.get_event_loop()
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/events.py", line 678, in get_event_loop
    return get_event_loop_policy().get_event_loop()
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/asyncio/events.py", line 584, in get_event_loop
    % threading.current_thread().name)
RuntimeError: There is no current event loop in thread 'MainThread'.

----------------------------------------------------------------------
Ran 2 tests in 0.001s

What am I missing?

--
Pradip Caulagi
http://caulagi.com

From yselivanov at gmail.com Fri Jul 7 11:50:59 2017
From: yselivanov at gmail.com (Yury Selivanov)
Date: Fri, 7 Jul 2017 11:50:59 -0400
Subject: [Async-sig] no current event loop in thread 'MainThread'
In-Reply-To: 
References: 
Message-ID: 

Hi,

`asyncio.test_utils` is a set of internal undocumented asyncio test utilities. We'll likely move them to 'Lib/test' in Python 3.7.
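The error above appears to come from Foo1Test's cleanup unsetting the default loop, so the later get_event_loop() call finds none. A stdlib-only workaround (a sketch that avoids the internal asyncio.test_utils entirely) is for each test to create and install its own loop:

```python
import asyncio
import unittest

async def foo():
    return True

class FooTest(unittest.TestCase):
    def setUp(self):
        # Create and install a fresh loop instead of relying on the
        # implicit default loop, which another test may have unset.
        self.loop = asyncio.new_event_loop()
        asyncio.set_event_loop(self.loop)

    def tearDown(self):
        self.loop.close()

    def test_foo(self):
        self.assertTrue(self.loop.run_until_complete(foo()))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FooTest)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # True
```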
Try to use third-party testing packages like asynctest instead. Thanks, Yury On Jul 7, 2017, 11:22 AM -0400, wrote: > > asyncio.test_utils -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Sun Jul 9 23:48:41 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 9 Jul 2017 20:48:41 -0700 Subject: [Async-sig] using asyncio in synchronous applications Message-ID: I have a two-part question. If my application is single-threaded and synchronous (e.g. a web app using Gunicorn with sync workers [1]), and occasionally I need to call functions in a library that requires an event loop, is there any downside to creating and closing the loop on-the-fly only when I call the function? In other words, is creating and destroying loops cheap? Second, if I were to switch to a multi-threaded model (e.g. Gunicorn with async workers), is my only option to start the loop at the beginning of the process, and use loop.call_soon_threadsafe()? Or can I do what I was asking about above and create and close loops on-the-fly in different threads? Is either approach much more efficient than the other? Thanks, --Chris [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers From guido at python.org Mon Jul 10 00:00:13 2017 From: guido at python.org (Guido van Rossum) Date: Sun, 9 Jul 2017 21:00:13 -0700 Subject: [Async-sig] using asyncio in synchronous applications In-Reply-To: References: Message-ID: Creating and destroying event loops should be pretty cheap. I suspect the biggest cost is creation of the self-pipe. (But if you really want to know, time it first.) Multiple threads can each have their own independent event loop (accessible with get_event_loop() once created), so as long as they don't need to communicate that should be simple too. But the big question is, what is that library doing for you? In the abstract it is hard to give you a good answer. What library is it? What calls are you making? 
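The "time it first" step takes only a few lines (a rough micro-benchmark sketch; absolute numbers vary by platform and Python version):

```python
import asyncio
import time

N = 200
start = time.perf_counter()
for _ in range(N):
    loop = asyncio.new_event_loop()  # allocates the selector and self-pipe
    loop.close()                     # releases them again
elapsed = time.perf_counter() - start
print("%.1f microseconds per create/close cycle" % (elapsed / N * 1e6))
```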
On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek wrote: > I have a two-part question. > > If my application is single-threaded and synchronous (e.g. a web app > using Gunicorn with sync workers [1]), and occasionally I need to call > functions in a library that requires an event loop, is there any > downside to creating and closing the loop on-the-fly only when I call > the function? In other words, is creating and destroying loops cheap? > > Second, if I were to switch to a multi-threaded model (e.g. Gunicorn > with async workers), is my only option to start the loop at the > beginning of the process, and use loop.call_soon_threadsafe()? Or can > I do what I was asking about above and create and close loops > on-the-fly in different threads? Is either approach much more > efficient than the other? > > Thanks, > --Chris > > [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Mon Jul 10 00:07:35 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Sun, 9 Jul 2017 21:07:35 -0700 Subject: [Async-sig] using asyncio in synchronous applications In-Reply-To: References: Message-ID: On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum wrote: > But the big question is, what is that library doing for you? In the abstract > it is hard to give you a good answer. What library is it? What calls are you > making? It's the websockets library: https://github.com/aaugustin/websockets All I really need to do is occasionally connect briefly to a websocket server as a client from a synchronous app. 
Since I'm already using the library on the server-side, I thought I'd save myself the trouble of having to use two libraries and just use the same library on the client side as well.

--Chris

> On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek wrote:
>> I have a two-part question.
>>
>> If my application is single-threaded and synchronous (e.g. a web app
>> using Gunicorn with sync workers [1]), and occasionally I need to call
>> functions in a library that requires an event loop, is there any
>> downside to creating and closing the loop on-the-fly only when I call
>> the function? In other words, is creating and destroying loops cheap?
>>
>> Second, if I were to switch to a multi-threaded model (e.g. Gunicorn
>> with async workers), is my only option to start the loop at the
>> beginning of the process, and use loop.call_soon_threadsafe()? Or can
>> I do what I was asking about above and create and close loops
>> on-the-fly in different threads? Is either approach much more
>> efficient than the other?
>>
>> Thanks,
>> --Chris
>>
>> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers
>> _______________________________________________
>> Async-sig mailing list
>> Async-sig at python.org
>> https://mail.python.org/mailman/listinfo/async-sig
>> Code of Conduct: https://www.python.org/psf/codeofconduct/
>
> --
> --Guido van Rossum (python.org/~guido)

From guido at python.org Mon Jul 10 10:46:06 2017
From: guido at python.org (Guido van Rossum)
Date: Mon, 10 Jul 2017 07:46:06 -0700
Subject: [Async-sig] using asyncio in synchronous applications
In-Reply-To: 
References: 
Message-ID: 

OK, then as long as you close the connection and the loop properly it shouldn't be a problem, even multi-threaded. (You basically lose all advantage of async, but it seems you're fine with that.)

On Sun, Jul 9, 2017 at 9:07 PM, Chris Jerdonek wrote:
> On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum wrote:
> > But the big question is, what is that library doing for you?
In the > abstract > > it is hard to give you a good answer. What library is it? What calls are > you > > making? > > It's the websockets library: https://github.com/aaugustin/websockets > > All I really need to do is occasionally connect briefly to a websocket > server as a client from a synchronous app. > > Since I'm already using the library on the server-side, I thought I'd > save myself the trouble of having to use two libraries and just use > the same library on the client side as well. > > --Chris > > > > > > > > On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek > > > wrote: > >> > >> I have a two-part question. > >> > >> If my application is single-threaded and synchronous (e.g. a web app > >> using Gunicorn with sync workers [1]), and occasionally I need to call > >> functions in a library that requires an event loop, is there any > >> downside to creating and closing the loop on-the-fly only when I call > >> the function? In other words, is creating and destroying loops cheap? > >> > >> Second, if I were to switch to a multi-threaded model (e.g. Gunicorn > >> with async workers), is my only option to start the loop at the > >> beginning of the process, and use loop.call_soon_threadsafe()? Or can > >> I do what I was asking about above and create and close loops > >> on-the-fly in different threads? Is either approach much more > >> efficient than the other? > >> > >> Thanks, > >> --Chris > >> > >> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers > >> _______________________________________________ > >> Async-sig mailing list > >> Async-sig at python.org > >> https://mail.python.org/mailman/listinfo/async-sig > >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > > > > > > > > > > -- > > --Guido van Rossum (python.org/~guido) > -- --Guido van Rossum (python.org/~guido) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From chris.jerdonek at gmail.com Tue Jul 11 11:56:09 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Tue, 11 Jul 2017 08:56:09 -0700
Subject: [Async-sig] using asyncio in synchronous applications
In-Reply-To: 
References: 
Message-ID: 

There's something I realized about "creating and destroying" ephemeral event loops if you want to create temporary event loops over time in a synchronous application.

This wasn't clear to me at the beginning, but it's actually more natural to do the reverse and "destroy and create," and **at the end**:

    @contextmanager
    def run_in_loop():
        try:
            yield
        finally:
            loop = asyncio.get_event_loop()
            loop.close()
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)

The reason is that at the beginning of an application, the event loop starts out not closed. So if you start out by creating a new loop at the beginning, you'll get a warning like the following:

    /usr/local/lib/python3.6/asyncio/base_events.py:509:
    ResourceWarning: unclosed event loop <_UnixSelectorEventLoop
    running=False closed=False debug=False>

It's like the cycle is slightly out of phase.

In contrast, if you create a new loop **at the end**, you're returning the application to the neutral state it was at the beginning, namely with a non-None loop that is neither running nor closed.

I can think of three use cases for the context manager above:

1) for wrapping the "main" function of an application,
2) for calling async functions from a synchronous app (even from different threads), which is what I was originally asking about, and
3) as part of a decorator around individual unit tests to guarantee loop isolation.

This seems like a really simple thing, but I haven't seen the pattern above written down anywhere (e.g. in past discussions of asyncio.run()).

--Chris

On Mon, Jul 10, 2017 at 7:46 AM, Guido van Rossum wrote:
> OK, then as long as close the connection and the loop properly it shouldn't
> be a problem, even multi-threaded.
(You basically lose all advantage of > async, but it seems you're fine with that.) > > On Sun, Jul 9, 2017 at 9:07 PM, Chris Jerdonek > wrote: >> >> On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum wrote: >> > But the big question is, what is that library doing for you? In the >> > abstract >> > it is hard to give you a good answer. What library is it? What calls are >> > you >> > making? >> >> It's the websockets library: https://github.com/aaugustin/websockets >> >> All I really need to do is occasionally connect briefly to a websocket >> server as a client from a synchronous app. >> >> Since I'm already using the library on the server-side, I thought I'd >> save myself the trouble of having to use two libraries and just use >> the same library on the client side as well. >> >> --Chris >> >> >> >> >> > >> > On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek >> > >> > wrote: >> >> >> >> I have a two-part question. >> >> >> >> If my application is single-threaded and synchronous (e.g. a web app >> >> using Gunicorn with sync workers [1]), and occasionally I need to call >> >> functions in a library that requires an event loop, is there any >> >> downside to creating and closing the loop on-the-fly only when I call >> >> the function? In other words, is creating and destroying loops cheap? >> >> >> >> Second, if I were to switch to a multi-threaded model (e.g. Gunicorn >> >> with async workers), is my only option to start the loop at the >> >> beginning of the process, and use loop.call_soon_threadsafe()? Or can >> >> I do what I was asking about above and create and close loops >> >> on-the-fly in different threads? Is either approach much more >> >> efficient than the other? 
>> >> >> >> Thanks, >> >> --Chris >> >> >> >> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers >> >> _______________________________________________ >> >> Async-sig mailing list >> >> Async-sig at python.org >> >> https://mail.python.org/mailman/listinfo/async-sig >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > >> > >> > >> > >> > -- >> > --Guido van Rossum (python.org/~guido) > > > > > -- > --Guido van Rossum (python.org/~guido) From andrew.svetlov at gmail.com Tue Jul 11 13:20:03 2017 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 11 Jul 2017 17:20:03 +0000 Subject: [Async-sig] using asyncio in synchronous applications In-Reply-To: References: Message-ID: Why do you call set_event_loop() on Python 3.6 at all? On Tue, Jul 11, 2017, 17:56 Chris Jerdonek wrote: > There's something I realized about "creating and destroying" ephemeral > event loops if you want to create temporary event loops over time in a > synchronous application. > > This wasn't clear to me at the beginning, but it's actually more > natural to do the reverse and "destroy and create," and **at the > end**: > > @contextmanager > def run_in_loop(): > try: > yield > finally: > loop = asyncio.get_event_loop() > loop.close() > loop = asyncio.new_event_loop() > asyncio.set_event_loop(loop) > > The reason is that at the beginning of an application, the event loop > starts out not closed. So if you start out by creating a new loop at > the beginning, you'll get a warning like the following: > > /usr/local/lib/python3.6/asyncio/base_events.py:509: > ResourceWarning: unclosed event loop <_UnixSelectorEventLoop > running=False closed=False debug=False> > > It's like the cycle is slightly out of phase. > > In contrast, if you create a new loop **at the end**, you're returning > the application to the neutral state it was at the beginning, namely > with a non-None loop that is neither running nor closed. 
> > I can think of three use cases for the context manager above: > > 1) for wrapping the "main" function of an application, > 2) for calling async functions from a synchronous app (even from > different threads), which is what I was originally asking about, and > 3) as part of a decorator around individual unit tests to guarantee > loop isolation. > > This seems like a really simple thing, but I haven't seen the pattern > above written down anywhere (e.g. in past discussions of > asyncio.run()). > > --Chris > > > On Mon, Jul 10, 2017 at 7:46 AM, Guido van Rossum > wrote: > > OK, then as long as close the connection and the loop properly it > shouldn't > > be a problem, even multi-threaded. (You basically lose all advantage of > > async, but it seems you're fine with that.) > > > > On Sun, Jul 9, 2017 at 9:07 PM, Chris Jerdonek > > > wrote: > >> > >> On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum > wrote: > >> > But the big question is, what is that library doing for you? In the > >> > abstract > >> > it is hard to give you a good answer. What library is it? What calls > are > >> > you > >> > making? > >> > >> It's the websockets library: https://github.com/aaugustin/websockets > >> > >> All I really need to do is occasionally connect briefly to a websocket > >> server as a client from a synchronous app. > >> > >> Since I'm already using the library on the server-side, I thought I'd > >> save myself the trouble of having to use two libraries and just use > >> the same library on the client side as well. > >> > >> --Chris > >> > >> > >> > >> > >> > > >> > On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek > >> > > >> > wrote: > >> >> > >> >> I have a two-part question. > >> >> > >> >> If my application is single-threaded and synchronous (e.g. 
a web app > >> >> using Gunicorn with sync workers [1]), and occasionally I need to > call > >> >> functions in a library that requires an event loop, is there any > >> >> downside to creating and closing the loop on-the-fly only when I call > >> >> the function? In other words, is creating and destroying loops cheap? > >> >> > >> >> Second, if I were to switch to a multi-threaded model (e.g. Gunicorn > >> >> with async workers), is my only option to start the loop at the > >> >> beginning of the process, and use loop.call_soon_threadsafe()? Or can > >> >> I do what I was asking about above and create and close loops > >> >> on-the-fly in different threads? Is either approach much more > >> >> efficient than the other? > >> >> > >> >> Thanks, > >> >> --Chris > >> >> > >> >> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers > >> >> _______________________________________________ > >> >> Async-sig mailing list > >> >> Async-sig at python.org > >> >> https://mail.python.org/mailman/listinfo/async-sig > >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > >> > > >> > > >> > > >> > > >> > -- > >> > --Guido van Rossum (python.org/~guido) > > > > > > > > > > -- > > --Guido van Rossum (python.org/~guido) > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > -- Thanks, Andrew Svetlov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Tue Jul 11 16:12:54 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 11 Jul 2017 13:12:54 -0700 Subject: [Async-sig] using asyncio in synchronous applications In-Reply-To: References: Message-ID: On Tue, Jul 11, 2017 at 10:20 AM, Andrew Svetlov wrote: > Why do you call set_event_loop() on Python 3.6 at all? 
Calling set_event_loop() at the end resets / sets things up for the next invocation. That was part of my point. Without it, I get the following error the next time I try to use the context manager (note that I've chosen a better name for the manager here):

    with reset_loop_after():
        loop = asyncio.get_event_loop()
        loop.run_until_complete(foo())

    with reset_loop_after():
        loop = asyncio.get_event_loop()
        loop.run_until_complete(foo())

    Traceback (most recent call last):
      ...
        result = loop.run_until_complete(future)
      File "/usr/local/lib/python3.6/asyncio/base_events.py", line 443, in run_until_complete
        self._check_closed()
      File "/usr/local/lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
        raise RuntimeError('Event loop is closed')
    RuntimeError: Event loop is closed

Remember that two of the three use cases I listed involve calling the function multiple times throughout the process's lifetime. Is there a way that doesn't require calling set_event_loop()?

--Chris

> On Tue, Jul 11, 2017, 17:56 Chris Jerdonek wrote:
>>
>> There's something I realized about "creating and destroying" ephemeral
>> event loops if you want to create temporary event loops over time in a
>> synchronous application.
>>
>> This wasn't clear to me at the beginning, but it's actually more
>> natural to do the reverse and "destroy and create," and **at the
>> end**:
>>
>>     @contextmanager
>>     def run_in_loop():
>>         try:
>>             yield
>>         finally:
>>             loop = asyncio.get_event_loop()
>>             loop.close()
>>             loop = asyncio.new_event_loop()
>>             asyncio.set_event_loop(loop)
>>
>> The reason is that at the beginning of an application, the event loop
>> starts out not closed. So if you start out by creating a new loop at
>> the beginning, you'll get a warning like the following:
>>
>>     /usr/local/lib/python3.6/asyncio/base_events.py:509:
>>     ResourceWarning: unclosed event loop <_UnixSelectorEventLoop
>>     running=False closed=False debug=False>
>>
>> It's like the cycle is slightly out of phase.
>> >> In contrast, if you create a new loop **at the end**, you're returning >> the application to the neutral state it was at the beginning, namely >> with a non-None loop that is neither running nor closed. >> >> I can think of three use cases for the context manager above: >> >> 1) for wrapping the "main" function of an application, >> 2) for calling async functions from a synchronous app (even from >> different threads), which is what I was originally asking about, and >> 3) as part of a decorator around individual unit tests to guarantee >> loop isolation. >> >> This seems like a really simple thing, but I haven't seen the pattern >> above written down anywhere (e.g. in past discussions of >> asyncio.run()). >> >> --Chris >> >> >> On Mon, Jul 10, 2017 at 7:46 AM, Guido van Rossum >> wrote: >> > OK, then as long as close the connection and the loop properly it >> > shouldn't >> > be a problem, even multi-threaded. (You basically lose all advantage of >> > async, but it seems you're fine with that.) >> > >> > On Sun, Jul 9, 2017 at 9:07 PM, Chris Jerdonek >> > >> > wrote: >> >> >> >> On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum >> >> wrote: >> >> > But the big question is, what is that library doing for you? In the >> >> > abstract >> >> > it is hard to give you a good answer. What library is it? What calls >> >> > are >> >> > you >> >> > making? >> >> >> >> It's the websockets library: https://github.com/aaugustin/websockets >> >> >> >> All I really need to do is occasionally connect briefly to a websocket >> >> server as a client from a synchronous app. >> >> >> >> Since I'm already using the library on the server-side, I thought I'd >> >> save myself the trouble of having to use two libraries and just use >> >> the same library on the client side as well. >> >> >> >> --Chris >> >> >> >> >> >> >> >> >> >> > >> >> > On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek >> >> > >> >> > wrote: >> >> >> >> >> >> I have a two-part question. 
>> >> >> >> >> >> If my application is single-threaded and synchronous (e.g. a web app >> >> >> using Gunicorn with sync workers [1]), and occasionally I need to >> >> >> call >> >> >> functions in a library that requires an event loop, is there any >> >> >> downside to creating and closing the loop on-the-fly only when I >> >> >> call >> >> >> the function? In other words, is creating and destroying loops >> >> >> cheap? >> >> >> >> >> >> Second, if I were to switch to a multi-threaded model (e.g. Gunicorn >> >> >> with async workers), is my only option to start the loop at the >> >> >> beginning of the process, and use loop.call_soon_threadsafe()? Or >> >> >> can >> >> >> I do what I was asking about above and create and close loops >> >> >> on-the-fly in different threads? Is either approach much more >> >> >> efficient than the other? >> >> >> >> >> >> Thanks, >> >> >> --Chris >> >> >> >> >> >> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers >> >> >> _______________________________________________ >> >> >> Async-sig mailing list >> >> >> Async-sig at python.org >> >> >> https://mail.python.org/mailman/listinfo/async-sig >> >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> >> > >> >> > >> >> > >> >> > >> >> > -- >> >> > --Guido van Rossum (python.org/~guido) >> > >> > >> > >> > >> > -- >> > --Guido van Rossum (python.org/~guido) >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > > -- > Thanks, > Andrew Svetlov From andrew.svetlov at gmail.com Tue Jul 11 16:25:26 2017 From: andrew.svetlov at gmail.com (Andrew Svetlov) Date: Tue, 11 Jul 2017 20:25:26 +0000 Subject: [Async-sig] using asyncio in synchronous applications In-Reply-To: References: Message-ID: Hmm. After rethinking I see `set_event_loop()` is required in your design. 
But better to have a `run(coro())` API; it could be implemented like:

def run(coro):
    loop = asyncio.new_event_loop()
    loop.run_until_complete(coro)
    loop.close()

The implementation doesn't touch the default loop, but an `asyncio.get_event_loop()` call from `coro` returns the running loop instance.

On Tue, Jul 11, 2017 at 10:12 PM Chris Jerdonek wrote:
> On Tue, Jul 11, 2017 at 10:20 AM, Andrew Svetlov wrote:
> > Why do you call set_event_loop() on Python 3.6 at all?
>
> Calling set_event_loop() at the end resets / sets things up for the
> next invocation. That was part of my point. Without it, I get the
> following error the next time I try to use the context manager (note
> that I've chosen a better name for the manager here):
>
> with reset_loop_after():
>     loop = asyncio.get_event_loop()
>     loop.run_until_complete(foo())
>
> with reset_loop_after():
>     loop = asyncio.get_event_loop()
>     loop.run_until_complete(foo())
>
> Traceback (most recent call last):
>   ...
>     result = loop.run_until_complete(future)
>   File "/usr/local/lib/python3.6/asyncio/base_events.py", line 443, in run_until_complete
>     self._check_closed()
>   File "/usr/local/lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
>     raise RuntimeError('Event loop is closed')
> RuntimeError: Event loop is closed
>
> Remember that two of the three use cases I listed involve calling the
> function multiple times throughout the process's lifetime.
>
> Is there a way that doesn't require calling set_event_loop()?
>
> --Chris
>
>
> > On Tue, Jul 11, 2017, 17:56 Chris Jerdonek wrote:
> >>
> >> There's something I realized about "creating and destroying" ephemeral
> >> event loops if you want to create temporary event loops over time in a
> >> synchronous application.
> >> > >> This wasn't clear to me at the beginning, but it's actually more > >> natural to do the reverse and "destroy and create," and **at the > >> end**: > >> > >> @contextmanager > >> def run_in_loop(): > >> try: > >> yield > >> finally: > >> loop = asyncio.get_event_loop() > >> loop.close() > >> loop = asyncio.new_event_loop() > >> asyncio.set_event_loop(loop) > >> > >> The reason is that at the beginning of an application, the event loop > >> starts out not closed. So if you start out by creating a new loop at > >> the beginning, you'll get a warning like the following: > >> > >> /usr/local/lib/python3.6/asyncio/base_events.py:509: > >> ResourceWarning: unclosed event loop <_UnixSelectorEventLoop > >> running=False closed=False debug=False> > >> > >> It's like the cycle is slightly out of phase. > >> > >> In contrast, if you create a new loop **at the end**, you're returning > >> the application to the neutral state it was at the beginning, namely > >> with a non-None loop that is neither running nor closed. > >> > >> I can think of three use cases for the context manager above: > >> > >> 1) for wrapping the "main" function of an application, > >> 2) for calling async functions from a synchronous app (even from > >> different threads), which is what I was originally asking about, and > >> 3) as part of a decorator around individual unit tests to guarantee > >> loop isolation. > >> > >> This seems like a really simple thing, but I haven't seen the pattern > >> above written down anywhere (e.g. in past discussions of > >> asyncio.run()). > >> > >> --Chris > >> > >> > >> On Mon, Jul 10, 2017 at 7:46 AM, Guido van Rossum > >> wrote: > >> > OK, then as long as close the connection and the loop properly it > >> > shouldn't > >> > be a problem, even multi-threaded. (You basically lose all advantage > of > >> > async, but it seems you're fine with that.) 
> >> > > >> > On Sun, Jul 9, 2017 at 9:07 PM, Chris Jerdonek > >> > > >> > wrote: > >> >> > >> >> On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum > >> >> wrote: > >> >> > But the big question is, what is that library doing for you? In the > >> >> > abstract > >> >> > it is hard to give you a good answer. What library is it? What > calls > >> >> > are > >> >> > you > >> >> > making? > >> >> > >> >> It's the websockets library: https://github.com/aaugustin/websockets > >> >> > >> >> All I really need to do is occasionally connect briefly to a > websocket > >> >> server as a client from a synchronous app. > >> >> > >> >> Since I'm already using the library on the server-side, I thought I'd > >> >> save myself the trouble of having to use two libraries and just use > >> >> the same library on the client side as well. > >> >> > >> >> --Chris > >> >> > >> >> > >> >> > >> >> > >> >> > > >> >> > On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek > >> >> > > >> >> > wrote: > >> >> >> > >> >> >> I have a two-part question. > >> >> >> > >> >> >> If my application is single-threaded and synchronous (e.g. a web > app > >> >> >> using Gunicorn with sync workers [1]), and occasionally I need to > >> >> >> call > >> >> >> functions in a library that requires an event loop, is there any > >> >> >> downside to creating and closing the loop on-the-fly only when I > >> >> >> call > >> >> >> the function? In other words, is creating and destroying loops > >> >> >> cheap? > >> >> >> > >> >> >> Second, if I were to switch to a multi-threaded model (e.g. > Gunicorn > >> >> >> with async workers), is my only option to start the loop at the > >> >> >> beginning of the process, and use loop.call_soon_threadsafe()? Or > >> >> >> can > >> >> >> I do what I was asking about above and create and close loops > >> >> >> on-the-fly in different threads? Is either approach much more > >> >> >> efficient than the other? 
> >> >> >> > >> >> >> Thanks, > >> >> >> --Chris > >> >> >> > >> >> >> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers > >> >> >> _______________________________________________ > >> >> >> Async-sig mailing list > >> >> >> Async-sig at python.org > >> >> >> https://mail.python.org/mailman/listinfo/async-sig > >> >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > >> >> > > >> >> > > >> >> > > >> >> > > >> >> > -- > >> >> > --Guido van Rossum (python.org/~guido) > >> > > >> > > >> > > >> > > >> > -- > >> > --Guido van Rossum (python.org/~guido) > >> _______________________________________________ > >> Async-sig mailing list > >> Async-sig at python.org > >> https://mail.python.org/mailman/listinfo/async-sig > >> Code of Conduct: https://www.python.org/psf/codeofconduct/ > > > > -- > > Thanks, > > Andrew Svetlov > -- Thanks, Andrew Svetlov -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.jerdonek at gmail.com Tue Jul 11 17:35:40 2017 From: chris.jerdonek at gmail.com (Chris Jerdonek) Date: Tue, 11 Jul 2017 14:35:40 -0700 Subject: [Async-sig] using asyncio in synchronous applications In-Reply-To: References: Message-ID: On Tue, Jul 11, 2017 at 1:25 PM, Andrew Svetlov wrote: > Hmm. After rethinking I see `set_event_loop()` is required in your design. > But better to have `run(coro())` API, it could be implemented like > > def run(coro): > loop = asyncio.new_event_loop() > loop.run_until_complete(coro) > loop.close() Hmm. This was confusing and surprising to me that it works. For example, calling asyncio.get_event_loop() inside run() returns a different loop than when calling from inside coro. 
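[Editor's note: that difference is easy to demonstrate. The sketch below uses a hypothetical `which_loop()` coroutine, and adds a `return` value and a `try/finally` to Andrew's `run()` for safety; it shows that `get_event_loop()` inside a coroutine returns the loop `run()` created, and that each `run()` call uses a fresh loop.]

```python
import asyncio

def run(coro):
    # Andrew's run() sketch, plus a return value and try/finally.
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(coro)
    finally:
        loop.close()

async def which_loop():
    # With a loop running, get_event_loop() returns that running loop,
    # not whatever loop set_event_loop() last installed (if any).
    return asyncio.get_event_loop()

first = run(which_loop())
second = run(which_loop())
print(first is second)  # -> False: each run() call used its own loop
```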
Then I remembered seeing something related to this, and found this: https://github.com/python/asyncio/pull/452 ("Make get_event_loop() return the current loop if called from coroutines/callbacks") It might be good for the main get_event_loop() docs: https://docs.python.org/3/library/asyncio-eventloops.html#asyncio.get_event_loop to be updated to say that get_event_loop() returns the currently running loop and not e.g. the loop last passed to set_event_loop(), which is what the function names and current docs seem to suggest. But thank you for your pattern, Andrew. I'm glad I asked. By the way, have people settled on the best practice boilerplate for starting / cleaning up servers and loops, etc (e.g. as a gist)? It's partly what I'm trying to work out. From this issue: https://github.com/python/asyncio/pull/465 it seems like there are some subtle issues that may not have been decided, or maybe the path is clear but the sticking point is just whether it should go in the standard library. Thanks, --Chris > > The implementation doesn't touch default loop but `asyncio.get_event_loop()` > call from `coro` returns a running loop instance. > > > On Tue, Jul 11, 2017 at 10:12 PM Chris Jerdonek > wrote: >> >> On Tue, Jul 11, 2017 at 10:20 AM, Andrew Svetlov >> wrote: >> > Why do you call set_event_loop() on Python 3.6 at all? >> >> Calling set_event_loop() at the end resets / sets things up for the >> next invocation. That was part of my point. Without it, I get the >> following error the next time I try to use the context manager (note >> that I've chosen a better name for the manager here): >> >> with reset_loop_after(): >> loop = asyncio.get_event_loop() >> loop.run_until_complete(foo()) >> >> with reset_loop_after(): >> loop = asyncio.get_event_loop() >> loop.run_until_complete(foo()) >> >> Traceback (most recent call last): >> ... 
>> result = loop.run_until_complete(future) >> File "/usr/local/lib/python3.6/asyncio/base_events.py", line >> 443, in run_until_complete >> self._check_closed() >> File "/usr/local/lib/python3.6/asyncio/base_events.py", line >> 357, in _check_closed >> raise RuntimeError('Event loop is closed') >> RuntimeError: Event loop is closed >> >> Remember that two of the three use cases I listed involve calling the >> function multiple times throughout the process's lifetime. >> >> Is there a way that doesn't require calling set_event_loop()? >> >> --Chris >> >> >> > On Tue, Jul 11, 2017, 17:56 Chris Jerdonek >> > wrote: >> >> >> >> There's something I realized about "creating and destroying" ephemeral >> >> event loops if you want to create temporary event loops over time in a >> >> synchronous application. >> >> >> >> This wasn't clear to me at the beginning, but it's actually more >> >> natural to do the reverse and "destroy and create," and **at the >> >> end**: >> >> >> >> @contextmanager >> >> def run_in_loop(): >> >> try: >> >> yield >> >> finally: >> >> loop = asyncio.get_event_loop() >> >> loop.close() >> >> loop = asyncio.new_event_loop() >> >> asyncio.set_event_loop(loop) >> >> >> >> The reason is that at the beginning of an application, the event loop >> >> starts out not closed. So if you start out by creating a new loop at >> >> the beginning, you'll get a warning like the following: >> >> >> >> /usr/local/lib/python3.6/asyncio/base_events.py:509: >> >> ResourceWarning: unclosed event loop <_UnixSelectorEventLoop >> >> running=False closed=False debug=False> >> >> >> >> It's like the cycle is slightly out of phase. >> >> >> >> In contrast, if you create a new loop **at the end**, you're returning >> >> the application to the neutral state it was at the beginning, namely >> >> with a non-None loop that is neither running nor closed. 
>> >> >> >> I can think of three use cases for the context manager above: >> >> >> >> 1) for wrapping the "main" function of an application, >> >> 2) for calling async functions from a synchronous app (even from >> >> different threads), which is what I was originally asking about, and >> >> 3) as part of a decorator around individual unit tests to guarantee >> >> loop isolation. >> >> >> >> This seems like a really simple thing, but I haven't seen the pattern >> >> above written down anywhere (e.g. in past discussions of >> >> asyncio.run()). >> >> >> >> --Chris >> >> >> >> >> >> On Mon, Jul 10, 2017 at 7:46 AM, Guido van Rossum >> >> wrote: >> >> > OK, then as long as close the connection and the loop properly it >> >> > shouldn't >> >> > be a problem, even multi-threaded. (You basically lose all advantage >> >> > of >> >> > async, but it seems you're fine with that.) >> >> > >> >> > On Sun, Jul 9, 2017 at 9:07 PM, Chris Jerdonek >> >> > >> >> > wrote: >> >> >> >> >> >> On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum >> >> >> wrote: >> >> >> > But the big question is, what is that library doing for you? In >> >> >> > the >> >> >> > abstract >> >> >> > it is hard to give you a good answer. What library is it? What >> >> >> > calls >> >> >> > are >> >> >> > you >> >> >> > making? >> >> >> >> >> >> It's the websockets library: https://github.com/aaugustin/websockets >> >> >> >> >> >> All I really need to do is occasionally connect briefly to a >> >> >> websocket >> >> >> server as a client from a synchronous app. >> >> >> >> >> >> Since I'm already using the library on the server-side, I thought >> >> >> I'd >> >> >> save myself the trouble of having to use two libraries and just use >> >> >> the same library on the client side as well. >> >> >> >> >> >> --Chris >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> > >> >> >> > On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek >> >> >> > >> >> >> > wrote: >> >> >> >> >> >> >> >> I have a two-part question. 
>> >> >> >> >> >> >> >> If my application is single-threaded and synchronous (e.g. a web >> >> >> >> app >> >> >> >> using Gunicorn with sync workers [1]), and occasionally I need to >> >> >> >> call >> >> >> >> functions in a library that requires an event loop, is there any >> >> >> >> downside to creating and closing the loop on-the-fly only when I >> >> >> >> call >> >> >> >> the function? In other words, is creating and destroying loops >> >> >> >> cheap? >> >> >> >> >> >> >> >> Second, if I were to switch to a multi-threaded model (e.g. >> >> >> >> Gunicorn >> >> >> >> with async workers), is my only option to start the loop at the >> >> >> >> beginning of the process, and use loop.call_soon_threadsafe()? Or >> >> >> >> can >> >> >> >> I do what I was asking about above and create and close loops >> >> >> >> on-the-fly in different threads? Is either approach much more >> >> >> >> efficient than the other? >> >> >> >> >> >> >> >> Thanks, >> >> >> >> --Chris >> >> >> >> >> >> >> >> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers >> >> >> >> _______________________________________________ >> >> >> >> Async-sig mailing list >> >> >> >> Async-sig at python.org >> >> >> >> https://mail.python.org/mailman/listinfo/async-sig >> >> >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > -- >> >> >> > --Guido van Rossum (python.org/~guido) >> >> > >> >> > >> >> > >> >> > >> >> > -- >> >> > --Guido van Rossum (python.org/~guido) >> >> _______________________________________________ >> >> Async-sig mailing list >> >> Async-sig at python.org >> >> https://mail.python.org/mailman/listinfo/async-sig >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > >> > -- >> > Thanks, >> > Andrew Svetlov > > -- > Thanks, > Andrew Svetlov From lmazuel at microsoft.com Tue Jul 11 18:26:58 2017 From: lmazuel at microsoft.com (Laurent Mazuel) Date: Tue, 11 Jul 2017 22:26:58 +0000 Subject: 
[Async-sig] Optional async method and best practices
Message-ID: 

Hello,

I'm currently working with Brett Cannon to bring asyncio support to our SDK. We wanted to check one of the scenarios with you, since the two of us had a long discussion about it. We want to handle it using the best reasonable practice, in your opinion.

We have an API that is clearly async and will gain a lot from being converted to asyncio. However, it's a two-step operation. Operation 1 asks for the creation of a resource and is not async; operation 2 is *optional* and waits for completion of that creation (currently with nightmare threads, and I removed a lot of code by moving to asyncio - happiness). There are perfectly legitimate scenarios where operation 2 is not needed and avoiding it is better, but it has to be prepared at the same time as operation 1. Current code looks like this:

    sync_poller = client.create(**parameters)
    obj = sync_poller.resource() # Get the initial resource information, but the object is not actually created yet.
    obj = sync_poller.result() # OPTIONAL. This is a blocking call using a thread, if you want to wait for actual creation and get updated metadata

My first prototype was to split and return a tuple (resource, coroutine):

    obj, optional_poller = client.create(**parameters)
    obj = await optional_poller # OPTIONAL

But I got a warning if I decide not to use this poller: RuntimeWarning: coroutine 'foo' was never awaited

Honestly, I was surprised that I can't do that, since I feel like I'm not leaking anything. I didn't run the operation, so as far as I know there is no wasted resource. But I remember wasting time because of a forgotten "yield from", so I guess it's fair. Still, I would be curious to understand what I did badly.

I found 2 solutions to avoid the warning, and I currently prefer solution 2:

1- Return a function to call, and not a coroutine. The "await" statement becomes:

    obj = await optional_poller()

2- Return my initial object with an async method.
This allows me to write (something finally close to the current code):

    async_poller = client.create(**parameters)
    obj = async_poller.resource() # Get the initial resource information, but the object is not actually created yet.
    obj = await async_poller.result() # OPTIONAL

My async_poller object would be something like:

    class PollerOperation:
        async def result(self):
            ...async version of the previous sync result()...

So the questions are:
- Does this seem a correct pattern?
- Is there a simple way to achieve something like this:

    obj = await async_poller

meaning, I could drop the "result()" syntax and directly "await" the object to get the result by magic. I tried subclassing some ABC coroutine/awaitable, but wasn't able to find a correct syntax. I'm not even sure this makes sense and respects the Zen of Python.

If it helps, I'm willing to use 3.5 as the minimal requirement to get async behavior.

Thank you!!

Laurent

From chris.jerdonek at gmail.com Wed Jul 12 04:17:29 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Wed, 12 Jul 2017 01:17:29 -0700
Subject: [Async-sig] Optional async method and best practices
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jul 11, 2017 at 3:26 PM, Laurent Mazuel via Async-sig wrote:
> But I got a warning if I decide not to use this poller: RuntimeWarning: coroutine 'foo' was never awaited
> ...
> I found 2 solutions to avoid the warning, and I currently prefer solution 2:
> 1- Return a function to call, and not a coroutine. The "await" statement becomes:
>
> obj = await optional_poller()
>
> 2- Return my initial object with an async method. This allows me to write (something finally close to the current code):
>
> async_poller = client.create(**parameters)
> obj = async_poller.resource() # Get the initial resource information, but the object is not actually created yet.
> obj = await async_poller.result() # OPTIONAL

Either of those options sounds fine to me.
Instead of creating your coroutine object at the very beginning, create your coroutine *function*. Wait until you know you're going to do your second operation, and create your coroutine object then!

--Chris

> My async_poller object would be something like:
>
> class PollerOperation:
>     async def result(self):
>         ...async version of the previous sync result()...
>
> So the questions are:
> - Does this seem a correct pattern?
> - Is there a simple way to achieve something like this:
>
> obj = await async_poller
>
> meaning, I could drop the "result()" syntax and directly "await" the object to get the result by magic. I tried subclassing some ABC coroutine/awaitable, but wasn't able to find a correct syntax. I'm not even sure this makes sense and respects the Zen of Python.
>
> If it helps, I'm willing to use 3.5 as the minimal requirement to get async behavior.
>
> Thank you!!
>
> Laurent
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/

From cory at lukasa.co.uk Wed Jul 12 04:18:25 2017
From: cory at lukasa.co.uk (Cory Benfield)
Date: Wed, 12 Jul 2017 09:18:25 +0100
Subject: [Async-sig] Optional async method and best practices
In-Reply-To: 
References: 
Message-ID: 

> On 11 Jul 2017, at 23:26, Laurent Mazuel via Async-sig wrote:
>
> Hello,

Hi Laurent! A future note: your message got stuck in moderation because you aren't subscribed to the mailing list. You may find it helpful to subscribe, as your future messages will also get stuck unless you do!
> My first prototype was to split and return a tuple (resource, coroutine):
>
> obj, optional_poller = client.create(**parameters)
> obj = await optional_poller # OPTIONAL
>
> But I got a warning if I decide not to use this poller: RuntimeWarning: coroutine 'foo' was never awaited
>
> Honestly, I was surprised that I can't do that, since I feel like I'm not leaking anything. I didn't run the operation, so as far as I know there is no wasted resource. But I remember wasting time because of a forgotten "yield from", so I guess it's fair. Still, I would be curious to understand what I did badly.

The assumption in asyncio, generally speaking, is that you do not create coroutines you do not care about running. This is both for abstract theoretical reasons (if you don't care whether the coroutine is run or not, why not just optimise your code to never create the coroutine and save yourself the CPU cycles?) and for more concrete practical concerns (coroutines may own system resources and do cleanup in `finally` blocks, and if you don't await a coroutine then you'll never reach the `finally` block and so will leak system resources). Given that there's no computational way to be sure that *not* running a coroutine is safe (hello there, halting problem), asyncio takes the pedantic approach and says that not running a coroutine is a condition that justifies a warning.

I think asyncio's position here is correct, incidentally.

> 2- Return my initial object with an async method. This allows me to write (something finally close to the current code):
>
> async_poller = client.create(**parameters)
> obj = async_poller.resource() # Get the initial resource information, but the object is not actually created yet.
> obj = await async_poller.result() # OPTIONAL
>
> My async_poller object would be something like:
>
> class PollerOperation:
>     async def result(self):
>         ...async version of the previous sync result()...
>
> So the questions are:
> - Does this seem a correct pattern?

Yes.
This is the simple mapping to your old API, and is absolutely what I'd recommend doing in the first instance if you want to use coroutines.

> - Is there a simple way to achieve something like this:
>
> obj = await async_poller
>
> meaning, I could drop the "result()" syntax and directly "await" the object to get the result by magic. I tried subclassing some ABC coroutine/awaitable, but wasn't able to find a correct syntax. I'm not even sure this makes sense and respects the Zen of Python.

There are a few other patterns you could use.

The first is to return a Future, and just always run the "polling" function in the background to resolve that Future. If the caller doesn't care about the result they can just ignore the Future, and if they do care they can await it. This has the downside of always requiring the polling I/O, but is otherwise pretty clean.

Another option is to offer two functions, for example `def resource_nowait()` and `async def resource()`. The caller can decide which to call based on whether they care about finding out the result. This is the clearest approach that doesn't trigger automatic extra work the way the Future does, and it lacks magic: it's very declarative. This is a nice approach for keeping things clean.

Finally, you can create a magic awaitable object. PEP 492 defines several ways to create an "awaitable", but one approach is to use what it calls a "Future-like object": that is, one with a __await__ method that returns an iterator. In this case, you'd do a very basic extension of the Future object, triggering the work when __await__ is called before delegating to the normal behaviour. This is an annoyingly precise thing to do, though technically doable.
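[Editor's note: to make the last option concrete, here is a minimal sketch of such a "Future-like object", with hypothetical names. It only schedules the polling work the first time it is awaited, so an un-awaited instance triggers no work and no RuntimeWarning.]

```python
import asyncio

class LazyPoller:
    def __init__(self):
        self._task = None  # no work scheduled until first await

    async def _poll(self):
        # Stand-in for the real "wait for resource creation" I/O.
        await asyncio.sleep(0)
        return 'updated-metadata'

    def __await__(self):
        # "Future-like object": __await__ must return an iterator.
        if self._task is None:
            self._task = asyncio.ensure_future(self._poll())
        return self._task.__await__()

async def main():
    poller = LazyPoller()
    # Not awaiting `poller` at all would schedule nothing.
    return await poller  # the first await kicks off the polling task

loop = asyncio.new_event_loop()
try:
    print(loop.run_until_complete(main()))  # -> updated-metadata
finally:
    loop.close()
```

Awaiting the same instance twice just re-awaits the one underlying task, so the poll runs at most once; whether this magic beats an explicit `await poller.result()` is the taste question raised above.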
Cory

From dimaqq at gmail.com Wed Jul 12 06:34:53 2017
From: dimaqq at gmail.com (Dima Tisnek)
Date: Wed, 12 Jul 2017 12:34:53 +0200
Subject: [Async-sig] Optional async method and best practices
In-Reply-To: 
References: 
Message-ID: 

Hi Laurent,

I'm still a dilettante, so take my comments with a grain of salt:

1. Target Python 3.6 only.
   (i.e. drop 3.5; look at 3.7 obv, but you want users now)
   (i.e. forget `yield from`, no one will remember/get it next year)
   (if 2.7 or 3.3 must be supported, provide a sync package)
2. Use futures (unless it's a stream)
3. Shield liberally
4. Provide context managers

Naive user code might look like this:

    req = lib.request(...)
    await req.ready() # optional
    return (await req.json())["something"]

That's sane and pretty similar to http://aiohttp.readthedocs.io/en/stable/client.html thus your users will get it :)

A more advanced use would be `[async] with lib.request(...) as r: await r.json()` (you probably want `async with` unless you can ensure synchronous, timely termination).

Personally I'd prefer `ready` and `json` without parentheses, but it seems I'm in a minority.

Cheers,
d.

On 12 July 2017 at 00:26, Laurent Mazuel via Async-sig wrote:
> Hello,
>
> I'm currently working with Brett Cannon to bring asyncio support to our SDK. We wanted to check one of the scenarios with you, since the two of us had a long discussion about it. We want to handle it using the best reasonable practice, in your opinion.
>
> We have an API that is clearly async and will gain a lot from being converted to asyncio. However, it's a two-step operation. Operation 1 asks for the creation of a resource and is not async; operation 2 is *optional* and waits for completion of that creation (currently with nightmare threads, and I removed a lot of code by moving to asyncio - happiness). There are perfectly legitimate scenarios where operation 2 is not needed and avoiding it is better, but it has to be prepared at the same time as operation 1.
> Current code looks like this:
>
> sync_poller = client.create(**parameters)
> obj = sync_poller.resource() # Get the initial resource information, but the object is not actually created yet.
> obj = sync_poller.result() # OPTIONAL. This is a blocking call using a thread, if you want to wait for actual creation and get updated metadata
>
> My first prototype was to split and return a tuple (resource, coroutine):
>
> obj, optional_poller = client.create(**parameters)
> obj = await optional_poller # OPTIONAL
>
> But I got a warning if I decide not to use this poller: RuntimeWarning: coroutine 'foo' was never awaited
>
> Honestly, I was surprised that I can't do that, since I feel like I'm not leaking anything. I didn't run the operation, so as far as I know there is no wasted resource. But I remember wasting time because of a forgotten "yield from", so I guess it's fair. Still, I would be curious to understand what I did badly.
>
> I found 2 solutions to avoid the warning, and I currently prefer solution 2:
> 1- Return a function to call, and not a coroutine. The "await" statement becomes:
>
> obj = await optional_poller()
>
> 2- Return my initial object with an async method. This allows me to write (something finally close to the current code):
>
> async_poller = client.create(**parameters)
> obj = async_poller.resource() # Get the initial resource information, but the object is not actually created yet.
> obj = await async_poller.result() # OPTIONAL
>
> My async_poller object would be something like:
>
> class PollerOperation:
>     async def result(self):
>         ...async version of the previous sync result()...
>
> So the questions are:
> - Does this seem a correct pattern?
> - Is there a simple way to achieve something like this:
>
> obj = await async_poller
>
> meaning, I could drop the "result()" syntax and directly "await" the object to get the result by magic. I tried subclassing some ABC coroutine/awaitable, but wasn't able to find a correct syntax.
> I'm not even sure this makes sense and respects the Zen of Python?
>
> If it helps, I'm willing to use 3.5 as the minimal requirement to get async behavior.
>
> Thank you!!
>
> Laurent
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/

From lmazuel at microsoft.com  Wed Jul 12 12:44:28 2017
From: lmazuel at microsoft.com (Laurent Mazuel)
Date: Wed, 12 Jul 2017 16:44:28 +0000
Subject: [Async-sig] Optional async method and best practices
In-Reply-To:
References:
Message-ID:

Thanks Dima, Chris and Cory! This helps me a lot :)

I like the "future" approach; I think this is exactly what I need, more than a coroutine method "result()/ready()" on my object. With a coroutine method, each time you call it you get a new coroutine, but it will poll the same set of values, which is pointless and a waste of I/O+CPU. With a future, I control the coroutine to be sure there is only one (which I schedule with ensure_future). I also agree that if the user doesn't want the poll result, it's better to save the I/O and not poll at all.

Between "resource(_nowait)" and a "nowait=True" keyword-only argument, I'm not sure yet. I was also thinking of lazy initialization using a property, something like:

    def __init__(self):
        self._future = None

    async def _poll(self):
        self._future.set_result('Future is Done!')

    @property
    def future(self):
        if self._future is None:
            self._future = asyncio.Future()
            asyncio.ensure_future(self._poll())
        return self._future

    result = await poller.future

If I don't access the "future" attribute, I don't poll at all. My initial "create" method returns the same object every time, and I don't need a parameter or another "nowait" method. Do you see any caveats or issues with this approach?

Thank you very much!!!

Laurent

PS: Cory, I just subscribed to the mailing list :)
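[Editor's note: a runnable sketch of the lazily-created-Future pattern Laurent describes above. `PollerOperation` and `_poll` are the thread's illustrative names, not a real SDK API; the real `_poll` would do network I/O against the service, and `asyncio.run()` (3.7+) stands in for `loop.run_until_complete()` on 3.6. Note Chris's follow-up below questioning whether a property should have this side effect.]

```python
import asyncio

class PollerOperation:
    def __init__(self):
        self._future = None

    async def _poll(self):
        # Stand-in for the real polling I/O against the service.
        await asyncio.sleep(0)
        self._future.set_result('Future is Done!')

    @property
    def future(self):
        # Polling starts only on first access; later accesses return the
        # same Future, so at most one _poll task ever runs.
        if self._future is None:
            self._future = asyncio.Future()
            asyncio.ensure_future(self._poll())
        return self._future

async def main():
    poller = PollerOperation()
    return await poller.future

result = asyncio.run(main())  # 'Future is Done!'
```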
-----Original Message-----
From: Cory Benfield [mailto:cory at lukasa.co.uk]
Sent: Wednesday, July 12, 2017 01:18
To: Laurent Mazuel
Cc: async-sig at python.org
Subject: Re: [Async-sig] Optional async method and best practices

> On 11 Jul 2017, at 23:26, Laurent Mazuel via Async-sig wrote:
>
> Hello,

Hi Laurent! A future note: your message got stuck in moderation because you aren't subscribed to the mailing list. You may find it helpful to subscribe, as your future messages will also get stuck unless you do!

> My first prototype was to split and return a tuple (resource, coroutine):
>
> obj, optional_poller = client.create(**parameters)
> obj = await optional_poller # OPTIONAL
>
> But I got a warning if I decide to do not use this poller, RuntimeWarning: coroutine 'foo' was never awaited
>
> I was surprised honestly that I can't do that, since I feel like I'm not leaking anything. I didn't run the operation, so there is no wasted resource at my knowledge. But I remember wasting time because of a forgotten "yield from", so I guess it's fair :). But I would be curious to understand what I did badly.

The assumption in asyncio, generally speaking, is that you do not create coroutines you do not care about running. This is both for abstract theoretical reasons (if you don't care if the coroutine is run or not, why not just optimise your code to never create the coroutine and save yourself the CPU cycles?) and for more concrete practical concerns (coroutines may own system resources and do cleanup in `finally` blocks, and if you don't await a coroutine then you'll never reach the `finally` block and so will leak system resources).

Given that there's no computational way to be sure that *not* running a coroutine is safe (hello there, halting problem), asyncio takes the pedantic model and says that not running a coroutine is a condition that justifies a warning. I think asyncio's position here is correct, incidentally.

> 2- Return my initial object with an async method.
> This allows me to write (something finally close to the current code):
>
> async_poller = client.create(**parameters)
> obj = async_poller.resource() # Get the initial resource information, but the object is not actually created yet.
> obj = await async_poller.result() # OPTIONAL
>
> My async_poller object being something like:
>
> class PollerOperation:
>     async def result(self):
>         ...async version of previous sync result()...
>
> So the questions are:
> - Does this seem a correct pattern?

Yes. This is the simple map to your old API, and is absolutely what I'd recommend doing in the first instance if you want to use coroutines.

> - Is there a simple way to achieve something like this:
>
> obj = await async_poller
>
> meaning, I can win the "result()" syntax and directly "await" on the object and get the result from magic function. I tried by subclassing some ABC coroutine/awaitable, but wasn't able to find a correct syntax. I'm not even sure this makes sense and respects the zen of Python?

There are a few other patterns you could use.

The first is to return a Future, and just always run the "polling" function in the background to resolve that future. If the caller doesn't care about the result they can just ignore the Future, and if they do care they can await on it. This has the downside of always requiring the I/O to poll, but is otherwise pretty clean.

Another option is to offer two functions, for example `def resource_nowait()` and `async def resource`. The caller can decide which they want to call based on whether they care about finding out the result. This is the clearest approach that doesn't trigger automatic extra work like the Future does, and it lacks magic: it's very declarative. This is a nice approach for keeping things clean.

Finally, you can create a magic awaitable object.
PEP 492 defines several ways to create an "awaitable", but one approach is to use what it calls a "Future-like object": that is, one with a __await__ method that returns an iterator. In this case, you'd do a very basic extension of the Future object by triggering the work upon the call of __await__ before delegating to the normal behaviour. This is an annoyingly precise thing to do, though technically do-able.

Cory

From chris.jerdonek at gmail.com  Wed Jul 12 16:27:15 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Wed, 12 Jul 2017 13:27:15 -0700
Subject: [Async-sig] Optional async method and best practices
In-Reply-To:
References:
Message-ID:

On Wed, Jul 12, 2017 at 9:44 AM, Laurent Mazuel via Async-sig wrote:
> @property
> def future(self):
>     if self._future is None:
>         self._future = asyncio.Future()
>         asyncio.ensure_future(self._poll())
>     return self._future
>
> result = await poller.future
>
> If I don't call the "future" attribute, I don't poll at all. My initial "create" method returns the same object anytime, and I don't need a parameter or another "nowait" method. Do you have any caveat for issues I don't see in this approach?

Hi Laurent, it seems like there is still an issue with this approach in that merely accessing / inspecting the property has the side effect of creating a coroutine object (calling self._poll()), and so can trigger the warning in innocent-looking code:

    loop = asyncio.get_event_loop()
    poller = PollerOperation()
    fut = poller.future  # creates coroutine object
    if False:
        loop.run_until_complete(fut)
    loop.close()

I'm not sure if others feel differently, but property access IMO shouldn't have possible side effects like this. If there are possible negative side effects, it should be a method call to indicate to the user that it is "doing" something that warrants more consideration.

--Chris

> Thank you very much!!!
>
> Laurent
>
> PS: Cory, I just subscribed to the mailing list :)
>
> -----Original Message-----
> From: Cory Benfield [mailto:cory at lukasa.co.uk]
> Sent: Wednesday, July 12, 2017 01:18
> To: Laurent Mazuel
> Cc: async-sig at python.org
> Subject: Re: [Async-sig] Optional async method and best practices
>
>> On 11 Jul 2017, at 23:26, Laurent Mazuel via Async-sig wrote:
>>
>> Hello,
>
> Hi Laurent! A future note: your message got stuck in moderation because you aren't subscribed to the mailing list. You may find it helpful to subscribe, as your future messages will also get stuck unless you do!
>
>> My first prototype was to split and return a tuple (resource, coroutine):
>>
>> obj, optional_poller = client.create(**parameters)
>> obj = await optional_poller # OPTIONAL
>>
>> But I got a warning if I decide to do not use this poller, RuntimeWarning: coroutine 'foo' was never awaited
>>
>> I was surprised honestly that I can't do that, since I feel like I'm not leaking anything. I didn't run the operation, so there is no wasted resource at my knowledge. But I remember wasting time because of a forgotten "yield from", so I guess it's fair :). But I would be curious to understand what I did badly.
> >> 2- Return my initial object with an async method. This allows me to write (something finally close to the current code): >> >> async_poller = client.create(**parameters) >> obj = async_poller.resource() # Get the initial resource information, but the object is not actually created yet. >> obj = await async_poller.result() # OPTIONAL >> >> My async_poller object being something like: >> >> class PollerOperation: >> async def result(self): >> ...async version of previous sync result()... >> >> So the questions are: >> - Does this seem a correct pattern? > > Yes. This is the simple map to your old API, and is absolutely what I?d recommend doing in the first instance if you want to use coroutines. > > >> - Is there a simple way to achieve something like this: >> >> obj = await async_poller >> >> meaning, I can win the "result()" syntax and directly "await" on the object and get the result from magic function. I tried by subclassing some ABC coroutine/awaitable, but wasn't able to find a correct syntax. I'm not even sure this makes sense and respects the zen of Python ? > > There are a few other patterns you could use. > > The first is to return a Future, and just always run the ?polling? function in the background to resolve that future. If the caller doesn?t care about the result they can just ignore the Future, and if they do care they can await on it. This has the downside of always requiring the I/O to poll, but is otherwise pretty clean. > > Another option is to offer two functions, for example `def resource_nowait()` and `async def resource`. The caller can decide which they want to call based on whether they care about finding out the result. This is the clearest approach that doesn?t trigger automatic extra work like the Future does, and it lacks magic: it?s very declarative. This is a nice approach for keeping things clean. > > Finally, you can create a magic awaitable object. 
> PEP 492 defines several ways to create an "awaitable", but one approach is to use what it calls a "Future-like object": that is, one with a __await__ method that returns an iterator. In this case, you'd do a very basic extension of the Future object by triggering the work upon the call of __await__ before delegating to the normal behaviour. This is an annoyingly precise thing to do, though technically do-able.
>
> Cory
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/

From lmazuel at microsoft.com  Wed Jul 12 16:40:16 2017
From: lmazuel at microsoft.com (Laurent Mazuel)
Date: Wed, 12 Jul 2017 20:40:16 +0000
Subject: [Async-sig] Optional async method and best practices
In-Reply-To:
References:
Message-ID:

Good point! I didn't see it. It will definitely happen. So I won't do that :)

So, my best candidate right now is the "get_resource(nowait)" two-methods approach.

Thank you!

Laurent

-----Original Message-----
From: Chris Jerdonek [mailto:chris.jerdonek at gmail.com]
Sent: Wednesday, July 12, 2017 13:27
To: Laurent Mazuel
Cc: Cory Benfield ; async-sig at python.org
Subject: Re: [Async-sig] Optional async method and best practices

On Wed, Jul 12, 2017 at 9:44 AM, Laurent Mazuel via Async-sig wrote:
> @property
> def future(self):
>     if self._future is None:
>         self._future = asyncio.Future()
>         asyncio.ensure_future(self._poll())
>     return self._future
>
> result = await poller.future
>
> If I don't call the "future" attribute, I don't poll at all. My initial "create" method returns the same object anytime, and I don't need a parameter or another "nowait" method. Do you have any caveat for issues I don't see in this approach?
Hi Laurent, it seems like there is still an issue with this approach in that merely accessing / inspecting the property has the side effect of creating a coroutine object (calling self._poll()), and so can trigger the warning in innocent-looking code:

    loop = asyncio.get_event_loop()
    poller = PollerOperation()
    fut = poller.future  # creates coroutine object
    if False:
        loop.run_until_complete(fut)
    loop.close()

I'm not sure if others feel differently, but property access IMO shouldn't have possible side effects like this. If there are possible negative side effects, it should be a method call to indicate to the user that it is "doing" something that warrants more consideration.

--Chris

> Thank you very much!!!
>
> Laurent
>
> PS: Cory, I just subscribed to the mailing list :)
>
> -----Original Message-----
> From: Cory Benfield [mailto:cory at lukasa.co.uk]
> Sent: Wednesday, July 12, 2017 01:18
> To: Laurent Mazuel
> Cc: async-sig at python.org
> Subject: Re: [Async-sig] Optional async method and best practices
>
>> On 11 Jul 2017, at 23:26, Laurent Mazuel via Async-sig wrote:
>>
>> Hello,
>
> Hi Laurent! A future note: your message got stuck in moderation because you aren't subscribed to the mailing list. You may find it helpful to subscribe, as your future messages will also get stuck unless you do!
>
>> My first prototype was to split and return a tuple (resource, coroutine):
>>
>> obj, optional_poller = client.create(**parameters)
>> obj = await optional_poller # OPTIONAL
>>
>> But I got a warning if I decide to do not use this poller,
>> RuntimeWarning: coroutine 'foo' was never awaited
>>
>> I was surprised honestly that I can't do that, since I feel like I'm not leaking anything. I didn't run the operation, so there is no wasted resource at my knowledge. But I remember wasting time because of a forgotten "yield from", so I guess it's fair :). But I would be curious to understand what I did badly.
>
> The assumption in asyncio, generally speaking, is that you do not create coroutines you do not care about running. This is both for abstract theoretical reasons (if you don't care if the coroutine is run or not, why not just optimise your code to never create the coroutine and save yourself the CPU cycles?) and for more concrete practical concerns (coroutines may own system resources and do cleanup in `finally` blocks, and if you don't await a coroutine then you'll never reach the `finally` block and so will leak system resources).
>
> Given that there's no computational way to be sure that *not* running a coroutine is safe (hello there halting problem), asyncio takes the pedantic model and says that not running a coroutine is a condition that justifies a warning. I think asyncio's position here is correct, incidentally.
>
>> 2- Return my initial object with an async method. This allows me to write (something finally close to the current code):
>>
>> async_poller = client.create(**parameters)
>> obj = async_poller.resource() # Get the initial resource information, but the object is not actually created yet.
>> obj = await async_poller.result() # OPTIONAL
>>
>> My async_poller object being something like:
>>
>> class PollerOperation:
>>     async def result(self):
>>         ...async version of previous sync result()...
>>
>> So the questions are:
>> - Does this seem a correct pattern?
>
> Yes. This is the simple map to your old API, and is absolutely what I'd recommend doing in the first instance if you want to use coroutines.
>
>> - Is there a simple way to achieve something like this:
>>
>> obj = await async_poller
>>
>> meaning, I can win the "result()" syntax and directly "await" on the
>> object and get the result from magic function. I tried by subclassing
>> some ABC coroutine/awaitable, but wasn't able to find a correct
>> syntax. I'm not even sure this makes sense and respects the zen of
>> Python?
>
> There are a few other patterns you could use.
>
> The first is to return a Future, and just always run the "polling" function in the background to resolve that future. If the caller doesn't care about the result they can just ignore the Future, and if they do care they can await on it. This has the downside of always requiring the I/O to poll, but is otherwise pretty clean.
>
> Another option is to offer two functions, for example `def resource_nowait()` and `async def resource`. The caller can decide which they want to call based on whether they care about finding out the result. This is the clearest approach that doesn't trigger automatic extra work like the Future does, and it lacks magic: it's very declarative. This is a nice approach for keeping things clean.
>
> Finally, you can create a magic awaitable object. PEP 492 defines several ways to create an "awaitable", but one approach is to use what it calls a "Future-like object": that is, one with a __await__ method that returns an iterator. In this case, you'd do a very basic extension of the Future object by triggering the work upon the call of __await__ before delegating to the normal behaviour. This is an annoyingly precise thing to do, though technically do-able.
>
> Cory
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/

From njs at pobox.com  Wed Jul 12 19:44:10 2017
From: njs at pobox.com (Nathaniel Smith)
Date: Wed, 12 Jul 2017 16:44:10 -0700
Subject: [Async-sig] Optional async method and best practices
In-Reply-To:
References:
Message-ID:

On Tue, Jul 11, 2017 at 3:26 PM, Laurent Mazuel via Async-sig wrote:
> Hello,
>
> I'm working currently with Brett Cannon to bring asyncio support to our SDK. We wanted to check with you one of the scenario, since we got a loooong discussion on it together :). And we want to do it using the best reasonable practice with your opinion.
>
> We have an api that is clearly async and will gain a lot to be converted to asyncio. However, it's a two-step operation. Operation 1 asks for the creation of a resource and is not async, operation 2 is *optional* and wait for completion of this creation (with nightmare threads currently and I removed a lot of code moving to asyncio - happiness). There is perfectly legit scenarios where operation 2 is not needed and avoid it is better, but it has to be prepared at the same time of operation 1. Current code looks like this:
>
> sync_poller = client.create(**parameters)
> obj = sync_poller.resource() # Get the initial resource information, but the object is not actually created yet.
> obj = sync_poller.result() # OPTIONAL. This is a blocking call with thread, if you want to wait for actual creation and get updated metadatas

My advice would be to

- think how you'd write the API if it were synchronous, and then do exactly that, except marking the blocking functions/methods as async

- pretend that the only allowed syntax for using await or coroutines is 'await fn(...)', and treat the 'await obj' syntax as something that only exists for compatibility with legacy code that uses explicit Future/Deferred objects.
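[Editor's note: the two rules above, applied to the thread's poller example, might look like this minimal sketch. All names (`create`, `Poller`, `resource`, `result`) follow the thread's illustrative API, not a real SDK, and `asyncio.sleep(0)` stands in for the real polling I/O.]

```python
import asyncio

class Poller:
    def __init__(self, params):
        self._params = params

    def resource(self):
        # Synchronous: returns the immediately-known metadata.
        return {"status": "creating", **self._params}

    async def result(self):
        # Asynchronous: waits for the service to finish creating.
        await asyncio.sleep(0)  # stands in for real polling I/O
        return {"status": "done", **self._params}

def create(**params):
    return Poller(params)

async def main():
    poller = create(name="demo")
    initial = poller.resource()    # plain call, no await
    final = await poller.result()  # 'await fn(...)' -- the one obvious way
    return initial, final

initial, final = asyncio.run(main())
```

The delta from the synchronous API is exactly the `async` and `await` keywords; a caller who skips `result()` never creates the coroutine, so no "never awaited" warning can fire.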
One of Python's famous design principles is "there should be one (and preferably only one) obvious way to do it". The word "obvious" there is important -- because programming is so flexible, and Python in particular is so flexible, there generally are tons and tons of ways to do anything in Python. Like, if you want to make an http request, that could be a function call, `requests.get(url)`. Or you could have a class that does the request when you access some property, `Request(url).doit`. Or you could have a class whose `__str__` method does the request, and you monkeypatch a StringIO in place of sys.stdout and use print, like `sys.stdout = io.StringIO(); print(Request(url), end=""); body = sys.stdout.getvalue()`. Why not, the computer doesn't care! These are all equally compliant with the Python language specification. Fortunately, we have some strong conventions about these things, and most of these options would never even occur to most people. *Of course* the obvious way to make an HTTP request is to call a function. The other options are ridiculous. And that makes life much easier, because we don't need to stop every time we implement some trivial function and be like "hmm, what if I did this using monkeypatching and two metaclasses? Would that be a good idea?", and it means that when you use a new library you (mostly) don't have to worry that they're secretly using monkeypatching and metaclasses to implement some trivial functionality, etc. Yay conventions. But *un*fortunately, async in Python is so new and shiny that we currently have all this flexibility, but we don't have conventions yet, so people try all kinds of wacky stuff and no-one's sure what to recommend. There's lots of ways to do it, but no-one knows which one is obvious. The nice thing about my rules above is that they give you one obvious way to do it, and minimize the delta between sync and async code. 
You already know how functions and synchronous APIs work, your users already know how functions and synchronous APIs work, all you need to add is some 'async' and 'await' keywords and you're good to go. I think forcing users to know what "coroutine objects" are before they can write async I/O code -- or even deal with the word "coroutine" at all -- is like forcing users to understand the nuances of metaclass property lookup and the historical mess around nb_add/sq_concat before they can use a+b to add two numbers. Obviously in both cases it's important that the full details are available for those who want to dig into it, but you shouldn't need that just to make some HTTP requests.

tl;dr: Treat async functions as a kind of function with a funny calling convention.

-n

--
Nathaniel J. Smith -- https://vorpus.org

From chris.jerdonek at gmail.com  Fri Jul 14 22:14:08 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Fri, 14 Jul 2017 19:14:08 -0700
Subject: [Async-sig] using asyncio in synchronous applications
In-Reply-To:
References:
Message-ID:

On Tue, Jul 11, 2017 at 2:35 PM, Chris Jerdonek wrote:
> On Tue, Jul 11, 2017 at 1:25 PM, Andrew Svetlov wrote:
>> Hmm. After rethinking I see `set_event_loop()` is required in your design.
>> But better to have `run(coro())` API, it could be implemented like
>>
>> def run(coro):
>>     loop = asyncio.new_event_loop()
>>     loop.run_until_complete(coro)
>>     loop.close()
>
> Hmm. This was confusing and surprising to me that it works. For
> example, calling asyncio.get_event_loop() inside run() returns a
> different loop than when calling from inside coro.
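[Editor's note: the behavior being discussed can be probed with a small sketch. This variant of the quoted `run()` helper also returns its loop, purely so the probe can compare; `which_loop` is an illustrative name.]

```python
import asyncio

def run(coro):
    # Variant of the quoted helper: create a fresh loop, run the
    # coroutine on it, close it -- never touching the default loop.
    loop = asyncio.new_event_loop()
    try:
        return loop, loop.run_until_complete(coro)
    finally:
        loop.close()

async def which_loop():
    # Inside a running coroutine, get_event_loop() returns the
    # *running* loop, even though set_event_loop() was never called.
    return asyncio.get_event_loop()

loop, inner = run(which_loop())
assert inner is loop
```

This is the behavior from asyncio PR #452 referenced below: `get_event_loop()` prefers the currently running loop over any loop installed with `set_event_loop()`.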
> > Then I remembered seeing something related to this, and found this: > https://github.com/python/asyncio/pull/452 > ("Make get_event_loop() return the current loop if called from > coroutines/callbacks") > > It might be good for the main get_event_loop() docs: > https://docs.python.org/3/library/asyncio-eventloops.html#asyncio.get_event_loop > to be updated to say that get_event_loop() returns the currently > running loop and not e.g. the loop last passed to set_event_loop(), > which is what the function names and current docs seem to suggest. For the record, I filed an issue about this here: http://bugs.python.org/issue30935 --Chris > > But thank you for your pattern, Andrew. I'm glad I asked. > > By the way, have people settled on the best practice boilerplate for > starting / cleaning up servers and loops, etc (e.g. as a gist)? It's > partly what I'm trying to work out. From this issue: > https://github.com/python/asyncio/pull/465 > it seems like there are some subtle issues that may not have been > decided, or maybe the path is clear but the sticking point is just > whether it should go in the standard library. > > Thanks, > --Chris > > >> >> The implementation doesn't touch default loop but `asyncio.get_event_loop()` >> call from `coro` returns a running loop instance. >> >> >> On Tue, Jul 11, 2017 at 10:12 PM Chris Jerdonek >> wrote: >>> >>> On Tue, Jul 11, 2017 at 10:20 AM, Andrew Svetlov >>> wrote: >>> > Why do you call set_event_loop() on Python 3.6 at all? >>> >>> Calling set_event_loop() at the end resets / sets things up for the >>> next invocation. That was part of my point. 
Without it, I get the >>> following error the next time I try to use the context manager (note >>> that I've chosen a better name for the manager here): >>> >>> with reset_loop_after(): >>> loop = asyncio.get_event_loop() >>> loop.run_until_complete(foo()) >>> >>> with reset_loop_after(): >>> loop = asyncio.get_event_loop() >>> loop.run_until_complete(foo()) >>> >>> Traceback (most recent call last): >>> ... >>> result = loop.run_until_complete(future) >>> File "/usr/local/lib/python3.6/asyncio/base_events.py", line >>> 443, in run_until_complete >>> self._check_closed() >>> File "/usr/local/lib/python3.6/asyncio/base_events.py", line >>> 357, in _check_closed >>> raise RuntimeError('Event loop is closed') >>> RuntimeError: Event loop is closed >>> >>> Remember that two of the three use cases I listed involve calling the >>> function multiple times throughout the process's lifetime. >>> >>> Is there a way that doesn't require calling set_event_loop()? >>> >>> --Chris >>> >>> >>> > On Tue, Jul 11, 2017, 17:56 Chris Jerdonek >>> > wrote: >>> >> >>> >> There's something I realized about "creating and destroying" ephemeral >>> >> event loops if you want to create temporary event loops over time in a >>> >> synchronous application. >>> >> >>> >> This wasn't clear to me at the beginning, but it's actually more >>> >> natural to do the reverse and "destroy and create," and **at the >>> >> end**: >>> >> >>> >> @contextmanager >>> >> def run_in_loop(): >>> >> try: >>> >> yield >>> >> finally: >>> >> loop = asyncio.get_event_loop() >>> >> loop.close() >>> >> loop = asyncio.new_event_loop() >>> >> asyncio.set_event_loop(loop) >>> >> >>> >> The reason is that at the beginning of an application, the event loop >>> >> starts out not closed. 
So if you start out by creating a new loop at >>> >> the beginning, you'll get a warning like the following: >>> >> >>> >> /usr/local/lib/python3.6/asyncio/base_events.py:509: >>> >> ResourceWarning: unclosed event loop <_UnixSelectorEventLoop >>> >> running=False closed=False debug=False> >>> >> >>> >> It's like the cycle is slightly out of phase. >>> >> >>> >> In contrast, if you create a new loop **at the end**, you're returning >>> >> the application to the neutral state it was at the beginning, namely >>> >> with a non-None loop that is neither running nor closed. >>> >> >>> >> I can think of three use cases for the context manager above: >>> >> >>> >> 1) for wrapping the "main" function of an application, >>> >> 2) for calling async functions from a synchronous app (even from >>> >> different threads), which is what I was originally asking about, and >>> >> 3) as part of a decorator around individual unit tests to guarantee >>> >> loop isolation. >>> >> >>> >> This seems like a really simple thing, but I haven't seen the pattern >>> >> above written down anywhere (e.g. in past discussions of >>> >> asyncio.run()). >>> >> >>> >> --Chris >>> >> >>> >> >>> >> On Mon, Jul 10, 2017 at 7:46 AM, Guido van Rossum >>> >> wrote: >>> >> > OK, then as long as close the connection and the loop properly it >>> >> > shouldn't >>> >> > be a problem, even multi-threaded. (You basically lose all advantage >>> >> > of >>> >> > async, but it seems you're fine with that.) >>> >> > >>> >> > On Sun, Jul 9, 2017 at 9:07 PM, Chris Jerdonek >>> >> > >>> >> > wrote: >>> >> >> >>> >> >> On Sun, Jul 9, 2017 at 9:00 PM, Guido van Rossum >>> >> >> wrote: >>> >> >> > But the big question is, what is that library doing for you? In >>> >> >> > the >>> >> >> > abstract >>> >> >> > it is hard to give you a good answer. What library is it? What >>> >> >> > calls >>> >> >> > are >>> >> >> > you >>> >> >> > making? 
>>> >> >> >>> >> >> It's the websockets library: https://github.com/aaugustin/websockets >>> >> >> >>> >> >> All I really need to do is occasionally connect briefly to a >>> >> >> websocket >>> >> >> server as a client from a synchronous app. >>> >> >> >>> >> >> Since I'm already using the library on the server-side, I thought >>> >> >> I'd >>> >> >> save myself the trouble of having to use two libraries and just use >>> >> >> the same library on the client side as well. >>> >> >> >>> >> >> --Chris >>> >> >> >>> >> >> >>> >> >> >>> >> >> >>> >> >> > >>> >> >> > On Sun, Jul 9, 2017 at 8:48 PM, Chris Jerdonek >>> >> >> > >>> >> >> > wrote: >>> >> >> >> >>> >> >> >> I have a two-part question. >>> >> >> >> >>> >> >> >> If my application is single-threaded and synchronous (e.g. a web >>> >> >> >> app >>> >> >> >> using Gunicorn with sync workers [1]), and occasionally I need to >>> >> >> >> call >>> >> >> >> functions in a library that requires an event loop, is there any >>> >> >> >> downside to creating and closing the loop on-the-fly only when I >>> >> >> >> call >>> >> >> >> the function? In other words, is creating and destroying loops >>> >> >> >> cheap? >>> >> >> >> >>> >> >> >> Second, if I were to switch to a multi-threaded model (e.g. >>> >> >> >> Gunicorn >>> >> >> >> with async workers), is my only option to start the loop at the >>> >> >> >> beginning of the process, and use loop.call_soon_threadsafe()? Or >>> >> >> >> can >>> >> >> >> I do what I was asking about above and create and close loops >>> >> >> >> on-the-fly in different threads? Is either approach much more >>> >> >> >> efficient than the other? 
>>> >> >> Thanks,
>>> >> >> --Chris
>>> >> >>
>>> >> >> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers
>>> >> >> _______________________________________________
>>> >> >> Async-sig mailing list
>>> >> >> Async-sig at python.org
>>> >> >> https://mail.python.org/mailman/listinfo/async-sig
>>> >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/
>>> >> >
>>> >> >
>>> >> >
>>> >> > --
>>> >> > --Guido van Rossum (python.org/~guido)
>>> >
>>> >
>>> >
>>> > --
>>> > --Guido van Rossum (python.org/~guido)
>>> _______________________________________________
>>> Async-sig mailing list
>>> Async-sig at python.org
>>> https://mail.python.org/mailman/listinfo/async-sig
>>> Code of Conduct: https://www.python.org/psf/codeofconduct/
>>
>> --
>> Thanks,
>> Andrew Svetlov

From gmludo at gmail.com  Mon Jul 17 06:49:40 2017
From: gmludo at gmail.com (Ludovic Gasc)
Date: Mon, 17 Jul 2017 12:49:40 +0200
Subject: [Async-sig] using asyncio in synchronous applications
In-Reply-To:
References:
Message-ID:

Hi Chris,

I don't know your technical or efficiency constraints; however, if you want to be 100% sure that you won't have any side effects between sync and async code, you might have two daemons, one for each pattern, and use a microservice approach to exchange messages. Especially if you plan to use WebSockets, it would help you.

Up to you to decide the easiest approach for you.

Regards.
--
Ludovic Gasc (GMLudo)
Lead Developer Architect at ALLOcloud
https://be.linkedin.com/in/ludovicgasc

2017-07-10 5:48 GMT+02:00 Chris Jerdonek:
> I have a two-part question.
>
> If my application is single-threaded and synchronous (e.g.
> a web app using Gunicorn with sync workers [1]), and occasionally I
> need to call functions in a library that requires an event loop, is
> there any downside to creating and closing the loop on-the-fly only
> when I call the function? In other words, is creating and destroying
> loops cheap?
>
> Second, if I were to switch to a multi-threaded model (e.g. Gunicorn
> with async workers), is my only option to start the loop at the
> beginning of the process, and use loop.call_soon_threadsafe()? Or can
> I do what I was asking about above and create and close loops
> on-the-fly in different threads? Is either approach much more
> efficient than the other?
>
> Thanks,
> --Chris
>
> [1] http://docs.gunicorn.org/en/latest/design.html#sync-workers
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.jerdonek at gmail.com Thu Jul 27 23:24:15 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Thu, 27 Jul 2017 20:24:15 -0700
Subject: [Async-sig] question re: loop.shutdown_asyncgens()
Message-ID:

I have a question about PEP 525 (Asynchronous Generators) which I'm
sure has a simple answer, but I didn't see it in the PEP or final
discussion:
https://mail.python.org/pipermail/python-dev/2016-September/146265.html

Basically, why is the API such that loop.shutdown_asyncgens() must be
called manually? For example, why can't it be called automatically as
part of close(), which seems like it would be a friendlier API and
more helpful to the common case?

I was trying asynchronous iterators in my code and getting the
following error:

Exception ignored in: Traceback (most recent call last):
  File "/usr/local/lib/python3.6/asyncio/queues.py", line 169, in get
    getter.cancel()  # Just in case getter is not done yet.
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 574, in call_soon
    self._check_closed()
  File "/usr/local/lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

Calling loop.shutdown_asyncgens() made the error go away, but it seems
a little obscure that, by adding an asynchronous iterator somewhere in
your code, you have to remember to check that that line is present
before loop.close() is called (and the exception message doesn't
provide a good hint).

Is there any disadvantage to always calling loop.shutdown_asyncgens()
(i.e. even if it's not needed)? And why might someone need to call it
at a different time?

Thanks,
--Chris

From yselivanov at gmail.com Thu Jul 27 23:40:40 2017
From: yselivanov at gmail.com (Yury Selivanov)
Date: Thu, 27 Jul 2017 23:40:40 -0400
Subject: [Async-sig] question re: loop.shutdown_asyncgens()
In-Reply-To: References: Message-ID: <37549fdf-0c2d-482a-8ebb-317600e66117@Spark>

One of the design decisions about `loop.close()` is that it doesn't
do a single event loop iteration, making its behaviour highly
predictable. To make `loop.close()` run `loop.shutdown_asyncgens()`
(which is a coroutine), we would have needed to change that.

One of the ways we want to mitigate this problem in Python 3.7 is to
add a new function to bootstrap asyncio and run top-level coroutines:
`asyncio.run()`. You can read more about it here: [1].

I'm working on a new PEP that will summarize asyncio changes in 3.7.
I don't have a concrete ETA for it, but I'll try to get the first
draft out by mid September.
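For readers following along, this is a minimal sketch of the pre-3.7
cleanup boilerplate being discussed, on a Python 3.6 loop where
`close()` never runs loop iterations; the `main()` coroutine here is a
placeholder for application code, not something from the thread:

```python
import asyncio


async def main():
    # Placeholder for application code; any top-level coroutine works.
    await asyncio.sleep(0)
    return "done"


# shutdown_asyncgens() is itself a coroutine, so it has to be awaited
# explicitly before close(); putting it in the finally block ensures
# async generators are finalized even if main() raises.
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(main())
finally:
    loop.run_until_complete(loop.shutdown_asyncgens())
    loop.close()

print(result)
```

This is roughly the sequence that the proposed `asyncio.run()` would
wrap up on the caller's behalf.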
[1] https://github.com/python/asyncio/pull/465

Thanks,
Yury

On Jul 27, 2017, 11:24 PM -0400, Chris Jerdonek wrote:
> I have a question about PEP 525 (Asynchronous Generators) which I'm
> sure has a simple answer, but I didn't see it in the PEP or final
> discussion:
> https://mail.python.org/pipermail/python-dev/2016-September/146265.html
>
> Basically, why is the API such that loop.shutdown_asyncgens() must be
> called manually? For example, why can't it be called automatically as
> part of close(), which seems like it would be a friendlier API and
> more helpful to the common case?
>
> I was trying asynchronous iterators in my code and getting the
> following error:
>
> Exception ignored in: Traceback (most recent call last):
>   File "/usr/local/lib/python3.6/asyncio/queues.py", line 169, in get
>     getter.cancel()  # Just in case getter is not done yet.
>   File "/usr/local/lib/python3.6/asyncio/base_events.py", line 574, in call_soon
>     self._check_closed()
>   File "/usr/local/lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
>     raise RuntimeError('Event loop is closed')
> RuntimeError: Event loop is closed
>
> Calling loop.shutdown_asyncgens() made the error go away, but it seems
> a little obscure that by adding an asynchronous iterator somewhere in
> your code, you have to remember to check that that line is present
> before loop.close() is called (and the exception message doesn't
> provide a good hint).
>
> Is there any disadvantage to always calling loop.shutdown_asyncgens()
> (i.e. even if it's not needed)? And why might someone need to call it
> at a different time?
>
> Thanks,
> --Chris
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/
From chris.jerdonek at gmail.com Fri Jul 28 16:38:22 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Fri, 28 Jul 2017 13:38:22 -0700
Subject: [Async-sig] question re: loop.shutdown_asyncgens()
In-Reply-To: <37549fdf-0c2d-482a-8ebb-317600e66117@Spark>
References: <37549fdf-0c2d-482a-8ebb-317600e66117@Spark>
Message-ID:

Thanks, Yury. Have you also considered including recommended setup /
cleanup boilerplate in a place where it's easy for asyncio users to
find, like in the asyncio docs here?
https://docs.python.org/3/library/asyncio-eventloop.html#run-an-event-loop

One example of a Python module using this approach is itertools:
https://docs.python.org/3/library/itertools.html#itertools-recipes

Currently, even the example snippet provided for loop.shutdown_asyncgens():
https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.shutdown_asyncgens
is incomplete, because it doesn't execute shutdown_asyncgens() in a
try-finally like you do in your latest patch posted on PR #465.

Also, even if run() is added to Python 3.7, Python 3.6 users would
still need / benefit from being able to find blessed boilerplate in a
central place.

Thanks,
--Chris

On Thu, Jul 27, 2017 at 8:40 PM, Yury Selivanov wrote:
> One of the design decisions about `loop.close()` is that it doesn't
> do a single event loop iteration, making its behaviour highly predictable.
> To make `loop.close()` run `loop.shutdown_asyncgens()` (which is a
> coroutine), we would have needed to change that.
>
> One of the ways we want to mitigate this problem in Python 3.7 is to
> add a new function to bootstrap asyncio and run top-level coroutines:
> `asyncio.run()`. You can read more about it here: [1].
>
> I'm working on a new PEP that will summarize asyncio changes in 3.7.
> I don't have a concrete ETA for it, but I'll try to get the first draft out
> by mid September.
>
> [1] https://github.com/python/asyncio/pull/465
>
> Thanks,
> Yury
>
> On Jul 27, 2017, 11:24 PM -0400, Chris Jerdonek wrote:
>
> I have a question about PEP 525 (Asynchronous Generators) which I'm
> sure has a simple answer, but I didn't see it in the PEP or final
> discussion:
> https://mail.python.org/pipermail/python-dev/2016-September/146265.html
>
> Basically, why is the API such that loop.shutdown_asyncgens() must be
> called manually? For example, why can't it be called automatically as
> part of close(), which seems like it would be a friendlier API and
> more helpful to the common case?
>
> I was trying asynchronous iterators in my code and getting the following
> error:
>
> Exception ignored in: Traceback (most recent call last):
> File "/usr/local/lib/python3.6/asyncio/queues.py", line 169, in get
> getter.cancel() # Just in case getter is not done yet.
> File "/usr/local/lib/python3.6/asyncio/base_events.py", line 574, in call_soon
> self._check_closed()
> File "/usr/local/lib/python3.6/asyncio/base_events.py", line 357, in _check_closed
> raise RuntimeError('Event loop is closed')
> RuntimeError: Event loop is closed
>
> Calling loop.shutdown_asyncgens() made the error go away, but it seems
> a little obscure that by adding an asynchronous iterator somewhere in
> your code, you have to remember to check that that line is present
> before loop.close() is called (and the exception message doesn't
> provide a good hint).
>
> Is there any disadvantage to always calling loop.shutdown_asyncgens()
> (i.e. even if it's not needed)? And why might someone need to call it
> at a different time?
>
> Thanks,
> --Chris
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/
From yselivanov at gmail.com Fri Jul 28 16:45:40 2017
From: yselivanov at gmail.com (Yury Selivanov)
Date: Fri, 28 Jul 2017 16:45:40 -0400
Subject: [Async-sig] question re: loop.shutdown_asyncgens()
In-Reply-To: References: <37549fdf-0c2d-482a-8ebb-317600e66117@Spark>
Message-ID: <1aeafffe-d40c-4007-9a08-15d0cdadb2ce@Spark>

Thanks,
Yury

On Jul 28, 2017, 4:38 PM -0400, Chris Jerdonek wrote:
> Thanks, Yury. Have you also considered including recommended setup /
> cleanup boilerplate in a place where it's easy for asyncio users to find,
> like in the asyncio docs here?
> https://docs.python.org/3/library/asyncio-eventloop.html#run-an-event-loop

Yes, a PR would be welcome!

> One example of a Python module using this approach is itertools:
> https://docs.python.org/3/library/itertools.html#itertools-recipes
>
> Currently, even the example snippet provided for loop.shutdown_asyncgens():
> https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.shutdown_asyncgens
> is incomplete because it doesn't execute shutdown_asyncgens() in a
> try-finally like you do in your latest patch posted on PR #465.
>
> Also, even if run() is added to Python 3.7, Python 3.6 users would still
> need / benefit from being able to find blessed boilerplate in a central
> place.

I was going to release a new module on PyPI called "asyncio_next" or
something with backports (and to experiment with the proposed APIs
before 3.7 is out).

Yury

From chris.jerdonek at gmail.com Fri Jul 28 16:57:11 2017
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Fri, 28 Jul 2017 13:57:11 -0700
Subject: [Async-sig] question re: loop.shutdown_asyncgens()
In-Reply-To: <1aeafffe-d40c-4007-9a08-15d0cdadb2ce@Spark>
References: <37549fdf-0c2d-482a-8ebb-317600e66117@Spark>
 <1aeafffe-d40c-4007-9a08-15d0cdadb2ce@Spark>
Message-ID:

Thanks! I'll try to find time to propose a PR.
Also, for suggestions around the new API, would you prefer that be
posted to PR #465, or can it be done here?

--Chris

On Fri, Jul 28, 2017 at 1:45 PM, Yury Selivanov wrote:
>
> Thanks,
> Yury
>
> On Jul 28, 2017, 4:38 PM -0400, Chris Jerdonek wrote:
>
> Thanks, Yury. Have you also considered including recommended setup /
> cleanup boilerplate in a place where it's easy for asyncio users to find,
> like in the asyncio docs here?
> https://docs.python.org/3/library/asyncio-eventloop.html#run-an-event-loop
>
> Yes, a PR would be welcome!
>
> One example of a Python module using this approach is itertools:
> https://docs.python.org/3/library/itertools.html#itertools-recipes
>
> Currently, even the example snippet provided for loop.shutdown_asyncgens():
> https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.shutdown_asyncgens
> is incomplete because it doesn't execute shutdown_asyncgens() in a
> try-finally like you do in your latest patch posted on PR #465.
>
> Also, even if run() is added to Python 3.7, Python 3.6 users would still
> need / benefit from being able to find blessed boilerplate in a central
> place.
>
> I was going to release a new module on PyPI called "asyncio_next" or
> something with backports (and to experiment with the proposed APIs before
> 3.7 is out).
>
> Yury

From yselivanov at gmail.com Fri Jul 28 16:58:03 2017
From: yselivanov at gmail.com (Yury Selivanov)
Date: Fri, 28 Jul 2017 16:58:03 -0400
Subject: [Async-sig] question re: loop.shutdown_asyncgens()
In-Reply-To: References: <37549fdf-0c2d-482a-8ebb-317600e66117@Spark>
 <1aeafffe-d40c-4007-9a08-15d0cdadb2ce@Spark>
Message-ID:

On Jul 28, 2017, 4:57 PM -0400, Chris Jerdonek wrote:
> Thanks! I'll try to find time to propose a PR.
>
> Also, for suggestions around the new API, would you prefer that be
> posted to PR #465, or can it be done here?
I think we can discuss it here, but up to you.

Yury