From twisteroid.ambassador at gmail.com Thu May 3 12:03:19 2018
From: twisteroid.ambassador at gmail.com (twisteroid ambassador)
Date: Fri, 4 May 2018 00:03:19 +0800
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
Message-ID:

Hi,

tl;dr: coroutine functions and regular functions returning Futures behave differently: the latter may start running immediately, without being scheduled on a loop, or even with no loop running. This might be bad, since the two are sometimes advertised to be interchangeable.

I find that sometimes I want to construct a coroutine object, store it for some time, and run it later. Most times it works like one would expect: I call a coroutine function, which gives me a coroutine object; I hold on to the coroutine object; I later await it or use loop.create_task(), asyncio.gather(), etc. on it, and only then does it start to run.

However, I have found some cases where the "coroutine" starts running immediately. The first example is loop.run_in_executor(). I guess this is somewhat unsurprising, since the passed function doesn't actually run in the event loop. Demonstrated below with strace and the interactive console:

$ strace -e connect -f python3
Python 3.6.5 (default, Apr 4 2018, 15:01:18)
[GCC 7.3.1 20180303 (Red Hat 7.3.1-5)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> import socket
>>> s = socket.socket()
>>> loop = asyncio.get_event_loop()
>>> coro = loop.sock_connect(s, ('127.0.0.1', 80))
>>> loop.run_until_complete(asyncio.sleep(1))
>>> task = loop.create_task(coro)
>>> loop.run_until_complete(asyncio.sleep(1))
connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)
>>> s.close()
>>> s = socket.socket()
>>> coro2 = loop.run_in_executor(None, s.connect, ('127.0.0.1', 80))
strace: Process 13739 attached
>>> [pid 13739] connect(3, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)

>>> coro2
<Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/lib64/python3.6/asyncio/futures.py:403]>
>>> loop.run_until_complete(asyncio.sleep(1))
>>> coro2
<Future finished exception=ConnectionRefusedError(111, 'Connection refused')>
>>>

Note that with loop.sock_connect(), the connect syscall is only run after loop.create_task() is called on the coroutine AND the loop is running. On the other hand, as soon as loop.run_in_executor() is called on socket.connect, the connect syscall gets called, without the event loop running at all.

Another such case is with Python 3.4.2, where even loop.sock_connect() will run immediately:

$ strace -e connect -f python3
Python 3.4.2 (default, Oct 8 2014, 10:45:20)
[GCC 4.9.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> import asyncio
>>> loop = asyncio.get_event_loop()
>>> s = socket.socket()
>>> c = loop.sock_connect(s, ('127.0.0.1', 82))
connect(7, {sa_family=AF_INET, sin_port=htons(82), sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection refused)
>>> c
<Future finished exception=ConnectionRefusedError(111, 'Connection refused')>
>>>

In both these cases, the misbehaving "coroutines" aren't actually defined as coroutine functions, but as regular functions returning a Future, which is probably why they don't act like coroutines. However, coroutine functions and regular functions returning Futures are often used interchangeably: Python docs Section 18.5.3.1 even says:

> Note: In this documentation, some methods are documented as coroutines, even if they are plain Python functions returning a Future. This is intentional to have a freedom of tweaking the implementation of these functions in the future.

In particular, both run_in_executor() and sock_connect() are documented as coroutines.

If an asyncio API may change from a function returning a Future to a coroutine function, and vice versa, at any time, then one cannot rely on creating the "coroutine object" not running the coroutine immediately. This seems like an important gotcha waiting to bite someone.

Back to the scenario in the beginning. If I want to write a function that takes coroutine objects and schedules them to run later, and some of those coroutine objects turn out to be misbehaving like above, then they will run too early. To avoid this, I could either 1. pass the coroutine functions and their arguments separately, "callback style", 2. use functools.partial or lambdas, or 3. always pass in real coroutine objects returned from coroutine functions defined with "async def". Does this sound right?

Thanks,

twistero
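A minimal, runnable sketch of the difference described above, written against Python 3.6-era asyncio as used in the transcripts; the names lazy_work and eager_work are illustrative, not asyncio APIs:

    import asyncio
    import time

    async def lazy_work():
        # The body of an "async def" coroutine only runs once the coroutine
        # object is awaited or wrapped in a Task by a running loop.
        print("lazy_work ran")

    def eager_work(loop):
        # A plain function returning a Future: the callable is submitted to
        # the default executor immediately, before the loop ever runs.
        return loop.run_in_executor(None, lambda: print("eager_work ran"))

    loop = asyncio.get_event_loop()
    coro = lazy_work()      # nothing printed yet
    fut = eager_work(loop)  # "eager_work ran" appears almost immediately
    time.sleep(0.5)         # give the executor thread a moment to make the point
    print("the loop has not run yet")
    loop.run_until_complete(asyncio.gather(coro, fut))  # only now does lazy_work run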
From andrew.svetlov at gmail.com Thu May 3 12:37:59 2018
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Thu, 03 May 2018 16:37:59 +0000
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

What real problem do you want to solve?
Correct code should always use `await loop.sock_connect(sock, addr)`; in this case the behavior difference never hurts you.

On Thu, May 3, 2018 at 7:04 PM twisteroid ambassador <twisteroid.ambassador at gmail.com> wrote:
> [...]

--
Thanks,
Andrew Svetlov

From gvanrossum at gmail.com Thu May 3 16:24:04 2018
From: gvanrossum at gmail.com (Guido van Rossum)
Date: Thu, 03 May 2018 20:24:04 +0000
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

Depending on the coroutine *not* running sounds like asking for trouble.

On Thu, May 3, 2018, 09:38 Andrew Svetlov wrote:
> [...]
From chris.jerdonek at gmail.com Thu May 3 16:56:12 2018
From: chris.jerdonek at gmail.com (Chris Jerdonek)
Date: Thu, 3 May 2018 13:56:12 -0700
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

It would probably be hard for people to find at this point all the places they might be relying on this behavior (if anywhere), but isn't this a basic documented property of coroutines?

From the introduction section on coroutines [1]:

> Calling a coroutine does not start its code running - the coroutine object returned by the call doesn't do anything until you schedule its execution. There are two basic ways to start it running: call await coroutine or yield from coroutine from another coroutine (assuming the other coroutine is already running!), or schedule its execution using the ensure_future() function or the AbstractEventLoop.create_task() method.
>
> Coroutines (and tasks) can only run when the event loop is running.

[1]: https://docs.python.org/3/library/asyncio-task.html#coroutines

--Chris

On Thu, May 3, 2018 at 1:24 PM, Guido van Rossum wrote:
> [...]
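A compact illustration of the two ways the quoted documentation describes, sketched against Python 3.6-era asyncio; child and main are made-up names:

    import asyncio

    async def child():
        return 42

    async def main(loop):
        a = await child()                 # way 1: await it from another running coroutine
        task = loop.create_task(child())  # way 2: schedule it with create_task()/ensure_future()
        b = await task                    # ...and it only makes progress while the loop runs
        return a + b

    loop = asyncio.get_event_loop()
    print(loop.run_until_complete(main(loop)))  # 84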
From andrew.svetlov at gmail.com Thu May 3 17:25:33 2018
From: andrew.svetlov at gmail.com (Andrew Svetlov)
Date: Thu, 03 May 2018 21:25:33 +0000
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

I doubt if we should specify such things very explicitly. Call it an "implementation detail" :)

FYI, in Python 3.7 all `sock_*()` methods are native coroutines now. `run_in_executor()` is a regular function that returns a future object. I don't remember whether it is the only exception or whether asyncio has other functions with such a return type.

On Thu, May 3, 2018 at 11:56 PM Chris Jerdonek wrote:
> [...]

--
Thanks,
Andrew Svetlov
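Given that mix, one way to get uniformly lazy behavior - whether the underlying API is a native coroutine or a plain function returning a Future - is to wrap the call in an "async def" of your own. A sketch; eager_connect and lazy_connect are illustrative names, not asyncio APIs:

    import asyncio

    def eager_connect(loop, sock, addr):
        # Plain function: the work is handed to the executor the moment this
        # is called, whether or not the loop is running.
        return loop.run_in_executor(None, sock.connect, addr)

    async def lazy_connect(loop, sock, addr):
        # Wrapping the same call in "async def" restores coroutine semantics:
        # nothing is submitted until this coroutine object is awaited or
        # turned into a Task on a running loop.
        return await loop.run_in_executor(None, sock.connect, addr)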
From twisteroid.ambassador at gmail.com Thu May 3 21:33:31 2018
From: twisteroid.ambassador at gmail.com (twisteroid ambassador)
Date: Fri, 4 May 2018 09:33:31 +0800
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

The real problem I'm playing with is implementing "happy eyeballs", where I may have several sockets attempting to connect simultaneously, and the first one to successfully connect gets used. I had the idea of preparing all of the loop.sock_connect() coroutine objects in advance, and scheduling them one by one on the loop, but wanted to make double sure that the sockets won't start connecting before the coroutines are scheduled. I wanted to write something like this:

successful_socket = await staggered_start([loop.sock_connect(socket.socket(), addr) for addr in addresses])

where async def staggered_start(coros) is some kind of reusable scheduling logic. As it turns out, I can't actually depend on loop.sock_connect() doing the Right Thing (TM) if I want to support Python 3.4.

On Fri, May 4, 2018 at 12:37 AM, Andrew Svetlov wrote:
> [...]
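The early-start problem in the snippet above goes away if staggered_start receives connection factories (zero-argument callables) instead of ready-made awaitables, because the attempts are then only created inside the scheduler. A minimal sketch of just that hand-off, with made-up names (start_when_told, fake_connect); a sketch of the actual staggered race appears further down the thread:

    import asyncio
    import functools

    async def start_when_told(factories):
        # Each factory is a zero-argument callable returning an awaitable.
        # The awaitable is only created - and any eager, Future-returning API
        # only starts its work - when the factory is called right here.
        results = []
        for factory in factories:
            results.append(await factory())
        return results

    async def fake_connect(addr):
        await asyncio.sleep(0.1)
        return addr

    loop = asyncio.get_event_loop()
    factories = [functools.partial(fake_connect, addr)
                 for addr in ['10.0.0.1', '10.0.0.2']]
    print(loop.run_until_complete(start_when_told(factories)))
    # ['10.0.0.1', '10.0.0.2']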
From twisteroid.ambassador at gmail.com Thu May 3 21:46:25 2018
From: twisteroid.ambassador at gmail.com (twisteroid ambassador)
Date: Fri, 4 May 2018 09:46:25 +0800
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

Then perhaps it's safest to treat them the same way as plain functions, i.e. callbacks, and pass them around the old way, using partials, lambdas, separate arguments for the coroutine function and its arguments, etc.

Aww, suddenly coroutines don't feel as sexy as before. (j/k)

On Fri, May 4, 2018 at 4:24 AM, Guido van Rossum wrote:
> [...]
From dimaqq at gmail.com Thu May 3 21:52:32 2018
From: dimaqq at gmail.com (Dima Tisnek)
Date: Fri, 04 May 2018 01:52:32 +0000
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

My 2c: don't use py3.4; in fact don't use 3.5 either :)
If you decide to support older Python versions, it's only fair that a separate implementation may be needed.

Re: overall problem, why not try the following: wrap your individual tasks in async def, where each one staggers, connects and resolves, and handles cancellation (if it didn't win the race). IMO that's easier to reason about and debug, and it works around your problem ;)

On Fri, 4 May 2018 at 9:34 AM, twisteroid ambassador <twisteroid.ambassador at gmail.com> wrote:
> [...]
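A sketch of that suggestion against Python 3.6-era asyncio: each attempt is its own "async def" that staggers its start and performs the connection, the attempts are raced, and the losers are cancelled. The names staggered_start, _attempt and connect_factories are illustrative, and a real happy-eyeballs implementation needs more careful error reporting and cleanup than this:

    import asyncio

    async def _attempt(factory, start_delay):
        # One attempt: stagger its start, then create and await the actual
        # connection coroutine/future by calling the factory.
        await asyncio.sleep(start_delay)
        return await factory()

    async def staggered_start(connect_factories, delay=0.3):
        # connect_factories are zero-argument callables returning awaitables,
        # e.g. functools.partial(loop.sock_connect, sock, addr).
        tasks = [asyncio.ensure_future(_attempt(f, i * delay))
                 for i, f in enumerate(connect_factories)]
        errors = []
        try:
            for finished in asyncio.as_completed(tasks):
                try:
                    return await finished   # first attempt to succeed wins
                except Exception as exc:
                    errors.append(exc)      # this attempt lost; keep waiting
            raise ConnectionError('all %d attempts failed' % len(errors))
        finally:
            for task in tasks:
                task.cancel()               # cancel whatever is still pending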
From yarkot1 at gmail.com Fri May 4 00:11:26 2018
From: yarkot1 at gmail.com (Yarko Tymciurak)
Date: Thu, 3 May 2018 23:11:26 -0500
Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop
In-Reply-To:
References:
Message-ID:

On Thu, May 3, 2018 at 8:52 PM, Dima Tisnek wrote:

> My 2c: don't use py3.4; in fact don't use 3.5 either :)
> If you decide to support older Python versions, it's only fair that a
> separate implementation may be needed.

I'd agree - focus on Python 3.6+.

> Re: overall problem, why not try the following:
> wrap your individual tasks in async def, where each one staggers, connects
> and resolves, and handles cancellation (if it didn't win the race).
> IMO that's easier to reason about and debug, and it works around your problem ;)
>
> On Fri, 4 May 2018 at 9:34 AM, twisteroid ambassador <twisteroid.ambassador at gmail.com> wrote:
>> The real problem I'm playing with is implementing "happy eyeballs",
>> where I may have several sockets attempting to connect simultaneously,
>> and the first one to successfully connect gets used. I had the idea of

Simpler is better ... this isn't an asyncio example, but maybe the readability (ymmv - for me, very clearly readable) is worth a ponder:
https://github.com/dabeaz/curio/blob/master/README.rst#a-complex-example

>> [...]
This is >> >> > intentional to have a freedom of tweaking the implementation of these >> >> > functions in the future. >> >> >> >> In particular, both run_in_executor() and sock_connect() are >> >> documented as coroutines. >> >> >> >> If an asyncio API may change from a function returning Future to a >> >> coroutine function and vice versa any time, then one cannot rely on >> >> the behavior of creating the "coroutine object" not running the >> >> coroutine immediately. This seems like an important Gotcha waiting to >> >> bite someone. >> >> >> >> Back to the scenario in the beginning. If I want to write a function >> >> that takes coroutine objects and schedule them to run later, and some >> >> coroutine objects turn out to be misbehaving like above, then they >> >> will run too early. To avoid this, I could either 1. pass the >> >> coroutine functions and their arguments separately "callback style", >> >> 2. use functools.partial or lambdas, or 3. always pass in real >> >> coroutine objects returned from coroutine functions defined with >> >> "async def". Does this sound right? >> >> >> >> Thanks, >> >> >> >> twistero >> >> _______________________________________________ >> >> Async-sig mailing list >> >> Async-sig at python.org >> >> https://mail.python.org/mailman/listinfo/async-sig >> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > >> > -- >> > Thanks, >> > Andrew Svetlov >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > > _______________________________________________ > Async-sig mailing list > Async-sig at python.org > https://mail.python.org/mailman/listinfo/async-sig > Code of Conduct: https://www.python.org/psf/codeofconduct/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alicemail at loveisanalogue.info Fri May 4 03:45:04 2018 From: alicemail at loveisanalogue.info (Alice Heaton) Date: Fri, 4 May 2018 08:45:04 +0100 Subject: [Async-sig] ANN: miniasync, a small library build on top of asyncio for simple use cases Message-ID: <28f2080c-8b88-3c3e-79ef-763e4fc2d640@loveisanalogue.info> Hello, I have been working on miniasync, a small library build on top of asyncio to facilitate the occasional use of async code in otherwise synchronous applications. A typical use case is an otherwise synchronous application, which at some point wants to send notifications to multiple online services. miniasync makes it easier to implement this asynchronously without having to write asyncio boilerplate. http://miniasync.readthedocs.io asyncio is vast, powerful and flexible. But there is a learning curve - you need to understand the event loop api (other async languages like Javascript don't expose that), how to create it and run tasks on it, which means you need to understand about Tasks and Futures, etc. The flexibility is great, and gives Python developers a lot of scope for implementing advanced applications, but it's a put off for simple use cases. 
miniasync exposes a single function, "miniasync.run", which takes a list
of coroutine objects, creates a local event loop, runs all the
co-routines until they're complete, and returns their result in the
order they were defined:

import aiofiles
import miniasync

async def get_file_content(filename):
    async with aiofiles.open(filename, mode='r') as f:
        return await f.read()

results = miniasync.run(
    get_file_content('file1.txt'),
    get_file_content('file2.txt'),
)

assert results == [
    '',
    ''
]

This is similar to using a combination of gather and run_until_complete,
but as a single step and without explicit reference to the loop. This
also differs from gather in that:

- Instead of saying you either want exceptions raised or returned, you
need to explicitly list the exceptions you want returned; all other
exceptions are raised (explicit is better than implicit);
- Unhandled exceptions cause all other tasks to be cancelled.

miniasync.run always creates a new loop (so you can nest invocations of
miniasync.run, and rely on each invocation only executing its
parameters). For the cases where you need the loop before running (e.g.
for creating an asyncio.Queue object to be shared amongst your
co-routines), miniasync also exposes a context manager that creates a
loop and lets you run co-routines on it:

import asyncio
import miniasync

async def coro1(q):
    q.put_nowait('world')
    return 'hello'

async def coro2(q):
    return await q.get()

with miniasync.loop() as loop:
    q = asyncio.Queue()
    results = loop.run(
        coro1(q),
        coro2(q)
    )

assert results == ['hello', 'world']

You can pip install miniasync to try it out, or read the docs on
readthedocs (as above).

For now miniasync covers what was my main issue with running simple
async code. Other things I have in mind for the future include a simple
interface for running multiple http requests (based on top of aiohttp).

I'm keen to hear about other issues that could be simplified for simple
use cases.

:)
Alice

From dimaqq at gmail.com Fri May 4 04:18:46 2018
From: dimaqq at gmail.com (Dima Tisnek)
Date: Fri, 4 May 2018 16:18:46 +0800
Subject: [Async-sig] ANN: miniasync, a small library build on top of asyncio for simple use cases
In-Reply-To: <28f2080c-8b88-3c3e-79ef-763e4fc2d640@loveisanalogue.info>
References: <28f2080c-8b88-3c3e-79ef-763e4fc2d640@loveisanalogue.info>
Message-ID: 

Nice!

At first, I thought that implementation would be trivial, but upon
inspection it's actually educational!

Perhaps interactive environments like ipython and jupyter-notebook
could benefit from this library.

Cheers,
d.

On 4 May 2018 at 15:45, Alice Heaton wrote:
> Hello,
>
> I have been working on miniasync, a small library build on top of
> asyncio to facilitate the occasional use of async code in otherwise
> synchronous applications.
>
> A typical use case is an otherwise synchronous application, which at
> some point wants to send notifications to multiple online services.
> miniasync makes it easier to implement this asynchronously without
> having to write asyncio boilerplate.
>
> http://miniasync.readthedocs.io
>
> asyncio is vast, powerful and flexible. But there is a learning curve -
> you need to understand the event loop api (other async languages like
> Javascript don't expose that), how to create it and run tasks on it,
> which means you need to understand about Tasks and Futures, etc. The
> flexibility is great, and gives Python developers a lot of scope for
> implementing advanced applications, but it's a put off for simple use cases.
>
> miniasync exposes a single function, "miniasync.run", which takes a list
> of coroutine objects, creates a local event loop, runs all the
> co-routines until they're complete, and returns their result in the
> order they were defined:
>
> import aiofiles
> import miniasync
>
> async def get_file_content(filename):
>     async with aiofiles.open(filename, mode='r') as f:
>         return await f.read()
>
> results = miniasync.run(
>     get_file_content('file1.txt'),
>     get_file_content('file2.txt'),
> )
>
> assert results == [
>     '',
>     ''
> ]
>
> This is similar to using a combination of gather and run_until_complete,
> but as a single step and without explicit reference to the loop. This
> also differs from gather in that:
>
> - Instead of saying you either want exceptions raised or returned, you
> need to list explicitly exceptions you want returned, all other
> exceptions are raised (explicit is better than implicit);
> - Unhandled exceptions cause all other tasks to be cancelled.
>
> miniasync.run always creates a new loop (so you can nest invocations of
> miniasync.run, and rely on each invocation only executing it's
> parameters). For the cases where you need the loop before running (eg.
> for creating a asyncio.Queue object to be shared amongst your
> co-routines), miniasync also exposes a context manager that creates a
> loop and lets you run co-routines on it:
>
> import asyncio
> import miniasync
>
> async def coro1(q):
>     q.put_nowait('world')
>     return 'hello'
>
> async def coro2(q):
>     return await q.get()
>
> with miniasync.loop() as loop:
>     q = asyncio.Queue()
>     results = loop.run(
>         coro1(q),
>         coro2(q)
>     )
>
> assert results == ['hello', 'world']
>
> You can pip install miniasync to try out, or read the docs on
> readthedocs (as above).
>
> For now miniasync covers what was my main issue with running simple
> async code. Other things I have in mind for the future is a simple
> interface for running multiple http requests (based on top of aiohttp).
>
> I'm keen to hear about other issues that could be simplified for simple
> use cases.
>
> :)
> Alice
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/

From alicemail at loveisanalogue.info Fri May 4 04:57:02 2018
From: alicemail at loveisanalogue.info (Alice Heaton)
Date: Fri, 4 May 2018 09:57:02 +0100
Subject: [Async-sig] ANN: miniasync, a small library build on top of asyncio for simple use cases
In-Reply-To: 
References: <28f2080c-8b88-3c3e-79ef-763e4fc2d640@loveisanalogue.info>
Message-ID: <4e5b5525-ddc3-9ff0-87a2-e11a81d6fc3d@loveisanalogue.info>

On 04/05/18 09:18, Dima Tisnek wrote:
> Nice!

Thanks :)

> At first, I thought that implementation would be trivial, but upon
> inspection it's actually educational!

There are a number of small gotchas which are obvious once you think about
them, and are not complicated per se, but can trip people when they
first start using asyncio (they tripped me anyway :)).

For example: once you've cancelled a Future (by calling ".cancel()" on
it) you actually need to run the loop again to give the code a chance to
actually do its clean-up tasks. It makes sense once you understand how
asyncio works, but it's not obvious at first.

miniasync aims to shield people from these things for the simple use case.
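To illustrate that particular gotcha with a minimal sketch (plain asyncio,
nothing miniasync-specific, using the Python 3.6-era API): the cancelled
coroutine's clean-up only happens once the loop gets another chance to
resume it after cancel().

import asyncio

async def worker():
    try:
        await asyncio.sleep(10)
    finally:
        # Runs only when the loop resumes the coroutine with CancelledError.
        print('worker cleaned up')

loop = asyncio.get_event_loop()
task = loop.create_task(worker())
loop.run_until_complete(asyncio.sleep(0))  # let the worker start
task.cancel()                              # nothing is cleaned up yet...
loop.run_until_complete(asyncio.sleep(0))  # ...only now does the finally block run
assert task.cancelled()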
:) Alice From twisteroid.ambassador at gmail.com Fri May 4 11:00:48 2018 From: twisteroid.ambassador at gmail.com (twisteroid ambassador) Date: Fri, 4 May 2018 23:00:48 +0800 Subject: [Async-sig] "Coroutines" sometimes run without being scheduled on an event loop In-Reply-To: References: Message-ID: On Fri, May 4, 2018 at 12:11 PM, Yarko Tymciurak wrote: > > > On Thu, May 3, 2018 at 8:52 PM, Dima Tisnek wrote: >> >> My 2c: don't use py3.4; in fact don't use 3.5 either :) >> If you decide to support older Python versions, it's only fair that >> separate implementation may be needed. > > > I'd agree - focus Python 3.6+ Oh, I'm not going to support py3.4, if not just for the sweet async def and await syntax ;-) I dug it out for demonstration purposes, as an example that asyncio APIs do change in ways that matter for the problem discussed in the OP. >> >> >> Re: overall problem, why not try the following: >> wrap your individual tasks in async def, where each staggers, connects and >> resolves and handles cancellation (if it didn't win the race). >> IMO that's easier to reason about, debug and works around your problem ;) >> >> On Fri, 4 May 2018 at 9:34 AM, twisteroid ambassador >> wrote: >>> >>> The real problem I'm playing with is implementing "happy eyeballs", >>> where I may have several sockets attempting to connect simultaneously, >>> and the first one to successfully connect gets used. I had the idea of > > > Simpler is better ... this isn't an asyncio example, but maybe the > readability (ymmv? For me - very clearly readable) is worth a ponder: > > https://github.com/dabeaz/curio/blob/master/README.rst#a-complex-example > Thanks for mentioning that. In fact what prompted all this is the recent article on trio, which mentioned happy eyeballs, which then reminded me that I have 2 separate implementations of staggered-start-return-first-successful-cancel-all-others logic in one of my projects and they both look ugly as sin and I should probably try to improve them. So now I have looked at trio's implementation ( https://github.com/python-trio/trio/pull/145/files ), curio's (above), and a bug report for twisted ( https://twistedmatrix.com/trac/ticket/9345 ). One thing that struck me is that these implementations all have subtly different behavior. They all start the next connection when the previous one doesn't complete (either succeed or fail) within `delay`, but: - trio starts the next connection early if the immediately preceding one fails; - curio starts the next connection early if any of the connections still in flight fails; - twisted does not start the next connection early at all. (One of my implementations does the same thing as curio, the other starts early if there is no longer any connections in flight, i.e. all previous connections fail.) >>> >>> preparing all of the loop.sock_connect() coroutine objects in advance, >>> and scheduling them one by one on the loop, but wanted to make double >>> sure that the sockets won't start connecting before the coroutines are >>> scheduled. I wanted to write something like this: >>> >>> successful_socket = await >>> staggered_start([loop.sock_connect(socket.socket(), addr) for addr in >>> addresses]) >>> >>> where async def staggered_start(coros) is some kind of reusable >>> scheduling logic. As it turns out, I can't actually depend on >>> loop.sock_connect() doing the Right Thing (TM) if I want to support >>> Python 3.4. >>> >>> On Fri, May 4, 2018 at 12:37 AM, Andrew Svetlov >>> wrote: >>> > What real problem do you want to solve? 
>>> > Correct code should always use `await loop.sock_connect(sock, addr)`, >>> > it >>> > this case the behavior difference never hurts you. >>> > >>> > On Thu, May 3, 2018 at 7:04 PM twisteroid ambassador >>> > wrote: >>> >> >>> >> Hi, >>> >> >>> >> tl;dr: coroutine functions and regular functions returning Futures >>> >> behave differently: the latter may start running immediately without >>> >> being scheduled on a loop, or even with no loop running. This might be >>> >> bad since the two are sometimes advertised to be interchangeable. >>> >> >>> >> >>> >> I find that sometimes I want to construct a coroutine object, store it >>> >> for some time, and run it later. Most times it works like one would >>> >> expect: I call a coroutine function which gives me a coroutine object, >>> >> I hold on to the coroutine object, I later await it or use >>> >> loop.create_task(), asyncio.gather(), etc. on it, and only then it >>> >> starts to run. >>> >> >>> >> However, I have found some cases where the "coroutine" starts running >>> >> immediately. The first example is loop.run_in_executor(). I guess this >>> >> is somewhat unsurprising since the passed function don't actually run >>> >> in the event loop. Demonstrated below with strace and the interactive >>> >> console: >>> >> >>> >> $ strace -e connect -f python3 >>> >> Python 3.6.5 (default, Apr 4 2018, 15:01:18) >>> >> [GCC 7.3.1 20180303 (Red Hat 7.3.1-5)] on linux >>> >> Type "help", "copyright", "credits" or "license" for more information. >>> >> >>> import asyncio >>> >> >>> import socket >>> >> >>> s = socket.socket() >>> >> >>> loop = asyncio.get_event_loop() >>> >> >>> coro = loop.sock_connect(s, ('127.0.0.1', 80)) >>> >> >>> loop.run_until_complete(asyncio.sleep(1)) >>> >> >>> task = loop.create_task(coro) >>> >> >>> loop.run_until_complete(asyncio.sleep(1)) >>> >> connect(3, {sa_family=AF_INET, sin_port=htons(80), >>> >> sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection >>> >> refused) >>> >> >>> s.close() >>> >> >>> s = socket.socket() >>> >> >>> coro2 = loop.run_in_executor(None, s.connect, ('127.0.0.1', 80)) >>> >> strace: Process 13739 attached >>> >> >>> [pid 13739] connect(3, {sa_family=AF_INET, sin_port=htons(80), >>> >> >>> sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED >>> >> >>> (Connection refused) >>> >> >>> >> >>> coro2 >>> >> ._call_check_cancel() at >>> >> /usr/lib64/python3.6/asyncio/futures.py:403]> >>> >> >>> loop.run_until_complete(asyncio.sleep(1)) >>> >> >>> coro2 >>> >> >> >> refused')> >>> >> >>> >>> >> >>> >> Note that with loop.sock_connect(), the connect syscall is only run >>> >> after loop.create_task() is called on the coroutine AND the loop is >>> >> running. On the other hand, as soon as loop.run_in_executor() is >>> >> called on socket.connect, the connect syscall gets called, without the >>> >> event loop running at all. >>> >> >>> >> Another such case is with Python 3.4.2, where even loop.sock_connect() >>> >> will run immediately: >>> >> >>> >> $ strace -e connect -f python3 >>> >> Python 3.4.2 (default, Oct 8 2014, 10:45:20) >>> >> [GCC 4.9.1] on linux >>> >> Type "help", "copyright", "credits" or "license" for more information. 
>>> >> >>> import socket >>> >> >>> import asyncio >>> >> >>> loop = asyncio.get_event_loop() >>> >> >>> s = socket.socket() >>> >> >>> c = loop.sock_connect(s, ('127.0.0.1', 82)) >>> >> connect(7, {sa_family=AF_INET, sin_port=htons(82), >>> >> sin_addr=inet_addr("127.0.0.1")}, 16) = -1ECONNREFUSED (Connection >>> >> refused) >>> >> >>> c >>> >> >> >> refused')> >>> >> >>> >>> >> >>> >> In both these cases, the misbehaving "coroutine" aren't actually >>> >> defined as coroutine functions, but regular functions returning a >>> >> Future, which is probably why they don't act like coroutines. However, >>> >> coroutine functions and regular functions returning Futures are often >>> >> used interchangeably: Python docs Section 18.5.3.1 even says: >>> >> >>> >> > Note: In this documentation, some methods are documented as >>> >> > coroutines, >>> >> > even if they are plain Python functions returning a Future. This is >>> >> > intentional to have a freedom of tweaking the implementation of >>> >> > these >>> >> > functions in the future. >>> >> >>> >> In particular, both run_in_executor() and sock_connect() are >>> >> documented as coroutines. >>> >> >>> >> If an asyncio API may change from a function returning Future to a >>> >> coroutine function and vice versa any time, then one cannot rely on >>> >> the behavior of creating the "coroutine object" not running the >>> >> coroutine immediately. This seems like an important Gotcha waiting to >>> >> bite someone. >>> >> >>> >> Back to the scenario in the beginning. If I want to write a function >>> >> that takes coroutine objects and schedule them to run later, and some >>> >> coroutine objects turn out to be misbehaving like above, then they >>> >> will run too early. To avoid this, I could either 1. pass the >>> >> coroutine functions and their arguments separately "callback style", >>> >> 2. use functools.partial or lambdas, or 3. always pass in real >>> >> coroutine objects returned from coroutine functions defined with >>> >> "async def". Does this sound right? >>> >> >>> >> Thanks, >>> >> >>> >> twistero >>> >> _______________________________________________ >>> >> Async-sig mailing list >>> >> Async-sig at python.org >>> >> https://mail.python.org/mailman/listinfo/async-sig >>> >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>> > >>> > -- >>> > Thanks, >>> > Andrew Svetlov >>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> >> >> _______________________________________________ >> Async-sig mailing list >> Async-sig at python.org >> https://mail.python.org/mailman/listinfo/async-sig >> Code of Conduct: https://www.python.org/psf/codeofconduct/ >> > From guido at python.org Fri May 4 13:02:13 2018 From: guido at python.org (Guido van Rossum) Date: Fri, 4 May 2018 10:02:13 -0700 Subject: [Async-sig] ANN: miniasync, a small library build on top of asyncio for simple use cases In-Reply-To: <4e5b5525-ddc3-9ff0-87a2-e11a81d6fc3d@loveisanalogue.info> References: <28f2080c-8b88-3c3e-79ef-763e4fc2d640@loveisanalogue.info> <4e5b5525-ddc3-9ff0-87a2-e11a81d6fc3d@loveisanalogue.info> Message-ID: Nice! On Fri, May 4, 2018 at 1:57 AM, Alice Heaton wrote: > On 04/05/18 09:18, Dima Tisnek wrote: > > Nice! > > Thanks :) > > > At first, I thought that implementation would be trivial, but upon > > inspection it's actually educational! 
>
> There a number of small gotchas which are obvious once you think about
> them, and are not complicated per se, but can trip people when they
> first start using asyncio (they tripped me anyway :)).
>
> For example: once you've cancelled a Future (by calling ".cancel()" on
> it) you actually need to run the loop again to give the code a chance to
> actually do it's clean up tasks. It makes sense once you understand how
> asyncio works, but it's not obvious at first.
>
> miniasync aims to shield people from these things for the simple use case.
>
> :)
> Alice
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/
>

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From twisteroid.ambassador at gmail.com Sun May 13 01:40:14 2018
From: twisteroid.ambassador at gmail.com (twisteroid ambassador)
Date: Sun, 13 May 2018 13:40:14 +0800
Subject: [Async-sig] async_stagger: Happy Eyeballs implementation in pure asyncio
Message-ID: 

Repo: https://github.com/twisteroidambassador/async_stagger
Docs: http://async-stagger.readthedocs.io/en/latest/

Provides near drop-in replacements for open_connection() and
create_connection() using Happy Eyeballs. Also exposes the underlying
scheduling logic where you can plug in your own coroutines to run.

I basically ported trio's implementation to asyncio, and it turned out
to be not too difficult.

Cheers,

twistero

From guido at python.org Sun May 13 09:35:23 2018
From: guido at python.org (Guido van Rossum)
Date: Sun, 13 May 2018 09:35:23 -0400
Subject: [Async-sig] async_stagger: Happy Eyeballs implementation in pure asyncio
In-Reply-To: 
References: 
Message-ID: 

Yury,

This looks like good work. Would it make sense to add this to asyncio in 3.8?

--Guido

On Sun, May 13, 2018 at 1:40 AM, twisteroid ambassador <
twisteroid.ambassador at gmail.com> wrote:

> Repo: https://github.com/twisteroidambassador/async_stagger
> Docs: http://async-stagger.readthedocs.io/en/latest/
>
> Provides near drop-in replacements for open_connection() and
> create_connection() using Happy Eyeballs. Also exposes the underlying
> scheduling logic where you can plug in your own coroutines to run.
>
> I basically ported trio's implementation to asyncio, and it turned out
> to be not too difficult.
>
> Cheers,
>
> twistero
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/
>

--
--Guido van Rossum (python.org/~guido)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pfreixes at gmail.com Sun May 13 10:15:43 2018
From: pfreixes at gmail.com (Pau Freixes)
Date: Sun, 13 May 2018 16:15:43 +0200
Subject: [Async-sig] async_stagger: Happy Eyeballs implementation in pure asyncio
In-Reply-To: 
References: 
Message-ID: 

Hi,

Great idea, I didn't know that it was already implemented by Trio and
I didn't know that there were two RFCs to specify how to make this.

In the case of connection establishment, that strategy can bring many
benefits in networks that are dynamic, perhaps the case of the ELBs in
AWS, which scale up or shrink the number of load balancers that are in
front of your application taking into account the traffic that they have
to handle.
That change gets reflected in the DNS answer; indeed, AWS suggests that
clients cache the DNS resolutions for no more than 60 seconds, obviously
because the IP addresses returned will adapt to the number of load
balancers at any given time.

With a "big" connection timeout, the time the client experiences until
reaching an available destination is proportional to the number of hosts
iterated and failed:

for host in hosts:
    try:
        return await connect(host, timeout=1)
    except TimeoutError:
        logging.warning("Host not available, trying the next one")

So the Happy Eyeballs proposal definitely improves the latency needed to
get a healthy connection.

Thanks for making it visible!

On Sun, May 13, 2018 at 7:40 AM, twisteroid ambassador wrote:
> Repo: https://github.com/twisteroidambassador/async_stagger
> Docs: http://async-stagger.readthedocs.io/en/latest/
>
> Provides near drop-in replacements for open_connection() and
> create_connection() using Happy Eyeballs. Also exposes the underlying
> scheduling logic where you can plug in your own coroutines to run.
>
> I basically ported trio's implementation to asyncio, and it turned out
> to be not too difficult.
>
> Cheers,
>
> twistero
> _______________________________________________
> Async-sig mailing list
> Async-sig at python.org
> https://mail.python.org/mailman/listinfo/async-sig
> Code of Conduct: https://www.python.org/psf/codeofconduct/

--
--pau

From yselivanov at gmail.com Mon May 14 14:29:35 2018
From: yselivanov at gmail.com (Yury Selivanov)
Date: Mon, 14 May 2018 14:29:35 -0400
Subject: [Async-sig] async_stagger: Happy Eyeballs implementation in pure asyncio
In-Reply-To: 
References: 
Message-ID: <5dad9a26-a49f-4eed-b8b6-2351528a1dce@Spark>

On May 13, 2018, 9:35 AM -0400, Guido van Rossum , wrote:
> Yury,
>
> This looks like good work.Would it make sense to add this to asyncio in 3.8?

Yes, it is solid. I'd like to see this in asyncio; specifically, I suggest
to add a keyword-only argument to loop.create_connection &
asyncio.open_connection to use happy eyeballs (off by default). Exposing the
"staggered_race()" helper function might also be a good idea, I'm just not
super happy with the name.

Twistero, would you be interested in submitting a PR?

Yury
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From twisteroid.ambassador at gmail.com Tue May 15 02:07:14 2018
From: twisteroid.ambassador at gmail.com (twisteroid ambassador)
Date: Tue, 15 May 2018 14:07:14 +0800
Subject: [Async-sig] async_stagger: Happy Eyeballs implementation in pure asyncio
In-Reply-To: <5dad9a26-a49f-4eed-b8b6-2351528a1dce@Spark>
References: <5dad9a26-a49f-4eed-b8b6-2351528a1dce@Spark>
Message-ID: 

Sure, I should be able to massage the code into asyncio. Probably will
need substantial help on writing any tests, though.

On Tue, May 15, 2018 at 2:29 AM, Yury Selivanov wrote:
> On May 13, 2018, 9:35 AM -0400, Guido van Rossum , wrote:
> > Yury,
> > This looks like good work.Would it make sense to add this to asyncio in 3.8?
> >
>
> Yes, it is solid. I'd like to see this in asyncio; specifically, I suggest
> to add a keyword-only argument to loop.create_connection &
> asyncio.open_connection to use happy eyeballs (off by default). Exposing the
> "staggered_race()" helper function might also be a good idea, I'm just not
> super happy with the name.
>
> Twistero, would you be interested in submitting a PR?
> > Yury > From twisteroid.ambassador at gmail.com Tue May 15 22:57:37 2018 From: twisteroid.ambassador at gmail.com (twisteroid ambassador) Date: Wed, 16 May 2018 10:57:37 +0800 Subject: [Async-sig] async_stagger: Happy Eyeballs implementation in pure asyncio In-Reply-To: References: <5dad9a26-a49f-4eed-b8b6-2351528a1dce@Spark> Message-ID: Just created an Issue for this: https://bugs.python.org/issue33530 On Tue, May 15, 2018 at 2:07 PM, twisteroid ambassador wrote: > Sure, I should be able to massage the code into asyncio. Probably will > need substantial help on writing any tests, though. > > On Tue, May 15, 2018 at 2:29 AM, Yury Selivanov wrote: >> On May 13, 2018, 9:35 AM -0400, Guido van Rossum , wrote: >> >> Yury, >> >> This looks like good work.Would it make sense to add this to asyncio in 3.8? >> >> >> >> Yes, it is solid. I'd like to see this in asyncio; specifically, I suggest >> to add a keyword-only argument to loop.create_connection & >> asyncio.open_connection to use happy eyeballs (off by default). Exposing the >> "staggered_race()" helper function might also be a good idea, I'm just not >> super happy with the name. >> >> Twistero, would you be interested in submitting a PR? >> >> Yury >> From njs at pobox.com Wed May 16 00:17:02 2018 From: njs at pobox.com (Nathaniel Smith) Date: Wed, 16 May 2018 00:17:02 -0400 Subject: [Async-sig] async_stagger: Happy Eyeballs implementation in pure asyncio In-Reply-To: References: Message-ID: On Sun, May 13, 2018 at 1:40 AM, twisteroid ambassador wrote: > Repo: https://github.com/twisteroidambassador/async_stagger > Docs: http://async-stagger.readthedocs.io/en/latest/ > > Provides near drop-in replacements for open_connection() and > create_connection() using Happy Eyeballs. Also exposes the underlying > scheduling logic where you can plug in your own coroutines to run. > > I basically ported trio's implementation to asyncio, and it turned out > to be not too difficult. A few people in trio's chat channel have been experimenting with strategies for implementing it in gevent, and folks might find the strategies they've been coming up with interesting too: https://gist.github.com/davidkhess/bc213e643db2581ee830a1e706e85f8f https://gist.github.com/ssanderson/f625716602a4bd7c8ead0dd4befad8ea There's some discussion starting here, and continuing through today: https://gitter.im/python-trio/general?at=5afa458bb84be71db908becd -n -- Nathaniel J. Smith -- https://vorpus.org From gmludo at gmail.com Sun May 20 15:23:31 2018 From: gmludo at gmail.com (Ludovic Gasc) Date: Sun, 20 May 2018 21:23:31 +0200 Subject: [Async-sig] asyncio.Lock equivalent for multiple processes In-Reply-To: References: <20180417134100.2fff0e3a@fsol> <20180417151654.31f22050@fsol> Message-ID: FYI, advisory locks of PostgreSQL are working pretty well on production since one month now. Thanks again for your help. -- Ludovic Gasc (GMLudo) 2018-04-18 7:09 GMT+02:00 Ludovic Gasc : > Indeed, thanks for the suggestion :-) > > Le mer. 18 avr. 2018 ? 01:21, Nathaniel Smith a ?crit : > >> Pretty sure you want to add a try/finally around that yield, so you >> release the lock on errors. >> >> On Tue, Apr 17, 2018, 14:39 Ludovic Gasc wrote: >> >>> 2018-04-17 15:16 GMT+02:00 Antoine Pitrou : >>> >>>> >>>> >>>> You could simply use something like the first 64 bits of >>>> sha1("myapp:") >>>> >>> >>> I have followed your idea, except I used hashtext directly, it's an >>> internal postgresql function that generates an integer directly. 
>>> >>> For now, it seems to work pretty well but I didn't yet finished all >>> tests. >>> The final result is literally 3 lines of Python inside an async >>> contextmanager, I like this solution ;-) : >>> >>> @asynccontextmanager >>> async def lock(env, category='global', name='global'): >>> # Alternative lock id with 'mytable'::regclass::integer OID >>> await env['aiopg']['cursor'].execute("SELECT pg_advisory_lock( >>> hashtext(%(lock_name)s) );", {'lock_name': '%s.%s' % (category, name)}) >>> >>> yield None >>> >>> await env['aiopg']['cursor'].execute("SELECT pg_advisory_unlock( >>> hashtext(%(lock_name)s) );", {'lock_name': '%s.%s' % (category, name)}) >>> >>> >>> >>>> >>>> Regards >>>> >>>> Antoine. >>>> >>>> >>>> On Tue, 17 Apr 2018 15:04:37 +0200 >>>> Ludovic Gasc wrote: >>>> > Hi Antoine & Chris, >>>> > >>>> > Thanks a lot for the advisory lock, I didn't know this feature in >>>> > PostgreSQL. >>>> > Indeed, it seems to fit my problem. >>>> > >>>> > The small latest problem I have is that we have string names for >>>> locks, >>>> > but advisory locks accept only integers. >>>> > Nevertheless, it isn't a problem, I will do a mapping between names >>>> and >>>> > integers. >>>> > >>>> > Yours. >>>> > >>>> > -- >>>> > Ludovic Gasc (GMLudo) >>>> > >>>> > 2018-04-17 13:41 GMT+02:00 Antoine Pitrou : >>>> > >>>> > > On Tue, 17 Apr 2018 13:34:47 +0200 >>>> > > Ludovic Gasc wrote: >>>> > > > Hi Nickolai, >>>> > > > >>>> > > > Thanks for your suggestions, especially for the file system lock: >>>> We >>>> > > don't >>>> > > > have often locks, but we must be sure it's locked. >>>> > > > >>>> > > > For 1) and 4) suggestions, in fact we have several systems to >>>> sync and >>>> > > also >>>> > > > a PostgreSQL transaction, the request must be treated by the same >>>> worker >>>> > > > from beginning to end and the other systems aren't idempotent at >>>> all, >>>> > > it's >>>> > > > "old-school" proprietary systems, good luck to change that ;-) >>>> > > >>>> > > If you already have a PostgreSQL connection, can't you use a >>>> PostgreSQL >>>> > > lock? e.g. an "advisory lock" as described in >>>> > > https://www.postgresql.org/docs/9.1/static/explicit-locking.html >>>> > > >>>> > > Regards >>>> > > >>>> > > Antoine. >>>> > > >>>> > > >>>> > > >>>> > >>>> >>>> >>>> >>>> _______________________________________________ >>>> Async-sig mailing list >>>> Async-sig at python.org >>>> https://mail.python.org/mailman/listinfo/async-sig >>>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>>> >>> >>> _______________________________________________ >>> Async-sig mailing list >>> Async-sig at python.org >>> https://mail.python.org/mailman/listinfo/async-sig >>> Code of Conduct: https://www.python.org/psf/codeofconduct/ >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmludo at gmail.com Sun May 20 15:55:16 2018 From: gmludo at gmail.com (Ludovic Gasc) Date: Sun, 20 May 2018 21:55:16 +0200 Subject: [Async-sig] Who will be present at EuroPython 2018 ? Message-ID: Hi, You certainly know that EuroPython is 23-29 July: https://ep2018.europython.eu/en/ I have seen Yury's tweet about it: https://twitter.com/1st1/status/997910868573720579 Who has planned to be present ? It might be the opportunity to do a sprint code around AsyncIO to improve it or increase the documentation like: https://asyncio.readthedocs.io/en/latest/ Have a nice week-end. -- Ludovic Gasc (GMLudo) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alex.gronholm at nextday.fi Sun May 20 16:58:33 2018 From: alex.gronholm at nextday.fi (alex.gronholm at nextday.fi) Date: Sun, 20 May 2018 23:58:33 +0300 Subject: [Async-sig] Who will be present at EuroPython 2018 ? In-Reply-To: References: Message-ID: <59653d5e0569d25a62c3adffc9f6963fd1c17996.camel@nextday.fi> I'll be there if I get my speaker slot :) Probably not otherwise. su, 2018-05-20 kello 21:55 +0200, Ludovic Gasc kirjoitti: > Hi, > You certainly know that EuroPython is 23-29 July: > https://ep2018.europython.eu/en/ > > I have seen Yury's tweet about it: https://twitter.com/1st1/status/99 > 7910868573720579 > > Who has planned to be present ? > It might be the opportunity to do a sprint code around AsyncIO to > improve it or increase the documentation like: > https://asyncio.readthedocs.io/en/latest/ > > Have a nice week-end. > -- > Ludovic Gasc (GMLudo) > > > _______________________________________________Async-sig mailing > listAsync-sig at python.orghttps://mail.python.org/mailman/listinfo/asyn > c-sigCode of Conduct: https://www.python.org/psf/codeofconduct/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: