[Python-Dev] PEP 550 v4: coroutine policy
Antoine Pitrou
antoine at python.org
Tue Aug 29 15:32:35 EDT 2017
On 29/08/2017 at 21:18, Yury Selivanov wrote:
> On Tue, Aug 29, 2017 at 2:40 PM, Antoine Pitrou <solipsis at pitrou.net> wrote:
>> On Mon, 28 Aug 2017 17:24:29 -0400
>> Yury Selivanov <yselivanov.ml at gmail.com> wrote:
>>> Long story short, I think we need to rollback our last decision to
>>> prohibit context propagation up the call stack in coroutines. In PEP
>>> 550 v3 and earlier, the following snippet would work just fine:
>>>
>>> var = new_context_var()
>>>
>>> async def bar():
>>>     var.set(42)
>>>
>>> async def foo():
>>>     await bar()
>>>     assert var.get() == 42  # with previous PEP 550 semantics
>>>
>>> run_until_complete(foo())
>>>
>>> But it would break if a user wrapped "await bar()" with "wait_for()":
>>>
>>> var = new_context_var()
>>>
>>> async def bar():
>>>     var.set(42)
>>>
>>> async def foo():
>>>     await wait_for(bar(), 1)
>>>     assert var.get() == 42  # AssertionError !!!
>>>
>>> run_until_complete(foo())
>>>
>> [...]
>
>> Why wouldn't the bar() coroutine inherit
>> the LC at the point it's instantiated (i.e. where the synchronous bar()
>> call is done)?
>
> We want tasks to have their own isolated contexts. When a task
> is started, it runs its code in parallel with its "parent" task.
I'm sorry, but I don't understand what it all means.
To pose the question differently: why is example #1 supposed to be
different, philosophically, than example #2? Both spawn a coroutine,
both wait for its execution to end. There is no reason that adding a
wait_for() intermediary (presumably because the user wants to add a
timeout) would significantly change the execution semantics of bar().
> wait_for() in the above example creates an asyncio.Task implicitly,
> and that's why we don't see 'var' changed to '42' in foo().
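The same split can be observed today with the contextvars machinery that eventually shipped (PEP 567, Python 3.7+). The sketch below uses an explicit asyncio.create_task() to make the implicit Task visible, since wait_for()'s task-wrapping behaviour has varied across Python versions; it is an illustration of the semantics under discussion, not of PEP 550 itself, which was never merged.

```python
import asyncio
import contextvars

var = contextvars.ContextVar("var", default=None)

async def bar():
    var.set(42)

async def plain_await():
    await bar()             # bar() runs in the current task's context
    return var.get()        # the set() above is visible here

async def task_wrapped():
    # A new Task runs with a *copy* of the current context,
    # so bar()'s set() lands in the copy and is never seen here.
    await asyncio.create_task(bar())
    return var.get()

print(asyncio.run(plain_await()))   # 42
print(asyncio.run(task_wrapped()))  # None
```

This is exactly the asymmetry being debated: the caller observes the write when the coroutine is awaited directly, but not when an intermediary wraps it in a Task.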
I don't understand why a non-obvious behaviour detail (the fact that
wait_for() creates an asyncio.Task implicitly) should translate into a
fundamental difference in observable behaviour. I find it
counter-intuitive and error-prone.
> This is a slightly complicated case, but it's addressable with good
> documentation and recommended best practices.
It would be better addressed with consistent behaviour that doesn't rely
on specialist knowledge, though :-/
>>> This means that PEP 550 will have a caveat for async code: don't rely
>>> on context propagation up the call stack, unless you are writing
>>> __aenter__ and __aexit__ that are guaranteed to be called without
>>> being wrapped into a Task.
>>
>> Hmm, sorry for being a bit slow, but I'm not sure what this
>> sentence implies. How is the user supposed to know whether something
>> will be wrapped into a Task (short of being an expert in asyncio
>> internals perhaps)?
>>
>> Actually, if you could whip up an example of what you mean here, it would
>> be helpful I think :-)
>
> __aenter__ won't ever be wrapped in a task because it's called by
> the interpreter.
>
> var = new_context_var()
>
> class MyAsyncCM:
>
>     async def __aenter__(self):
>         var.set(42)
>
> async with MyAsyncCM():
>     assert var.get() == 42
>
> The above snippet will always work as expected.
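A runnable version of that snippet, again using the contextvars module that later shipped (PEP 567) rather than PEP 550's hypothetical new_context_var(), behaves as Yury describes: __aenter__ is invoked directly by the interpreter in the current task's context, so its write is visible to the body of the `async with` block.

```python
import asyncio
import contextvars

var = contextvars.ContextVar("var", default=None)

class MyAsyncCM:
    async def __aenter__(self):
        # Called directly by the interpreter, not wrapped in a Task,
        # so this set() happens in the surrounding task's context.
        var.set(42)
        return self

    async def __aexit__(self, *exc_info):
        return False

async def main():
    async with MyAsyncCM():
        return var.get()

print(asyncio.run(main()))  # 42
```

The caveat only bites if some framework wraps the __aenter__ call (or the coroutine containing the `async with`) in its own Task, which copies the context.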
Uh... So I really don't understand what you meant above when you wrote:
"""
This means that PEP 550 will have a caveat for async code: don't rely
on context propagation up the call stack, unless you are writing
__aenter__ and __aexit__ that are guaranteed to be called without
being wrapped into a Task.
"""
To ask the question again: can you showcase how and where the "caveat"
applies?
Regards
Antoine.