[Python-ideas] Async API

Yury Selivanov yselivanov.ml at gmail.com
Wed Oct 24 02:52:52 CEST 2012


Hi Greg,

On 2012-10-23, at 8:24 PM, Greg Ewing <greg.ewing at canterbury.ac.nz> wrote:

> Yury Selivanov wrote:
> 
>>   def foo():
>>       connection = open_connection()
>>       try:
>>           spam()
>>       finally:
>>           [some code]
>>           connection.close()
>> 
>> What happens if you run 'foo.with_timeout(1)' and timeout occurs
>> at "[some code]" point?
> 
> I would say that vital cleanup code probably shouldn't do
> anything that could block. If you really need to do that,
> it should be protected by a finally clause of its own:
> 
>   def foo():
>       connection = open_connection()
>       try:
>           spam()
>       finally:
>           try:
>               [some code]
>           finally:
>               connection.close()

Please take a look at the problem definition in PEP 419.

It's not about try..finally nesting; it's about the scheduler being aware
that a coroutine is in its 'finally' block and thus shouldn't be interrupted
at that moment (a problem that doesn't exist in the non-coroutine world).
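
To make this concrete, here is a minimal, self-contained sketch (the names
are illustrative, not from any real framework) showing that throwing into a
generator that is suspended at a 'yield' inside its 'finally' block abandons
the rest of the cleanup::

    class Timeout(Exception):
        pass

    def coro():
        try:
            yield 'working'
        finally:
            yield 'closing c1'   # cleanup is itself a coroutine call
            yield 'closing c2'   # skipped if we're interrupted below

    g = coro()
    next(g)            # runs up to 'working'
    g.throw(Timeout)   # timeout #1: 'finally' starts, yields 'closing c1'
    g.throw(Timeout)   # timeout #2 lands *inside* the 'finally' block:
                       # 'closing c2' never runs; Timeout propagates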

Speaking of your solution: imagine you have three connections to close.
What would you write?

   finally:
       try:
           c1.close() # coroutine call
       finally:
           try:
               c2.close() # coroutine call
           finally:
               c3.close() # coroutine call

But if you somehow make the scheduler aware of 'finally' blocks, whether
through PEP 419 (which I don't like) or, as in my framework, by inlining
special code in the 'finally' statement via rewriting of coroutine opcodes
(which I don't like either), then you can simply write::

    finally:
        c1.close()
        c2.close()
        c3.close()

And the scheduler will gladly wait until the 'finally' block is over.  The
code snippet above is also something familiar to every Python user--nobody
expects code in a 'finally' section to be interrupted from the *outside*
world.  If we fail to guarantee 'finally' safety, then coroutine-style
programming is going to be much tougher, or we will have to abandon timeouts
and coroutine interruption altogether.
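
Just to illustrate one possible shape of such a mechanism (this is only a
sketch: the names ENTER_FINALLY, LEAVE_FINALLY and Task are made up for the
example, and error handling is elided), the coroutine could bracket its
cleanup with sentinel yields, and the scheduler would defer interruption
while the counter is non-zero::

    ENTER_FINALLY = object()   # sentinel: cleanup is starting
    LEAVE_FINALLY = object()   # sentinel: cleanup is done

    class Task:
        def __init__(self, gen):
            self.gen = gen
            self.in_finally = 0
            self.pending = None          # deferred interruption

        def interrupt(self, exc):
            # Called by the scheduler when, e.g., a timeout fires.
            if self.in_finally:
                self.pending = exc       # defer until cleanup ends
            else:
                self._advance(self.gen.throw(exc))

        def step(self):
            self._advance(next(self.gen))

        def _advance(self, value):
            # Track 'finally' depth via the sentinels the coroutine yields.
            if value is ENTER_FINALLY:
                self.in_finally += 1
            elif value is LEAVE_FINALLY:
                self.in_finally -= 1
                if not self.in_finally and self.pending is not None:
                    exc, self.pending = self.pending, None
                    self.interrupt(exc)  # cleanup done; deliver it now

Of course, the whole point of PEP 419 (or of the opcode rewriting) is that
the user shouldn't have to write those sentinels by hand; the sketch only
shows what the scheduler side has to track.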

So eventually, we'll need to figure out the best mechanism/approach for this.

Now, I don't think this is the right moment to shift the discussion to this
particular problem, but I would like to bring up the point that implementing
'yield'-style coroutines is a very hard thing, and I'm not sure that we
should implement them in 3.4.

Setting guidelines and standard protocols, and adding socket-factory support
to the stdlib where necessary, is a better approach (in my humble opinion).

-
Yury

