[Python-ideas] Protecting finally clauses of interruptions

Yury Selivanov yselivanov.ml at gmail.com
Wed Apr 4 21:37:16 CEST 2012


On 2012-04-04, at 2:44 PM, Paul Colomiets wrote:
> I have a global timeout for processing single request. It's actually higher
> in a chain of generator calls. So dispatcher looks like:
> 
> def dispatcher(self, method, args):
>    with timeout(10):
>    yield getattr(self, method)(*args)

How does it work?  To what object are you actually attaching timeout?

I'm just curious now how your 'timeout' context manager works.

And what's the advantage of having some "global" timeout instead
of a timeout specifically bound to some coroutine?

Do you have that code publicly released somewhere?  I just really want
to understand how exactly your architecture works, to come up with a
better proposal (if there is one possible ;).
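For what it's worth, here is a rough sketch of the mechanism I'd *guess* such a scheduler-level timeout relies on: the scheduler tracks a deadline and, when it expires, resumes the suspended generator with an exception via gen.throw() instead of a value.  Everything below (the handler body, the manual driving of the generator) is invented for illustration; it is not your actual code.

```python
# Sketch: delivering a timeout into a suspended generator-based
# coroutine.  A real scheduler would do the send()/throw() calls;
# here we drive the generator by hand.

results = []

def handler():
    try:
        yield 'waiting-for-io'       # the scheduler parks us here
    except TimeoutError:
        results.append('timed out')
        return
    results.append('finished')

gen = handler()
gen.send(None)                       # run up to the first yield
try:
    # This is what a deadline expiry would do: resume the coroutine
    # with an exception instead of a value.
    gen.throw(TimeoutError)
except StopIteration:
    pass

print(results)                       # -> ['timed out']
```

The point being: the exception can only ever surface at a yield, which is exactly what makes the "where may we interrupt?" question tractable.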

As an off-topic: it would be interesting to have various coroutine
approaches and architectures listed somewhere, to understand how
Python programmers actually do it.

> And all the local timeouts, like timeout for single request are
> usually applied at a socket level, where specific protocol
> is implemented:
> 
> def redis_unlock(lock):
>    yield redis_socket.wait_write(2)  # wait two seconds
>    # TimeoutError may have been raised in wait_write()
>    cmd = ('DEL user:'+lock+'\n').encode('ascii')
>    redis_socket.write(cmd)  # should be a loop here, actually
>    yield redis_socket.wait_read(2)  # another two seconds
>    result = redis_socket.read(1024)  # a loop here too
>    assert result == 'OK\n'

So you have explicit timeouts in 'redis_unlock', but you want
them to be ignored if it is called from some 'finally' block?
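As an aside, I'd guess wait_write()/wait_read() ultimately boil down to something select()-based that raises TimeoutError on expiry.  A rough stand-alone approximation — the helper names just mirror your snippet, this is not your implementation:

```python
import select

def wait_write(sock, seconds):
    # Block until `sock` is writable, or raise after `seconds` elapse.
    _, writable, _ = select.select([], [sock], [], seconds)
    if not writable:
        raise TimeoutError('socket not writable within %r seconds' % seconds)

def wait_read(sock, seconds):
    # Block until `sock` is readable, or raise after `seconds` elapse.
    readable, _, _ = select.select([sock], [], [], seconds)
    if not readable:
        raise TimeoutError('socket not readable within %r seconds' % seconds)
```

In a coroutine framework these would of course yield to the scheduler rather than block the whole thread, but the timeout-raising shape would be the same.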

> So they are not interruptions. Although, we don't use them
> much with coroutines, global timeout for request is
> usually enough.

I don't really follow you here.

> But anyway I don't see a reason to protect a single frame,
> because even if you have a simple mutex without coroutines
> you end up with:
> 
> def something():
>  lock.acquire()
>  try:
>    pass
>  finally:
>    lock.release()
> 
> And if the lock's implementation is something along the lines of:
> 
> def release(self):
>    self._native_lock.release()
> 
> How would you be sure that the interruption does not happen
> after the interpreter has resolved `self._native_lock.release`
> but not yet called it?

Is this in the context of coroutines or threads?  If the former, then
perhaps because you want to interrupt 'something()'?  And it is a
separate frame from the frame where 'release()' is running?
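The race you describe is real at the bytecode level, by the way: even a one-line release() compiles to separate attribute-load and call instructions, and an asynchronous interruption could in principle land between them.  A quick way to see this (illustrative only; the Lock class is a stand-in):

```python
import dis

class Lock:
    def release(self):
        self._native_lock.release()

# Each bytecode instruction is a potential interruption point: the
# attribute lookup of `release` and the call itself are distinct ops,
# so there is a window between resolving the method and invoking it.
ops = [ins.opname for ins in dis.get_instructions(Lock.release)]
print(ops)
```

The exact opcode names vary between CPython versions (LOAD_METHOD/CALL_METHOD vs. LOAD_ATTR/CALL), but the load and the call are always separate instructions.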

> It's also an interesting question. I don't think it's possible to interrupt
> JIT'ed code at an arbitrary location.

I guess that should really be asked on the pypy-dev mailing list, once
we have a proposal.

>> That's the second reason I don't like your proposal.
>> 
>> def foo():
>>   try:
>>      ..
>>   finally:
>>      yield unlock()
>>   # <--- the ideal point to interrupt foo
>> 
>>   f = open('a', 'w')
>>   # what if we interrupt it here?
>>   try:
>>      ..
>>   finally:
>>      f.close()
>> 
> 
> And which one fixes this problem? There is no guarantee
> that your timeout code hasn't interrupted
> at "# what if we interrupt it here?". Making it a bit less likely
> is not a real solution. Please don't present it as such.

Sorry, I should have explained it in more detail.  Right now we
interrupt code only where we have a 'yield', a greenlet.switch(),
or at the end of a finally block, not at some arbitrary opcode.
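To make the rule concrete: if the interruption is only ever delivered at a yield, then a finally clause that does not itself yield is guaranteed to run to completion.  A toy demonstration, again hand-rolled rather than taken from our scheduler:

```python
log = []

def task():
    try:
        yield 'working'           # the only interruption point
    finally:
        log.append('cleanup')     # no yield here, so this cannot be cut short

g = task()
g.send(None)                      # suspend at the yield
try:
    g.throw(TimeoutError)         # interrupt exactly at the yield point
except TimeoutError:
    pass

assert log == ['cleanup']         # the finally block ran in full
```

The problematic case you quoted is precisely when the finally block itself contains a yield: then the cleanup is suspended at an interruption point, and a second timeout can abort it halfway.  That is the window the proposal is trying to close.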

-
Yury



More information about the Python-ideas mailing list