is there enough information?

castironpi at gmail.com castironpi at gmail.com
Wed Feb 27 05:15:28 EST 2008


On Feb 26, 11:45 pm, Dennis Lee Bieber <wlfr... at ix.netcom.com> wrote:
>         Warning -- long post follows
>
> On Tue, 26 Feb 2008 12:17:54 -0800, Dennis Lee Bieber
> <wlfr... at ix.netcom.com> declaimed the following in comp.lang.python:
>
>
>
> >    Which is, it seems, totally backwards... Also... to my knowledge,
> > the "with" construct (I'm still on Python 2.4 and don't have "with") is
> > NOT the same thing as a Java "Synchronized" object. The common use of
> > "with" is with objects, like files, that are opened at the start of the
> > block, and need to be closed on block exit, regardless of how the block
> > is exited.
>
>         Okay... I do need to retract this part... Looking at the PEP, the
> "with" statement /can/ be made to act as a synchronization method IF
> supplied with an object that performs locking/unlocking as part of the
> "context manager" methods.
>
>         But I still don't see any such in your usage. If it is there, it is
> not in the code you show us.
>
>         Note that such synchronization of a critical section is meant
> only to ensure that one processing thread can execute the code (or
> access some data) in the critical section at a time. It is NOT meant to
> provide sequential control between threads. Sequential processing needs
> multiple signalling methods. You have four, I believe, critical sections
> (though they aren't critical in terms of being a shared access item --
> nothing is shared between your threads) -- hence you would need four
> separate locking objects.

This is very odd.  Yours -seems- (*subjective) backwards to -me-.  I
get the sense that yours and mine are orthogonal: you approach the
task "horizontally" where I approach it "vertically", for some rather
post-cognitive metric of approach, perhaps like breadth-first vs.
depth-first.  But I digress.

I realize that concrete examples are very useful in a thorough
analysis of the differences between our code.  Mine isn't very
concrete, but it might shed some light, keeping in mind that even if
it spotlights a flaw or weakness in my solution or yours, that's
not (repeat, NOT) a conclusive case for or against either.  Don't get
defensive!

The application is actually a test bed for still another kind of lock
I'm whittling, ironically enough.  The acq( thd, lck ) function sends
a message to thread thd to try to acquire lck.  It's important that
lck be acquired from thd specifically, since (i) there are get_ident()
calls involved, as in threading.RLock, and (ii) thd should actually
block.  acq itself, however, does not block: it fires the call, waits
for a completion event -or- a timeout, and returns the return value if
there is one, or a special value if thd blocked.  That way I can check
that threads are blocking on the right calls to the derived acquire
method, and returning the right values when they're not.

Rough overview:

#Case 1
[snip]
assert acq( thread3, lock3 ) is Blocks
assert acq( thread2, lock2 ) is Fails
assert acq( thread4, lock4 ) is Fails
assert acq( thread1, lock1 ) is Acquires
[snip]

The results vary because of other things that happened to the threads
and locks earlier in the case.  So I define acq like this:

def acq( thd, lck ):
   # Hand thd its instruction, then wake it.
   thd.test_lock_to_acquire= lck
   thd.set_cmd_assigned_signal.set()
   # Completion -or- one-second timeout; on timeout the
   # prepped sentinel is what gets returned.
   thd.set_cmd_completed.wait( 1 )
   ret= thd.cmd_return_value
   thd.set_return_value_read.set()
   return ret

The thread is running a loop, so that I can make multiple test calls
to the same thread, to simulate what happens if a thread acquires one
lock first, then another lock later, but, to repeat, I want the thread
that's governing the case to keep moving, whereas the thread that
makes the real acquire() call does actually block.

With acq, I set up an instruction-acknowledgement sequence, to get one
thread to do something that I don't determine ahead of time.
(Instruction and acknowledgement may not be the right words.)  The
thread runs in a function, thread_loop, that takes the thd parameter,
a probably misnamed container object instance that merely contains the
synchro events, a continue flag, the lock to try to acquire or None,
and the result of trying to acquire it or None.

def thread_loop( thd ):
   while thd.cont:
      # Wait to be told an instruction is pending.
      thd.set_cmd_assigned_signal.wait()
      # Prep the sentinel in case acquire() never returns.
      thd.cmd_return_value= Vacant
      ret= thd.test_lock_to_acquire.acquire()
      thd.cmd_return_value= ret
      # Report back, then wait for acq() to copy the value.
      thd.set_cmd_completed.set()
      thd.set_return_value_read.wait()

It first waits to be informed that it has an instruction pending.
Then it preps cmd_return_value, in case the lock.acquire() it then
attempts doesn't return.  Whenever acquire() does return, it sets
cmd_return_value to the returned value, informs acq() that it has
returned, and waits for acq() to confirm it has copied the value.  I
may have no use for that last step; the corner case it guards against
is a test case issuing two instructions to the same thread back to
back.
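To make the handshake concrete, here is a self-contained sketch of the
pair.  Several details are my guesses, not the original code: the Thd
container class, the Vacant sentinel object, the clear() bookkeeping
(the "clear instructions not shown" below), and moving the Vacant prep
into acq so a stale value can't be read on the controller side.

```python
import threading

Vacant = object()   # sentinel: "the worker never returned from acquire()"

class Thd:
    """Hypothetical container for the per-thread synchro events."""
    def __init__(self):
        self.cont = True
        self.set_cmd_assigned_signal = threading.Event()
        self.set_cmd_completed = threading.Event()
        self.set_return_value_read = threading.Event()
        self.test_lock_to_acquire = None
        self.cmd_return_value = Vacant

def acq(thd, lck):
    # Prep the sentinel here, so a timeout can't read a stale value.
    thd.cmd_return_value = Vacant
    thd.set_cmd_completed.clear()
    # Hand the worker its instruction and wake it.
    thd.test_lock_to_acquire = lck
    thd.set_cmd_assigned_signal.set()
    # Completion -or- timeout; if the worker blocked, we read Vacant.
    thd.set_cmd_completed.wait(1)
    ret = thd.cmd_return_value
    thd.set_return_value_read.set()
    return ret

def thread_loop(thd):
    while thd.cont:
        thd.set_cmd_assigned_signal.wait()
        thd.set_cmd_assigned_signal.clear()
        # The real, possibly blocking, acquire happens in this thread.
        ret = thd.test_lock_to_acquire.acquire()
        thd.cmd_return_value = ret
        thd.set_cmd_completed.set()
        # Wait until acq() has copied the value before looping.
        thd.set_return_value_read.wait()
        thd.set_return_value_read.clear()

thd = Thd()
threading.Thread(target=thread_loop, args=(thd,), daemon=True).start()

uncontended = acq(thd, threading.Lock())   # free lock: acquires at once
held = threading.Lock()
held.acquire()                             # held elsewhere...
blocked = acq(thd, held)                   # ...so the worker blocks
held.release()                             # unstick the worker
```

The controller sees True for the uncontended lock and the Vacant
sentinel for the held one, without ever blocking itself.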

To avoid the confusion of the trio of Event objects
(set_cmd_assigned_signal, set_cmd_completed, and
set_return_value_read), plus the clear() calls not shown, I abstract
them away behind a single instance of a class which I haven't shown,
but which does the same thing.

def acq( thd, lck ):
   # Step 0: assign the instruction.
   with thd.steps[0]:
      thd.test_lock_to_acquire= lck
   # Step 2: collect the result (step 1 runs in the worker).
   with thd.steps[2]:
      ret= thd.cmd_return_value
   return ret

steps[0] is a method in disguise.  Unless it's never been called
before with index 0, it waits until steps.reset() is called.  steps[1]
waits until the steps[0] context exits.  steps[2] waits until the
steps[1] context exits (even if steps[1] hasn't entered yet!) and so
on.  Later on, steps[0] enters again, and waits, again, for
steps.reset().

You may dispute that a class such as the one 'steps' is an instance of
can exist.  That was my very first question.  Given the specification
of 'acq' only, and no more, does a Steps class have enough information
to release each of its individual locks in the order of the indices,
and only in that order?  Regardless, here is how thread_loop shapes
up.  Once again, 'thd' is a container, which I'm now thinking is
-probably- misnamed.  Tell my publisher.  ;)

def thread_loop( thd ):
   while thd.cont:
      thd.cmd_return_value= Vacant
      # Step 1: the real, possibly blocking, acquire.
      with thd.steps[1]:
         ret= thd.test_lock_to_acquire.acquire()
         thd.cmd_return_value= ret
      # Step 3: fires a blank; holds off reset() until step 2 is done.
      with thd.steps[3]:
         pass
      thd.steps.reset()

Here we find the missing steps[1] from earlier, which runs acquire()
on the lock that acq assigned in steps[0].  If it returns, it records
the return value; then steps[3] blocks until the steps[2] context
completes.  That block fires a blank, but makes sure that
steps.reset() isn't called before steps[2] is done.  The reset() call
could go inside the steps[3] block, if you prefer.
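As for whether such a Steps class can exist: one reconstruction that
satisfies the behaviour described, using a single shared Condition, is
sketched below.  The whole implementation is my guess, not the
author's 44-line original; only the __getitem__/reset() interface and
the ordering semantics come from the post, and I've used
contextlib.contextmanager where the post's WithObj/partial pair would
go.

```python
import threading
from contextlib import contextmanager

class Steps:
    """Entering steps[i] blocks until steps[i-1] has exited; steps[0]
    re-enters only after reset().  One shared Condition is enough."""
    def __init__(self):
        self._cond = threading.Condition()
        self._done = 0          # how many steps have exited this round

    def __getitem__(self, index):
        @contextmanager
        def step():
            with self._cond:
                # Wait for every earlier step to have exited,
                # even ones that haven't entered yet.
                while self._done != index:
                    self._cond.wait()
            try:
                yield
            finally:
                with self._cond:
                    self._done = index + 1
                    self._cond.notify_all()
        return step()

    def reset(self):
        with self._cond:
            self._done = 0      # rearm steps[0]
            self._cond.notify_all()

# Three threads started in reverse order still run in index order.
steps = Steps()
order = []

def body(i):
    with steps[i]:
        order.append(i)

threads = [threading.Thread(target=body, args=(i,)) for i in (2, 1, 0)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each body waits for its predecessor's exit, the append calls
happen strictly in index order no matter how the threads are
scheduled.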

The implementation specifics of the timeout, whether it belongs in
steps[1] or steps[2], I've intentionally left open.  Either way, it
might take more than the brackets operator alone, such as:

      with thd.steps.timingout( step= 1, timeout= 1 ):

or

      with thd.steps[1].timingout( 1 ):

The latter is actually possible.  steps[n] may return a newly-created
object instance with only __enter__ and __exit__ methods, which just
reroute back to the steps instance, so it wouldn't be hard to add a
timingout method to handle this.  Either way, I find this
(*subjective) a fabulous illustration of the power of the context
manager, especially with the extra timingout option, and it's a snap
to write.  For the record, Steps was only 44 lines long!  The return
from __getitem__ was particularly cute:

        return WithObj(
            partial( self.ienter, index ),
            partial( self.iexit, index ) )
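A WithObj consistent with that return statement could be as small as
the following.  Note that `with` looks __enter__/__exit__ up on the
type, so they must be real methods delegating to the stored callables;
everything here beyond the two partial calls, including the timingout
variant and the Recorder stand-in, is my guess at the shape, not the
original code.

```python
from functools import partial

class WithObj:
    """Minimal context-manager shim around an (enter, exit) pair."""
    def __init__(self, enter, exit):
        self._enter = enter
        self._exit = exit
    def __enter__(self):
        return self._enter()
    def __exit__(self, exc_type, exc_val, tb):
        return self._exit(exc_type, exc_val, tb)
    def timingout(self, timeout):
        # Hypothetical: reroute entry through the same callable,
        # now with a timeout keyword, keeping the same exit.
        return WithObj(partial(self._enter, timeout=timeout), self._exit)

class Recorder:
    """Stand-in for the Steps instance; just logs the rerouted calls."""
    def __init__(self):
        self.log = []
    def ienter(self, index, timeout=None):
        self.log.append(('enter', index, timeout))
    def iexit(self, index, exc_type, exc_val, tb):
        self.log.append(('exit', index))
        return False            # never suppress exceptions

rec = Recorder()
step1 = WithObj(partial(rec.ienter, 1), partial(rec.iexit, 1))
with step1:
    pass
with step1.timingout(1):
    pass
```

After both with-blocks, the recorder's log shows the plain entry, then
the timed one, each paired with its exit.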


Any questions?
