what's the point of rpython?

Paul Rubin
Wed Jan 21 23:46:02 EST 2009


Rhamphoryncus <rhamph at gmail.com> writes:
> a) The contended case is the issue, not the uncontended case.  An
> uncontended lock is just constant overhead, not a barrier to
> scalability

a1) Really what matters is the actual mix between contended and
uncontended accesses, and the synchronization strategy affects the
amount of contention.  For example, the traditional locking strategy
involves acquiring a lock before reading the object, so two
simultaneous read-only accesses would create lock contention.  With
STM, only updates acquire a lock, so multiple read-only threads can
access the object simultaneously with no contention.  
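To make that concrete, here is a toy, seqlock-style sketch of the idea
in Python (the class and its names are just made up for illustration;
a real STM does far more bookkeeping than this): readers never touch
the lock at all, only writers do, so read-only threads can't contend
with each other.

    import threading

    class VersionedBox:
        # Toy optimistic-read container, loosely in the spirit of STM:
        # writers take a lock and bump a version counter; readers never
        # lock, they just retry if a writer got in the way.
        def __init__(self, value):
            self._lock = threading.Lock()
            self._version = 0          # even = stable, odd = write in progress
            self._value = value

        def write(self, value):
            with self._lock:           # only updates ever contend here
                self._version += 1
                self._value = value
                self._version += 1

        def read(self):
            while True:
                v1 = self._version
                value = self._value
                v2 = self._version
                if v1 == v2 and v1 % 2 == 0:   # no writer interfered
                    return value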

a2) Constant factors matter!!!  If using a different mechanism makes
Python 2x faster, that's a very powerful reason to favor the other
mechanism.

> b) Locks on linux (since the switch to futexes) are pretty close to
> free when uncontended.

I thought futexes still use a traditional locked read-modify-write
instruction even in the uncontended case, which takes on the order of
50-100 cycles: cheap compared to a system call, but quite expensive
compared with a normal single-cycle unlocked operation.
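At the Python level you can get a rough feel for that constant
overhead with timeit; this measures interpreter overhead rather than
the raw locked instruction, so treat the numbers as illustrative only:

    import timeit

    # Rough, interpreter-level comparison: an uncontended Lock
    # round-trip versus a bare increment.
    plain = timeit.timeit("x += 1", setup="x = 0", number=1000000)
    locked = timeit.timeit("lock.acquire(); x += 1; lock.release()",
                           setup="import threading; lock = threading.Lock(); x = 0",
                           number=1000000)
    print("bare increment:   %.3f s" % plain)
    print("lock + increment: %.3f s" % locked)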


> > > The second issue is the objects themselves, like a list which is
> > > mutable.  If you're using it in a single thread or writing from
> > > multiple threads this is a non-trivial constant cost.  ...
> The second issue has *nothing* to do with refcounts.  It is the
> objects themselves.  A classic example is trying to do "i += 1", which
> breaks down into "i = i + 1", which isn't an atomic operation.

Oh, I see what you mean; yes, you're right, it's best to avoid doing
those sorts of operations on shared objects too frequently.
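For the record, a minimal demonstration of the lost-update problem
(whether you actually see a lost count depends on how often the
interpreter switches threads, so this is illustrative rather than
definitive):

    import threading

    counter = 0

    def bump(n):
        global counter
        for _ in range(n):
            counter += 1        # read-modify-write, not atomic across threads

    threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Without a lock around the increment this frequently prints less
    # than 400000; how often depends on the interpreter's switch interval.
    print(counter)

Putting a lock around the increment fixes it, at exactly the kind of
constant cost discussed above.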

> Java's ConcurrentHash map gets this right.

I'll have to look that up; it sounds interesting.  I've been staying
away from Java, but a situation is coming up where I may have to (ugh)
start using it.  Thanks.
