The Future of Python Threading

Nick Craig-Wood nick at craig-wood.com
Sat Aug 11 10:30:05 EDT 2007


Bjoern Schliessmann <usenet-mail-0306.20.chr0n0ss at spamgourmet.com> wrote:
>  Nick Craig-Wood wrote:
>  [GIL]
> > That is certainly true.  However the point being is that running
> > on 2 CPUs at once at 95% efficiency is much better than running on
> > only 1 at 99%...
> 
>  How do you define this percent efficiency?

Those are hypothetical numbers.  I guess that a finely locked Python
will spend a lot more time locking and unlocking individual objects
than it currently does locking and unlocking the GIL, for two
reasons:

1) the GIL is in cache at all times and therefore "hot" and quick to
access

2) much more locking and unlocking will need to be done, once per
object touched rather than once per thread switch.
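A rough way to feel the cost of point 2 is to time a tight loop that
takes and releases a lock around every operation, the way a
per-object-locked interpreter would have to, against the same loop
with no locking at all.  This is only a minimal sketch: the iteration
count and the use of a single `threading.Lock` as a stand-in for a
per-object lock are illustrative assumptions, not measurements of any
real implementation.

```python
import threading
import time

N = 1_000_000

def unlocked():
    # Plain increments, protected only by the (hypothetical) big lock.
    total = 0
    for _ in range(N):
        total += 1
    return total

def locked():
    # Simulate taking a per-object lock around every single operation,
    # as a finely locked interpreter would have to do.
    lock = threading.Lock()
    total = 0
    for _ in range(N):
        with lock:
            total += 1
    return total

for fn in (unlocked, locked):
    start = time.perf_counter()
    result = fn()
    elapsed = time.perf_counter() - start
    print(f"{fn.__name__}: {result} in {elapsed:.3f}s")
```

On a typical build the locked version is several times slower, which
is the overhead the hypothetical 95% vs 99% figures were gesturing at.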

> >>> The truth is that the future (and present reality) of almost
> >>> every form of computing is multi-core,
> >> 
> >>  Is it? 8)
> > 
> > Intel, AMD and Sun would have you believe that yes!
> 
>  Strange, in my programs, I don't need any "real" concurrency (they
>  are network servers and scripts). Or do you mean "the future of
>  computing hardware is multi-core"? That indeed may be true.

I meant the latter.  I agree with you, though, that not all programs
need to be multi-threaded.  Hence my proposal for two Python binaries.

> >>  The question is: If it really was, how much of useful
> >>  performance gain would you get?
> > 
> > The linux kernel has been through these growing pains already... 
> > SMP support was initially done with the Big Kernel Lock (BKL)
> > which is exactly equivalent to the GIL.
> 
>  So, how much performance gain would you get? Again, managing
>  fine-grained locking can be much more work than one simple lock.

Assuming you are not IO bound but compute bound, and that the
computation is being done in Python, you'll get a speed-up roughly
proportional to the number of processors you have, minus a small
amount for locking overhead.
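The flip side, and the motivation for breaking up the GIL, is easy to
demonstrate on a current build: compute-bound pure-Python work gets no
speed-up at all from threads, because only one thread can hold the GIL
at a time.  A minimal sketch (the loop size is arbitrary, and the
timings printed are illustrative, not benchmarks):

```python
import threading
import time

def count_down(n):
    # Pure-Python compute-bound loop; holds the GIL throughout.
    while n > 0:
        n -= 1

N = 5_000_000

# Single-threaded baseline.
start = time.perf_counter()
count_down(N)
serial = time.perf_counter() - start

# Two threads splitting the same total amount of work.
start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N // 2,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
parallel = time.perf_counter() - start

# Under the GIL the threaded version is no faster (often slower, due
# to thread-switching overhead); a free-threaded "python-mt" build is
# what would let it approach serial / 2 on two CPUs.
print(f"serial: {serial:.2f}s, two threads: {parallel:.2f}s")
```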

> > The linux kernel has moved onwards to finer and finer grained
> > locking.
> 
>  How do you compare a byte code interpreter to a monolithic OS
>  kernel?

In this (locking) respect they are quite similar actually.  You can
think of kernel code as the Python interpreter (BKL vs GIL), and of
user space as C extensions running with the GIL released, calling
back into the Python interpreter / kernel.

> > I'd like to see a python build as it is at the moment and a
> > python-mt build which has the GIL broken down into a lock on each
> > object. python-mt would certainly be slower for non threaded
> > tasks, but it would certainly be quicker for threaded tasks on
> > multiple CPU computers.
> 
>  From where do you take this certainty? For example, if the program
>  in question involves mostly IO access, there will be virtually no
>  gain. Multithreading is not Performance.

Yes, you are right of course.  IO bound tasks don't benefit from
multi-threading; in fact it's usually the reverse.  Twisted covers
this ground extremely well in my experience.  However, IO bound tasks
probably aren't taxing your quad core chip either...

> > The user could then choose which python to run.
> > 
> > This would of course make C extensions more complicated...
> 
>  Also, C extensions can release the GIL for long-running
>  computations.

Provided they stay in C.  If they call back into any Python code then
they need to reacquire it.
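This is visible even from pure Python: ctypes releases the GIL around
each foreign call, just as a C extension wrapped in
Py_BEGIN_ALLOW_THREADS / Py_END_ALLOW_THREADS does, so two threads
blocked in a C function can genuinely overlap.  A minimal sketch;
loading libc symbols via CDLL(None) is a Linux/macOS assumption:

```python
import ctypes
import threading
import time

# CDLL(None) exposes symbols already loaded into the process, which on
# Linux/macOS includes libc's sleep().  ctypes drops the GIL for the
# duration of each foreign call, like Py_BEGIN_ALLOW_THREADS in C.
libc = ctypes.CDLL(None)

start = time.perf_counter()
threads = [threading.Thread(target=libc.sleep, args=(1,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Two one-second C sleeps overlap because neither holds the GIL while
# blocked, so the wall time is about 1s, not 2s.
print(f"two concurrent C sleeps took {elapsed:.2f}s")
```

The moment either thread needs to touch a Python object again, it has
to reacquire the GIL first, which is where the serialisation comes
back.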

-- 
Nick Craig-Wood <nick at craig-wood.com> -- http://www.craig-wood.com/nick


