[C++-sig] Managing the GIL across competing threads

Niall Douglas s_sourceforge at nedprod.com
Sun Mar 18 18:13:12 CET 2012


On 17 Mar 2012 at 22:20, Adam Preble wrote:

> > If by "Python side" you mean Boost.Python, then I agree: BPL has no
> > support for GIL management at all, and it really ought to. This was
> > one of the things that was discussed in the BPL v3 discussions on
> > this list a few months ago.
> >
> Hey, do you know any terms or thread names where I could go digging
> through some of those discussions?  I'm just trying things superficially
> and finding some results that are at least interesting, but I'm not sure
> I've got the hang of it yet.

Try "[Boost.Python v3] Conversions and Registries", about October of 
last year.

If you search right back many years on this list, you'll find I used 
to maintain a patch which implemented GIL management for 
Boost.Python. It has long since bitrotted, though.

> > You need to clarify what you mean by "own subsystem". Do you mean
> > "own process"?
> >
> In the model I'm using, a subsystem would be a thread taking care of a
> particular resource.  In this case, I figured I would make the Python
> interpreter that resource, and install better guards around interacting
> with it.  For one thing, rather than other threads grabbing the GIL
> themselves, they would enqueue work for that thread to execute.  That's
> about as far as I got with it in my head.  I managed to unjam the
> deadlock I was experiencing by eliminating the contention between the
> two conflicting threads.

The only way to get two Python interpreters to run at once is to put 
each into its own process. As it happens, IPC is usually fast enough 
relative to the slowness of Python that this often works very well 
in practice.
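Purely by way of illustration (this is not from your code, and it's 
POSIX-only), a bare-bones sketch of the one-interpreter-per-process 
model:

#include <Python.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    // Each forked child initialises its own interpreter with its own
    // GIL, so the two scripts run genuinely in parallel.  Whatever IPC
    // you choose (pipes, sockets, shared memory) goes where the
    // PyRun_SimpleString call is.
    for (int i = 0; i < 2; ++i)
    {
        if (fork() == 0)
        {
            Py_Initialize();
            PyRun_SimpleString(
                "import os; print('pid %d has its own interpreter' % os.getpid())");
            Py_Finalize();
            _exit(0);
        }
    }
    while (wait(NULL) > 0) {}   // reap both children
    return 0;
}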

> > You're going to have to be a lot clearer here. Do you mean you want
> > BPL to give up the GIL until the wait is done, or Python?
> >
> Something, somewhere.  I wasn't being too picky.  I wondered if there was
> a way to do it that I hadn't found in the regular Python headers.

Generally Python releases the GIL around anything it thinks might 
wait. So long as you write your extension code to do the same, all 
should be well.
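For example, in a Boost.Python extension that means something along 
these lines (slow_operation here is just a made-up stand-in for 
whatever might actually block):

#include <boost/python.hpp>
#include <unistd.h>

// A blocking call exposed to Python.  The GIL is dropped around the
// wait so other Python threads keep running in the meantime.
double slow_operation(double seconds)
{
    Py_BEGIN_ALLOW_THREADS
    usleep(static_cast<useconds_t>(seconds * 1e6));  // stand-in for real blocking work
    Py_END_ALLOW_THREADS
    return seconds;
}

BOOST_PYTHON_MODULE(example)
{
    boost::python::def("slow_operation", &slow_operation);
}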

> > Regarding inconsistency over releases and locks, that's 101 threading
> > theory. Any basic threading programming textbook will tell you how to
> > handle such issues.
> 
> I'm hoping I'm asking about, say, 102 threading stuff instead. ;)
>  Specifically, I'm trying my darndest to keep explicit lock controls
> outside of the main code because it really is hard to get right all the
> time.  Rather, I am trying to apply some kind of structure.  The
> subsystem model I did a terrible job of describing above is an example,
> as is using promises and futures.  If there are tricks, features, or
> anything similar to keep in mind, that would steer how I approach this.

Y'see, I'd look at promises and futures primarily as a latency-hiding 
mechanism rather than as a way of handling locks. I agree it would 
have been super-sweet if Python had been much more thread-aware in 
its design, but we play the hand we're dealt. Trying to bolt futures 
and promises onto a system which wasn't designed for them is likely 
to be self-defeating.

There is a school of thought that multithreading is only really 
worth doing in statically compiled languages. Anything interpreted 
tends, generally speaking, to hold its locks at far too coarse a 
granularity for threading to be useful for anything except i/o. 
Python is certainly extremely coarse-grained in this respect, and 
I'd generally avoid using threads in Python at all except as a way 
of portably doing asynchronous i/o. Otherwise it's not worth it. If 
you're thinking it is, it's time for a redesign.

Niall

-- 
Technology & Consulting Services - ned Productions Limited.
http://www.nedproductions.biz/. VAT reg: IE 9708311Q.
Work Portfolio: http://careers.stackoverflow.com/nialldouglas/
