[C++-sig] Re: boost::python and threads

Vladimir Vukicevic vladimir at pobox.com
Sun Jul 6 20:09:29 CEST 2003


David Abrahams wrote:

 >>FooProxy is a remote object that I get as a result of calling various
 >>functions to get a proxy -- it has a "void bar();" (non-virtual) that
 >>I can call. 
 >
 >
 >Hmm.  This is totally off-topic, but it seems odd to me (and
 >inconvenient, in C++) that FooProxy is not derived from Foo.
 >
Live servants and proxies live in different class hierarchies; the Proxy 
objects just know how to add their tag to a packet and serialize 
arguments to operations, whereas Foo might have additional data members 
and the like.  (The library in question is the Ice communications 
library, at http://www.zeroc.com/ .)

 >>However, because the call can be made synchronously, I
 >>need to wrap the call to bar() with Py_BEGIN_ALLOW_THREADS and
 >>Py_END_ALLOW_THREADS. 
 >
 >Well, not exactly, IIUC.  The reasons to wrap it in
 >Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS are:
 >
 >   1. Because it might take a long time and you want other threads to
 >      be able to use the Python interpreter.
 >
 >   2. Because the call actually *depends* on other threads using the
 >      Python interpreter during its execution in order to be able to
 >      reach completion.

Right; I have both cases happening -- i.e. there's a 
"waitForShutdown();" that you can call on the main message dispatcher, 
and if you call that without releasing the GIL, you shoot yourself in 
the foot :).  Case #2 is what I'm trying to deal with now: I have an RPC 
invoked by process B on process A, and in A's handler it attempts to 
execute an RPC on process B -- and it blocks, since B is still waiting 
for the completion of the original call (while holding the GIL).
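
Concretely, what case #2 needs on B's side is to drop the GIL around the 
outbound synchronous call, roughly like this (a hand-rolled sketch, not my 
generated wrapper code; "FooPrx" and "call_bar_without_gil" are placeholder 
names of mine):

#include <Python.h>

// 'prx' is the proxy handle for Foo; bar() is the blocking synchronous RPC.
void call_bar_without_gil(FooPrx& prx)
{
    PyThreadState* save = PyEval_SaveThread();   // release the GIL
    try {
        prx->bar();             // may cause a nested request to be dispatched
                                // into Python on another thread while we block
    } catch (...) {
        PyEval_RestoreThread(save);              // re-acquire before unwinding
        throw;
    }
    PyEval_RestoreThread(save);                  // re-acquire the GIL
}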

 >>It's this problem that I'm directly dealing with -- I can't figure
 >>out a way to wrap class FooProxy such that I can manage the current
 >>ThreadState and the GIL around them.  I can create another wrapper
 >>class
 >
 >Do you mean like a virtual-function-dispatching class, the kind with
 >the initial "PyObject* self" argument to its constructors?

Yes.

 >>that does the right thing and calls FooProxy::bar(), but
 >>functions are returning a FooProxy (not mywrapper_for_FooProxy)
 >
 >
 >I thought you said that all the C++ interfaces passed FooProxyHandle
 >(?)

Sorry, my mistake.. it is a FooProxyHandle.  Internally FooProxy is 
typedef'd to ::Ice::ProxyHandle< ::IceProxy::Foo> :)

 >>and I already have a Handle<FooProxy> as a wrapper specified in the
 >>boost::python class_... so I end up with incorrect types half the
 >>time.
 >
 >
 >Specifically, what problem are you having?  Could you show a function
 >you are unable to wrap properly and describe the specific problems
 >you're having?

Sure; see end of message, as it's somewhat lengthy and I didn't want to 
clutter up the replies. (*)

 >>However, an alternative solution that I was thinking of is to bake the
 >>thread smarts directly into boost::python. 
 >
 >
 >I have always thought that should happen in some form or other anyway.
 >
 >>It seems to me that call_method and similar should always be wrapped
 >>with a PyGILState_Ensure/PyGILState_Release. 
 >
 >
 >Some people know their extensions are only being called on the main
 >thread and not everyone will be ready to pay the cost for acquiring
 >the GIL on every callback.  IMO supplying a GILState class which
 >acquires on construction and releases on destruction *should* be
 >enough.

Hmm, I'm not sure what you mean; where would you apply the GILState 
class?  I agree, though, that acquiring the GIL on every callback is 
heavy-handed, but it's necessary for some applications (like mine, where 
I don't know exactly which thread a callback will get invoked on for a 
particular object instance).
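
For what it's worth, the shape I'd imagine for such a class is an RAII guard 
over PyGILState_Ensure()/PyGILState_Release() -- something like the sketch 
below ("gil_guard" is just my name for it; nothing like this exists in 
boost::python today):

#include <Python.h>

class gil_guard
{
public:
    gil_guard() : state_(PyGILState_Ensure()) {}   // acquire the GIL, creating
                                                    // a thread state if needed
    ~gil_guard() { PyGILState_Release(state_); }    // release on scope exit

private:
    PyGILState_STATE state_;
    gil_guard(gil_guard const&);                    // noncopyable
    gil_guard& operator=(gil_guard const&);
};

PyGILState_Ensure() is new in Python 2.3, but it gives exactly the 
"callable from any thread" behaviour I need.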

 >>Also, an additional call policy could be added, something like
 >>"allow_threads", which would cause b::p to call
 >>Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS around the function
 >>invocation on C++.  This seems like a much more generally useful
 >>solution than me trying to do the dance in wrapper classes.
 >
 >I think some change to the requirements and usage of CallPolicies
 >would be necessary in order to support that (see my reply to
 >Nikolay), but I agree with the general idea.
 >
 >>Is this the right way to go? 
 >
 >Probably.  I also think that some people want automatic
 >Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS for *all* of their wrapped
 >functions, and that should probably be configurable, too.  You have to
 >be careful with that, though: once you do Py_BEGIN_ALLOW_THREADS you
 >can't use any Boost.Python facilities which use the Python 'C' API
 >(e.g. object, handle<>, ...) until you reach Py_END_ALLOW_THREADS.

Yep; I assume invoke.hpp is the lowest level at which the C++ function 
is actually invoked.  I'd like to do the wrapping in caller.hpp, at 
which point I could extend call policies, but invoke() uses a result 
converter which will need to make Python API calls.  So I wrap the 
actual call to f() in invoke.hpp.  The only wrinkle I'm having now is in 
call<> and call_method<>, where the return type is void (because the 
compiler doesn't like "R _r = baz(); return _r;" where R is void :) -- 
I've tried, unsuccessfully, to specialize these functions for return 
types of void, but I'm afraid my C++-fu isn't making the grade.
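
Something along these lines is presumably what's needed -- dispatch on 
is_void<R> so that the void instantiation never declares an R local.  This is 
only a sketch with my own names, and restoring the thread state on exceptions 
is omitted for brevity:

#include <Python.h>
#include <boost/type_traits/is_void.hpp>
#include <boost/mpl/bool.hpp>

template <class R, class F>
R call_without_gil(F f, boost::mpl::false_)       // non-void result
{
    PyThreadState* save = PyEval_SaveThread();    // ~ Py_BEGIN_ALLOW_THREADS
    R r = f();
    PyEval_RestoreThread(save);                   // ~ Py_END_ALLOW_THREADS
    return r;
}

template <class R, class F>
void call_without_gil(F f, boost::mpl::true_)     // R == void: nothing to store
{
    PyThreadState* save = PyEval_SaveThread();
    f();
    PyEval_RestoreThread(save);
}

template <class R, class F>
R call_without_gil(F f)
{
    // Pick the right overload at compile time based on the result type.
    return call_without_gil<R>(
        f, boost::mpl::bool_<boost::is_void<R>::value>());
}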

 >>If so, I'd appreciate any pointers on implementing a new call
 >>policy, especially if it's possible to both specify allow_threads
 >>and a return_value_policy.
 >
 >I think you'd better start by looking at the macro definition of
 >Py_BEGIN_ALLOW_THREADS et al and then at how policies are used to see
 >how state can be stored for the duration of the call they affect, as
 >the macros are wont to do.

* Specific function problem:

Each ProxyHandle<Proxy::Foo> exports (via the ProxyHandle class) a 
checkedCast() static member that can be used to convert a 
ProxyHandle<Proxy::Object> (which is the root of all proxies) to a 
specific proxy type.  checkedCast() actually creates a new 
ProxyHandle<Proxy::Foo> object and then copies the target server 
information into it from the ProxyHandle<Proxy::Object>, so there's no 
"cast" in the C++ sense going on (though it does check whether it can 
dynamic_cast<> the Proxy::Object* to a Proxy::Foo*, and if so, just does 
that).

To support checkedCast in Python, I create wrapper functions such as:

namespace pyce_Hello_casts {

::IceInternal::ProxyHandle< ::IceProxy::Hello>
_Hello_checkedCast(const ::Ice::ObjectPrx& o)
{
    return ::IceInternal::ProxyHandle< ::IceProxy::Hello>::checkedCast(o);
}

}

and my class_<> def looks like:

class_< ::IceProxy::Hello,
        ::IceInternal::ProxyHandle< ::IceProxy::Hello >,
        bases< ::IceProxy::Ice::Object >,
        boost::noncopyable >("HelloPrx")
    .def("sayHello", &::IceProxy::Hello::sayHello)
    .def("checkedCast", pyce_Hello_casts::_Hello_checkedCast)
    .staticmethod("checkedCast")
    ;

This all works fine, until I want to intercept the call to sayHello() 
(to save/restore the ThreadState goop).  I can create a wrapper class 
that derives from ::IceProxy::Hello, but then I have to deal with that 
PyObject* constructor, for which I have no need here -- I'm never going 
to utilize call_method, and HelloPrx will never be derived from in 
Python-land.  The sayHello() and other member functions are also not 
virtual.  Alternatively, I could do a dance with a custom-written 
overload dispatch for each function (sketched below).  If I went the 
wrapper route, I would need to implement my own version of checkedCast 
that can convert to my wrapper, add some implicitly_convertible 
statements, and make sure that no references to ::IceProxy::Hello (or 
the equivalent handle) get exposed anywhere... certainly feasible, but a 
pain, especially for cases where I get a handle to a ::IceProxy::Hello 
in C++ and would have to convert.
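
The per-function dance would look something like this (a sketch only; 
"pyce_Hello_threads" is a made-up name, and error handling around the 
thread-state restore is omitted):

namespace pyce_Hello_threads {

// Free function taking the wrapped class as its first argument; def()'d in
// place of &::IceProxy::Hello::sayHello, so Python sees an ordinary method.
void sayHello(::IceProxy::Hello& self)
{
    PyThreadState* save = PyEval_SaveThread();   // release the GIL
    self.sayHello();                             // blocking RPC
    PyEval_RestoreThread(save);                  // re-acquire it
}

}

// ... and in the class_<> above:
//     .def("sayHello", pyce_Hello_threads::sayHello)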

But both approaches seem far too verbose.  I'll probably create a 
call_method_with_threads<> so that the non-GIL-acquiring call_method<> 
is still available; extending call policies would allow a nice 
implementation of GIL-releasing and non-GIL-releasing C++ calls in 
caller.hpp, but the problem with the ResultConverter being called from 
invoke.hpp still remains.  If there were a base class for all result 
converters, it could perhaps acquire the GIL in its constructor and 
release it in its destructor, though no such class exists (that I can 
see?), and it would be yet another mutex acquisition/release.
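
What I'm picturing for call_method_with_threads<> is essentially 
call_method<> bracketed by PyGILState_Ensure/PyGILState_Release; only the 
no-argument, non-void case is sketched here (again, my own names, not 
existing Boost.Python code):

#include <boost/python/call_method.hpp>

template <class R>
R call_method_with_threads(PyObject* self, char const* name)
{
    PyGILState_STATE s = PyGILState_Ensure();    // safe from any thread
    try {
        R result = boost::python::call_method<R>(self, name);
        PyGILState_Release(s);
        return result;
    }
    catch (...) {
        PyGILState_Release(s);                   // don't leak the GIL state
        throw;
    }
}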

I'm currently running into a snag in invoke.hpp, though; I've modified 
the non-void-return versions of invoke() as follows:

template <class RC, class F BOOST_PP_ENUM_TRAILING_PARAMS_Z(1, N, class AC)>
inline PyObject* invoke(fn_tag, RC*, F& f
                        BOOST_PP_ENUM_TRAILING_BINARY_PARAMS_Z(1, N, AC, & ac))
{
    PyThreadState *_save;
    Py_UNBLOCK_THREADS
    typename boost::function_traits<F>::result_type _r =
        f( BOOST_PP_ENUM_BINARY_PARAMS_Z(1, N, ac, () BOOST_PP_INTERCEPT) );
    Py_BLOCK_THREADS
    return RC()(_r);
}

.. but the function_traits invocation doesn't seem to be doing what I 
want, as I'm getting errors of the form:
/usr/local/boost/boost/python/detail/invoke.hpp:105: base class `
   boost::detail::function_traits_helper<Ice::ObjectPtr 
(Ice::Stream::**)(const
   std::string&, const std::string&, const Ice::ObjectFactoryPtr&)>' has
   incomplete type

I apologize for the long message; I hope I've made things clearer rather 
than muddying the waters further.  Thanks for the help and ideas!
    - Vlad
