From adam.preble at gmail.com Tue Mar 6 03:23:26 2012 From: adam.preble at gmail.com (Adam Preble) Date: Mon, 5 Mar 2012 20:23:26 -0600 Subject: [C++-sig] Retaining a useful weak_ptr to a wrapped C++ interface implemented in Python Message-ID: I am trying to break a smart pointer cycle in my application. There is an interface that can be implemented in both c++ and Python, and it is used for messaging callbacks. I can contain a list of these as shared pointers, but it only makes sense for cases where the owner of the container has exclusive control of the object in question. These work fine in both c++ and Python. I decided for the messaging callback list I would switch them to weak_ptr's. I get into trouble if I register a Python implementation of the callback with ownership staying within Python. When it comes time to communicate a message back, I see that the weak_ptr has: use_count = 0 weak_count = 1 The pointer fails to cast. Meanwhile, the object is looks very much alive in Python. I have a destructor in the c++ definition interface for debugging and I see it never gets called. Similarly, __del__ doesn't get called on the Python implementation, although I am lead to believe I can't trust it. Perhaps most compelling is that I can continue to poke and prod at the object in the Python shell well after this failed weak_ptr cast. I am wondering if there is anything I can do for the cast to succeed. At this point all I can think of is making the owner of the container get a specific reference to the blasted thing, which I'd rather not have to do. -------------- next part -------------- An HTML attachment was scrubbed... URL: From brandsmeier at gmx.de Tue Mar 6 10:47:01 2012 From: brandsmeier at gmx.de (Holger Brandsmeier) Date: Tue, 6 Mar 2012 10:47:01 +0100 Subject: [C++-sig] Retaining a useful weak_ptr to a wrapped C++ interface implemented in Python In-Reply-To: References: Message-ID: Adam, you probably run into a similar problem that I ran into (not with boost::shared_ptr but with our custom implementation which strongly mimics boost::shared_ptr). I wanted to have a class in C++ that is extended in python, and it should store a weak_ptr to itself. Unfortunately I didn't solve this yet, I stored a shared_ptr which means that the object gets never deleted. I have to revisit this at a later point, but I'll try to explain you what's going on. You probably have a function like this void addCallback(share_ptr<> arg) { callbacks.push_back(weak_ptr<>(arg); } Or you immediately take the argument as a weak_ptr, that doesn't matter. The more important fact is, you created that object in some C++, you have that object in python and then you call addCallback from the python context. So, before the function exits your object `arg` exists at least in three places: 1) somewhere in C++ where it was created 2) in the python context 3) in the context of addCallback To understand what is going wrong, I need to explain that the shared pointer in context 1 and 3 are not the same! They share the same deallocator and the same object, but they are not the same shared pointer, i.e. their use count and weak_count are not the same. That is due to the way boost python handles step 2, that magic is in these three files (in boost/python): converter/shared_ptr_to_python.hpp converter/shared_ptr_from_python.hpp converter/shared_ptr_deleter.hpp In `shared_ptr_deleter.hpp` there is a certain magic implemented, that does the following: Assume you have class `Parent` and a class `Child` derived from it. 
Now you can do: - create an instance of Child C++ and bring it to python as shared_ptr - pass that instance to C++ (via shared_ptr or shared_ptr) - get it later back from C++ but as a shared_ptr - magic: you can treat that instance as a shared_ptr In C++ you would need to do a dynamic cast to get this functionallity, but because that object has been known to python to be an instance of Child, boost::python automatically makes it an instance of Child, nice right? Unfortunately your (and my) problem are a consequence of this. When you go from 2->3 boost::python prepares for doing its magic. It doesn't just return a copy of the shared_ptr from 1), it creates a new shared_ptr with a special Deallocator object. The use_count at that moment is 2: one for python 2) and one for addCallback() 3). When the function addCallback() finishes, the use_count=1 (from python) and weak_count=1 from 3). Once the python context ends then use_count=0 and weak_count=1, and I believe that is exactly what you observe. In this case as use_count drops to 0 the boost custom Deallocator gets called. This is usally not bad, as he just deregisters (decreases the use cound by 1) in the shared_ptr for context 1.Only if that use_count in context 1) would drop to 0 the object would get deleted (that's why you don't observe it to be deleted). The problem is now, that the two weak/shared ptrs (which still point to a healthy and alive object) are now disconnected. So when you try to turn the weak_ptr in context3 to a strong pointer you would get serious problems. I hope this is a correct explanation of the sitation. To solve this would need to revist the magic for shared_ptrs in boost::python. I plan to try and solve it some point later, but I am no regular developer for boost::python and I can not promise that I will succeed, nor when. -Holger On Tue, Mar 6, 2012 at 03:23, Adam Preble wrote: > I am trying to break a smart pointer cycle in my application. ?There is an > interface that can be implemented in both c++ and Python, and it is used for > messaging callbacks. ?I can contain a list of these as shared pointers, but > it only makes sense for cases where the owner of the container has exclusive > control of the object in question. ?These work fine in both c++ and Python. > ?I decided for the messaging callback list I would switch them to > weak_ptr's. > > I get into trouble if I register a Python implementation of the callback > with ownership staying within Python. ?When it comes time to communicate a > message back, I see that the weak_ptr has: > > use_count = 0 > weak_count = 1 > > The pointer fails to cast. ?Meanwhile, the object is looks very much alive > in Python. ?I have a destructor in the c++ definition interface for > debugging and I see it never gets called. ?Similarly, __del__ doesn't get > called on the Python implementation, although I am lead to believe I can't > trust it. ?Perhaps most compelling is that I can continue to poke and prod > at the object in the Python shell well after this failed weak_ptr cast. > > I am wondering if there is anything I can do for the cast to succeed. ?At > this point all I can think of is making the owner of the container get a > specific reference to the blasted thing, which I'd rather not have to do. 
> > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From adam.preble at gmail.com Tue Mar 6 16:31:06 2012 From: adam.preble at gmail.com (Adam Preble) Date: Tue, 6 Mar 2012 09:31:06 -0600 Subject: [C++-sig] Retaining a useful weak_ptr to a wrapped C++ interface implemented in Python In-Reply-To: References: Message-ID: On Tue, Mar 6, 2012 at 3:47 AM, Holger Brandsmeier wrote: > Adam, > > You probably have a function like this > > void addCallback(share_ptr<> arg) { > callbacks.push_back(weak_ptr<>(arg); > } > > That's pretty much what I have happening. > > So, before the function exits your object `arg` exists at least in three > places: > 1) somewhere in C++ where it was created > 2) in the python context > 3) in the context of addCallback > > Technically, it was declared and constructed from Python, but everything you say is a consequence of this is consistent with what I'm seeing. We could get into semantics here. If I create an object implementing a C++ interface, do we consider that created in Python or would it be regarded as created in the C++ runtime? > Assume you have class `Parent` and a class `Child` derived from it. > Now you can do: > - create an instance of Child C++ and bring it to python as > shared_ptr > - pass that instance to C++ (via shared_ptr or shared_ptr) > - get it later back from C++ but as a shared_ptr > - magic: you can treat that instance as a shared_ptr > In C++ you would need to do a dynamic cast to get this functionallity, > but because that object has been known to python to be an instance of > Child, boost::python automatically makes it an instance of Child, nice > right? > > Unfortunately your (and my) problem are a consequence of this. When > you go from 2->3 boost::python prepares for doing its magic. It > doesn't just return a copy of the shared_ptr from 1), it creates a new > shared_ptr with a special Deallocator object. The use_count at that > moment is 2: one for python 2) and one for addCallback() 3). When the > function addCallback() finishes, the use_count=1 (from python) and > weak_count=1 from 3). Once the python context ends then use_count=0 > and weak_count=1, and I believe that is exactly what you observe. > > In this case as use_count drops to 0 the boost custom Deallocator gets > called. This is usally not bad, as he just deregisters (decreases the > use cound by 1) in the shared_ptr for context 1.Only if that use_count > in context 1) would drop to 0 the object would get deleted (that's why > you don't observe it to be deleted). The problem is now, that the two > weak/shared ptrs (which still point to a healthy and alive object) are > now disconnected. So when you try to turn the weak_ptr in context3 to > a strong pointer you would get serious problems. > > Sounds about like what I'm dealing with here. Perhaps of particular note is that the pointers I'm moving around are typed for a parent class, but the actual reference is to a child. > I hope this is a correct explanation of the sitation. To solve this > would need to revist the magic for shared_ptrs in boost::python. I > plan to try and solve it some point later, but I am no regular > developer for boost::python and I can not promise that I will succeed, > nor when. > > Originally I was using shared_ptr instead of weak_ptr for the callback managers, and found some stuff never got deleted. Could this process also cause the disconnect there? 
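A minimal sketch of the registration pattern under discussion, with hypothetical names (MessageCallback, addCallback); it assumes the behaviour Holger describes above, namely that Boost.Python hands addCallback a fresh shared_ptr whose control block (shared_ptr_deleter) only pins the Python object, so the stored weak_ptr expires as soon as that call-scoped shared_ptr and its Python-side owner let go, even while the Python object itself stays alive:

#include <boost/shared_ptr.hpp>
#include <boost/weak_ptr.hpp>
#include <vector>

// Hypothetical callback interface, implemented on the Python side.
struct MessageCallback {
    virtual ~MessageCallback() {}
    virtual void onMessage(int id) = 0;
};

std::vector<boost::weak_ptr<MessageCallback> > callbacks;

// Exposed to Python.  The shared_ptr that Boost.Python builds for this call
// owns its own control block (a shared_ptr_deleter keeping the Python object
// alive), separate from any shared_ptr held on the C++ side.  Once the call
// and the Python references to that particular shared_ptr are gone, its
// use_count drops to 0 and the weak_ptr stored here can no longer be locked,
// even though the Python object is still reachable from the interpreter.
void addCallback(boost::shared_ptr<MessageCallback> cb)
{
    callbacks.push_back(boost::weak_ptr<MessageCallback>(cb));
}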
-------------- next part -------------- An HTML attachment was scrubbed... URL: From brandsmeier at gmx.de Tue Mar 6 16:46:35 2012 From: brandsmeier at gmx.de (Holger Brandsmeier) Date: Tue, 6 Mar 2012 16:46:35 +0100 Subject: [C++-sig] Retaining a useful weak_ptr to a wrapped C++ interface implemented in Python In-Reply-To: References: Message-ID: Adam, >> So, before the function exits your object `arg` exists at least in three >> places: >> ?1) somewhere in C++ where it was created >> ?2) in the python context >> ?3) in the context of addCallback >> > > Technically, it was declared and constructed from Python, but everything you > say is a consequence of this is consistent with what I'm seeing. ?We could > get into semantics here. ?If I create an object implementing a C++ > interface, do we consider that created in Python or would it be regarded as > created in the C++ runtime? Regard it as beein created from C++, as it has been created from boost python. If it would be a pure python object, then it would be a different story as there would not be a shared_ptr in the first place. >> Assume you have class `Parent` and a class `Child` derived from it. >> Now you can do: >> ?- create an instance of Child C++ and bring it to python as >> shared_ptr >> ?- pass that instance to C++ (via shared_ptr or shared_ptr) >> ?- get it later back from C++ but as a shared_ptr >> ?- magic: you can treat that instance as a shared_ptr >> In C++ you would need to do a dynamic cast to get this functionallity, >> but because that object has been known to python to be an instance of >> Child, boost::python automatically makes it an instance of Child, nice >> right? >> >> Unfortunately your (and my) problem are a consequence of this. When >> you go from 2->3 boost::python prepares for doing its magic. It >> doesn't just return a copy of the shared_ptr from 1), it creates a new >> shared_ptr with a special Deallocator object. The use_count at that >> moment is 2: one for python 2) and one for addCallback() 3). When the >> function addCallback() finishes, the use_count=1 (from python) and >> weak_count=1 from 3). Once the python context ends then use_count=0 >> and weak_count=1, and I believe that is exactly what you observe. >> >> In this case as use_count drops to 0 the boost custom Deallocator gets >> called. This is usally not bad, as he just deregisters (decreases the >> use cound by 1) in the shared_ptr for context 1.Only if that use_count >> in context 1) would drop to 0 the object would get deleted (that's why >> you don't observe it to be deleted). The problem is now, that the two >> weak/shared ptrs (which still point to a healthy and alive object) are >> now disconnected. So when you try to turn the weak_ptr in context3 to >> a strong pointer you would get serious problems. >> > > Sounds about like what I'm dealing with here. ?Perhaps of particular note is > that the pointers I'm moving around are typed for a parent class, but the > actual reference is to a child. I don't think that it matters if you actually use that feature or not. Just because the functionality is there, you get the problem. I just wanted to give you an idea of why it is very nice that we have that functionality. >> I hope this is a correct explanation of the sitation. To solve this >> would need to revist the magic for shared_ptrs in boost::python. I >> plan to try and solve it some point later, but I am no regular >> developer for boost::python and I can not promise that I will succeed, >> nor when. 
>> > > Originally I was using shared_ptr instead of weak_ptr for the callback > managers, and found some stuff never got deleted. ?Could this process also > cause the disconnect there? If you are storing the result as a shared_ptr in the problem, then you will not run into problem, that is what I am doing at the moment. This implies that my class never gets deleted, which I can accept at the moment, but its actually quite bad. Maybe someone else on the list has a solution for this. At the moment I can only explain what I believe is causing the error. -Holger From adam.preble at gmail.com Tue Mar 6 17:43:50 2012 From: adam.preble at gmail.com (Adam Preble) Date: Tue, 6 Mar 2012 10:43:50 -0600 Subject: [C++-sig] Retaining a useful weak_ptr to a wrapped C++ interface implemented in Python In-Reply-To: References: Message-ID: Do have a way to unregister the callback? When I was using a shared_ptr originally, I was trying to use that, yet the objects still hung around. I was thinking that I properly didn't completely unregister it from all listeners, but I wonder now if I actually managed to cover all my tracks. I'm working through some unit tests goofs from my most recent refactoring related to all this, so I can't reliably test this out. When I get the whole thing working together again, I'm thinking of trying again with some callbacks and seeing that the objects in the shared pointers are getting destroyed. On Tue, Mar 6, 2012 at 9:46 AM, Holger Brandsmeier wrote: > If you are storing the result as a shared_ptr in the problem, then you > will not run into problem, that is what I am doing at the moment. This > implies that my class never gets deleted, which I can accept at the > moment, but its actually quite bad. > > Maybe someone else on the list has a solution for this. At the moment > I can only explain what I believe is causing the error. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michaelschuitema at gmail.com Wed Mar 14 17:45:37 2012 From: michaelschuitema at gmail.com (Michael Schuitema) Date: Wed, 14 Mar 2012 16:45:37 +0000 Subject: [C++-sig] multiple modules per dll/so? Message-ID: I would like to create a python package with several submodules, eg parent parent.child1 parent.child2 A number of the submodules use C++ extension made available via boost.python. Do I need to create a separate dll for each of them or is there a way to expose several packages with one dll? Can I have BOOST_PYTHON_MODULE(child1) BOOST_PYTHON_MODULE(child2) etc in a single dll. Cheers, mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at seefeld.name Wed Mar 14 17:57:40 2012 From: stefan at seefeld.name (Stefan Seefeld) Date: Wed, 14 Mar 2012 12:57:40 -0400 Subject: [C++-sig] multiple modules per dll/so? In-Reply-To: References: Message-ID: <4F60CE04.7090107@seefeld.name> On 2012-03-14 12:45, Michael Schuitema wrote: > I would like to create a python package with several submodules, eg > > parent > parent.child1 > parent.child2 > > A number of the submodules use C++ extension made available via > boost.python. Do I need to create a separate dll for each of them or > is there a way to expose several packages with one dll? No, you have to build separate extension modules for them. Stefan -- ...ich hab' noch einen Koffer in Berlin... 
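A rough sketch of the layout Stefan describes, using the names from the question (parent, child1, child2); each submodule is its own translation unit built into its own extension, and a plain parent/__init__.py (not shown) makes the directory a package:

// child1.cpp -- built into its own extension, installed as parent/child1.pyd (or .so)
#include <boost/python.hpp>
#include <string>

std::string hello1() { return "from child1"; }

BOOST_PYTHON_MODULE(child1)   // the init symbol must match the file name
{
    boost::python::def("hello", &hello1);
}

// child2.cpp -- a second, separately built extension: parent/child2.pyd (or .so)
#include <boost/python.hpp>
#include <string>

std::string hello2() { return "from child2"; }

BOOST_PYTHON_MODULE(child2)
{
    boost::python::def("hello", &hello2);
}

With both extensions placed inside a parent/ directory next to an ordinary __init__.py, "import parent.child1" and "import parent.child2" then behave like any other package submodules.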
From amundson at fnal.gov Wed Mar 14 21:37:53 2012 From: amundson at fnal.gov (James Amundson) Date: Wed, 14 Mar 2012 15:37:53 -0500 Subject: [C++-sig] Combining Boost Serialization with Boost Python Message-ID: <4F6101A1.8020706@fnal.gov> I have a use case involving Boost Serialization and Boost Python. I have a (mostly) C++ library wrapped with Python. The C++ code uses Boost Serialization to periodically create checkpoints from which the user can resume later. Getting the serialization and python libraries to work together didn't present any special problems until I had a case in which the user had a derived Python class (from a C++ base). In searching the web, I have seen people asking about similar cases, but never an answer. See, e.g., http://stackoverflow.com/questions/7289897/boost-serialization-and-boost-python-two-way-pickle . I have a partial solution to the problem using Boost Serialization and Python Pickle together. I would appreciate any advice on how to improve it. My solution is limited in that the state of the base C++ object is not restored. I also get an extra copy of the C++ object. Since my current use case has no state in the base object, I can live with these limitations. I would however, like to find a full solution. In the files I have attached, I can successfully run three variations on a simple test case. A class of type Base is passed to a class of type Caller. Caller calls the passed class's doit method twice, saves (checkpoints) its state, then calls the doit method again. The resume script restores the state of the Caller and Base classes, then calls the doit method again. If successful, the resumed doit call should be the same as the third doit call from the original script. In testcheckpoint1.py, the Base class is the original (C++) Base. It has no state, so the example is pretty trivial. Its doit method prints "Base::doit" -------------------------------------------------------------------------- % python testcheckpoint1.py Base::doit Base::doit checkpointed here Base::doit % python testresume.py resuming... Base::doit -------------------------------------------------------------------------- In testcheckpoint2.py I pass a derived Python class to Caller. The derived doit method displays "Derived:doit" and a counter: -------------------------------------------------------------------------- % python testcheckpoint2.py Derived::doit count = 1 Derived::doit count = 2 checkpointed here Derived::doit count = 3 % python testresume.py resuming... Derived::doit count = 3 -------------------------------------------------------------------------- In testcheckpoint3,py, I use a derived Python class with a non-trivial constructor, which is used to set the initial value of the counter: -------------------------------------------------------------------------- % python testcheckpoint3.py Derived2::doit count = 5 Derived2::doit count = 6 checkpointed here Derived2::doit count = 7 % python testresume.py resuming... Derived2::doit count = 7 -------------------------------------------------------------------------- As you can see, many things are possible with the code the way it currently exists. I would still like to figure out how to preserve the state of the base class. Any and all advice will be appreciated. --Jim Amundson -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: mymodule.cc Type: text/x-c++src Size: 3548 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testcheckpoint1.py Type: text/x-python Size: 180 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testcheckpoint2.py Type: text/x-python Size: 209 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: derivedmodule.py Type: text/x-python Size: 844 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testcheckpoint3.py Type: text/x-python Size: 211 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: testresume.py Type: text/x-python Size: 95 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: Makefile URL:

From ndbecker2 at gmail.com Thu Mar 15 13:28:19 2012
From: ndbecker2 at gmail.com (Neal Becker)
Date: Thu, 15 Mar 2012 08:28:19 -0400
Subject: [C++-sig] Combining Boost Serialization with Boost Python
References: <4F6101A1.8020706@fnal.gov>
Message-ID:

I have used this myself for some time. Here is an example:

typedef boost::mt19937 rng_t;

struct mt_pickle_suite : bp::pickle_suite {
    static bp::object getstate (const rng_t& rng) {
        std::ostringstream os;
        boost::archive::binary_oarchive oa(os);
        oa << rng;
        return bp::str(os.str());
    }

    static void setstate(rng_t& rng, bp::object entries) {
        bp::str s = bp::extract<bp::str>(entries)();
        std::string st = bp::extract<std::string>(s)();
        std::istringstream is(st);
        boost::archive::binary_iarchive ia(is);
        ia >> rng;
    }
};

From jjchristophe at yahoo.fr Thu Mar 15 14:22:00 2012
From: jjchristophe at yahoo.fr (christophe jean-joseph)
Date: Thu, 15 Mar 2012 13:22:00 +0000 (GMT)
Subject: [C++-sig] Re : [boost] Re : Using an element of a class A in a constructor of a class B (reflection with boost::python)
Message-ID: <1331817720.8663.YahooMailNeo@web132306.mail.ird.yahoo.com>

Hello

The situation is as follow. I have a C++ code that I haven't written and that I barely can modified. I am supposed to reflect this code in Python with boost. In the code, I have something looking like this:

Class A_Base {
    A_Base(){};
    ~A_Base(){};
    Whatever virtual and pure virtual functions;
}

Class A_Derived{
    A_Derived(Type1 arg1,...){whatever instructions;}
    ~A_Derived(){};
    Whatever functions;
}

Class B {
    B(A_Base& aBase, double& x){};
    ~B(){};
    Whatever functions;
}

In the C++ main, at some point aDerived A_Derived is set, and then B(aDerived, x). I need to reflect that under python. Until now, I have been able, with a simple example, to reflect a function f, which is not a ctor, from a class B using A_Base& as argument type, but I can't figure out how to deal with it for a constructor. Based on:

http://wiki.python.org/moin/boost.python/ExportingClasses

I am declaring f under both its class B and A_Base as follow:

class_A_Base ("A_Base", no_init)  //this line can be modified
    .def("f", &B::f);
class_B ("B", init< >())  //this line can be modified
    .def("f", &B::f);

But when I try this for a constructor as f, it refuse to compile. Anyone got a clue?

Thank you very much in advance for any further help.
-------------- next part --------------
An HTML attachment was scrubbed...
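For the constructor case asked about here, Boost.Python exposes constructors through init<...> rather than by taking the constructor's address; below is a rough sketch against the class names above, and it assumes A_Derived really does derive from A_Base. Because B's constructor wants a non-const double&, which a Python float cannot bind to, the sketch routes construction through a small hypothetical factory via make_constructor (a technique not mentioned in this thread):

#include <boost/python.hpp>
#include <boost/python/make_constructor.hpp>
#include <boost/shared_ptr.hpp>

namespace bp = boost::python;

// Hypothetical factory: take the double by value, then call the real
// B(A_Base&, double&) constructor with a local lvalue.
boost::shared_ptr<B> makeB(A_Base& aBase, double x)
{
    return boost::shared_ptr<B>(new B(aBase, x));
}

BOOST_PYTHON_MODULE(example)          // module name is a placeholder
{
    bp::class_<A_Base, boost::noncopyable>("A_Base", bp::no_init);
    bp::class_<A_Derived, bp::bases<A_Base> >("A_Derived", bp::init<Type1>());
    bp::class_<B, boost::shared_ptr<B>, boost::noncopyable>("B", bp::no_init)
        .def("__init__", bp::make_constructor(&makeB));
}

From Python this reads as b = example.B(aDerived, 0.5), with the caveat Jim also raises later in the thread: the Python float is not modified in place.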
URL: From adam.preble at gmail.com Fri Mar 16 05:59:03 2012 From: adam.preble at gmail.com (Adam Preble) Date: Thu, 15 Mar 2012 23:59:03 -0500 Subject: [C++-sig] (no subject) Message-ID: I discovered recently that during callbacks to Python objects, sometimes the interpreter would try to do stuff at the same time, despite the fact I made a call to ensure the GIL. I read the solution for that kind of thing I'm doing is calling PyEval_InitThreads(). This worked at first, but like more race conditions and stuff, all it takes is walking away from the computer for it all to fall apart. What I'm seeing is a rather elaborate deadlock situation revolved around securing the GIL. I think I need to basically put my interpreter in its own subsystem, design-wise, and ferret calls to invoke things in the interpreter to it, in order to ultimately get around this. However, what I'm asking of the distribution is how they've gotten around this. To give something a little more concrete--this is pretty gross: 1. Main thread starts interpreter and is running a script 2. The script defines an implementation of callback interface A 3. The script starts some objects that represent threads in the C++ runtime. These are threads 1 and 2. 4. The script starts to create an object that is wrapped from C++ 5. The object requires a resource from thread 1, where I use promises and futures enqueue and fulfill the request when thread #1 isn't busy. 6. Meanwhile, the interpreter thread is waiting for the resource since it cannot put it off any further 7. At this point, thread 2 tries to invoke the callback to interface A, and it needs the interpreter thread. 8. thread #1 needs thread #2 to complete this critical step before the get() call will complete for the master interpreter thread 9. Thread #2 needs thread #1 to finish so it can get the GIL. It's basically locked up in PyGILState_Ensure(). Heck of a deadlock. I am pondering having something else control the interpreter in its own thread and have everybody enqueue stuff up to run in it, like with the promises and futures I'm using elsewhere already. The reason is that thread #2 doesn't really need to steal the interpreter at that very exact moment. And furthermore, I'm trying to use Stackless, and it's my understanding there I can link it up so that the interpreter gets ahold of the Python runtime at controlled intervals--if desired--to invoke other stuff. -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam.preble at gmail.com Fri Mar 16 06:04:45 2012 From: adam.preble at gmail.com (Adam Preble) Date: Fri, 16 Mar 2012 00:04:45 -0500 Subject: [C++-sig] Managing the GIL across competing threads Message-ID: I edited the post so many times I forgot the subject line! I am a bad person. Maybe the most condensed question here is: How do you normally manage multiple resources competing with the GIL in a way that could cause deadlocks with it? For the details, see the long post. On Thu, Mar 15, 2012 at 11:59 PM, Adam Preble wrote: > I discovered recently that during callbacks to Python objects, sometimes > the interpreter would try to do stuff at the same time, despite the fact I > made a call to ensure the GIL. I read the solution for that kind of thing > I'm doing is calling PyEval_InitThreads(). This worked at first, but like > more race conditions and stuff, all it takes is walking away from the > computer for it all to fall apart. > > What I'm seeing is a rather elaborate deadlock situation revolved around > securing the GIL. 
I think I need to basically put my interpreter in its > own subsystem, design-wise, and ferret calls to invoke things in the > interpreter to it, in order to ultimately get around this. However, what > I'm asking of the distribution is how they've gotten around this. > > To give something a little more concrete--this is pretty gross: > 1. Main thread starts interpreter and is running a script > 2. The script defines an implementation of callback interface A > 3. The script starts some objects that represent threads in the C++ > runtime. These are threads 1 and 2. > 4. The script starts to create an object that is wrapped from C++ > 5. The object requires a resource from thread 1, where I use promises and > futures enqueue and fulfill the request when thread #1 isn't busy. > 6. Meanwhile, the interpreter thread is waiting for the resource since it > cannot put it off any further > 7. At this point, thread 2 tries to invoke the callback to interface A, > and it needs the interpreter thread. > 8. thread #1 needs thread #2 to complete this critical step before the > get() call will complete for the master interpreter thread > 9. Thread #2 needs thread #1 to finish so it can get the GIL. It's > basically locked up in PyGILState_Ensure(). > > Heck of a deadlock. I am pondering having something else control the > interpreter in its own thread and have everybody enqueue stuff up to run in > it, like with the promises and futures I'm using elsewhere already. The > reason is that thread #2 doesn't really need to steal the interpreter at > that very exact moment. And furthermore, I'm trying to use Stackless, and > it's my understanding there I can link it up so that the interpreter gets > ahold of the Python runtime at controlled intervals--if desired--to invoke > other stuff. > > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_sourceforge at nedprod.com Fri Mar 16 16:38:04 2012 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Fri, 16 Mar 2012 15:38:04 -0000 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: References: Message-ID: <4F635E5C.22389.54A17695@s_sourceforge.nedprod.com> The key to avoiding deadlocks is, in every case, the appropriate design. In what you posted you appear to be doing too much at once, so you're holding onto too many locks at once. Either replace those with a single, master lock or try to never hold more than one lock at any one time. That includes the GIL. In well designed code, one almost NEVER holds more than two locks at any time. Consider breaking object instantiation up into well defined stages. Consider throwing away your current implementation entirely and starting from scratch. Debugging a bad design will take longer and cost more than throwing it away and starting again with the benefit of hindsight. Niall On 16 Mar 2012 at 0:04, Adam Preble wrote: > I edited the post so many times I forgot the subject line! I am a bad > person. > > Maybe the most condensed question here is: How do you normally manage > multiple resources competing with the GIL in a way that could cause > deadlocks with it? For the details, see the long post. 
> > On Thu, Mar 15, 2012 at 11:59 PM, Adam Preble wrote: > > > I discovered recently that during callbacks to Python objects, sometimes > > the interpreter would try to do stuff at the same time, despite the fact I > > made a call to ensure the GIL. I read the solution for that kind of thing > > I'm doing is calling PyEval_InitThreads(). This worked at first, but like > > more race conditions and stuff, all it takes is walking away from the > > computer for it all to fall apart. > > > > What I'm seeing is a rather elaborate deadlock situation revolved around > > securing the GIL. I think I need to basically put my interpreter in its > > own subsystem, design-wise, and ferret calls to invoke things in the > > interpreter to it, in order to ultimately get around this. However, what > > I'm asking of the distribution is how they've gotten around this. > > > > To give something a little more concrete--this is pretty gross: > > 1. Main thread starts interpreter and is running a script > > 2. The script defines an implementation of callback interface A > > 3. The script starts some objects that represent threads in the C++ > > runtime. These are threads 1 and 2. > > 4. The script starts to create an object that is wrapped from C++ > > 5. The object requires a resource from thread 1, where I use promises and > > futures enqueue and fulfill the request when thread #1 isn't busy. > > 6. Meanwhile, the interpreter thread is waiting for the resource since it > > cannot put it off any further > > 7. At this point, thread 2 tries to invoke the callback to interface A, > > and it needs the interpreter thread. > > 8. thread #1 needs thread #2 to complete this critical step before the > > get() call will complete for the master interpreter thread > > 9. Thread #2 needs thread #1 to finish so it can get the GIL. It's > > basically locked up in PyGILState_Ensure(). > > > > Heck of a deadlock. I am pondering having something else control the > > interpreter in its own thread and have everybody enqueue stuff up to run in > > it, like with the promises and futures I'm using elsewhere already. The > > reason is that thread #2 doesn't really need to steal the interpreter at > > that very exact moment. And furthermore, I'm trying to use Stackless, and > > it's my understanding there I can link it up so that the interpreter gets > > ahold of the Python runtime at controlled intervals--if desired--to invoke > > other stuff. > > > > > > _______________________________________________ > > Cplusplus-sig mailing list > > Cplusplus-sig at python.org > > http://mail.python.org/mailman/listinfo/cplusplus-sig > > > -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ From adam.preble at gmail.com Fri Mar 16 18:57:55 2012 From: adam.preble at gmail.com (Adam Preble) Date: Fri, 16 Mar 2012 12:57:55 -0500 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: <4F635E5C.22389.54A17695@s_sourceforge.nedprod.com> References: <4F635E5C.22389.54A17695@s_sourceforge.nedprod.com> Message-ID: Well the current design does have a problem, but I think it's more to do with the Python side than the sum of the whole thing. Most of my threads are basically subsystems in their own rights, and whatever that subsystem manage is encapsulated inside it. 
I get into this little spats when I need to request a resource from one or more of them--not necessarily from one subsystem to another, just in my main script I have to create an object that involves a piece from one and a piece from another. The subsystems will normally get to them when they are not in their own critical sections. I just got bit when they both went through what is essentially an unprotected interpreter. So I assume I would satisfy your suggestion of a master lock if I basically put the interpreter in its own subsystem. Everybody then gets their turn at it. Something I'm curious about has to do with the busy what in my promises and future implementation. When I end up waiting on something, is there a way right now with Python to give up the GIL until the wait is done? If I were to do a Release before the wait and an Ensure right after it, will there be inconsistent issues? I ask this despite probably trying it tonight since these are the kind of issues that tend to bite me after the fact and not up front. On Fri, Mar 16, 2012 at 10:38 AM, Niall Douglas wrote: > The key to avoiding deadlocks is, in every case, the appropriate > design. > > In what you posted you appear to be doing too much at once, so you're > holding onto too many locks at once. Either replace those with a > single, master lock or try to never hold more than one lock at any > one time. That includes the GIL. In well designed code, one almost > NEVER holds more than two locks at any time. > > Consider breaking object instantiation up into well defined stages. > Consider throwing away your current implementation entirely and > starting from scratch. Debugging a bad design will take longer and > cost more than throwing it away and starting again with the benefit > of hindsight. > > Niall > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s_sourceforge at nedprod.com Sat Mar 17 19:44:06 2012 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Sat, 17 Mar 2012 18:44:06 -0000 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: References: , <4F635E5C.22389.54A17695@s_sourceforge.nedprod.com>, Message-ID: <4F64DB76.18121.5A7224F0@s_sourceforge.nedprod.com> On 16 Mar 2012 at 12:57, Adam Preble wrote: > Well the current design does have a problem, but I think it's more to do > with the Python side than the sum of the whole thing. If by "Python side" you mean Boost.Python, then I agree: BPL has no support for GIL management at all, and it really ought to. This was one of the things that was discussed in the BPL v3 discussions on this list a few months ago. If by "Python side" you just mean Python, well it's basically one big fat lock. There's nothing wrong with that design, indeed it's extremely common. > So I assume I would satisfy your suggestion of a master lock if I basically > put the interpreter in its own subsystem. Everybody then gets their turn > at it. You need to clarify what you mean by "own subsystem". Do you mean "own process"? > Something I'm curious about has to do with the busy what in my promises and > future implementation. When I end up waiting on something, is there a way > right now with Python to give up the GIL until the wait is done? If I were > to do a Release before the wait and an Ensure right after it, will there be > inconsistent issues? I ask this despite probably trying it tonight since > these are the kind of issues that tend to bite me after the fact and not up > front. You're going to have to be a lot clearer here. 
Do you mean you want BPL to give up the GIL until the wait is done, or Python? Regarding inconsistency over releases and locks, that's 101 threading theory. Any basic threading programming textbook will tell you how to handle such issues. Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ From talljimbo at gmail.com Sun Mar 18 00:22:22 2012 From: talljimbo at gmail.com (Jim Bosch) Date: Sat, 17 Mar 2012 19:22:22 -0400 Subject: [C++-sig] Re : [boost] Re : Using an element of a class A in a constructor of a class B (reflection with boost::python) In-Reply-To: <1331817720.8663.YahooMailNeo@web132306.mail.ird.yahoo.com> References: <1331817720.8663.YahooMailNeo@web132306.mail.ird.yahoo.com> Message-ID: <4F651CAE.1010205@gmail.com> On 03/15/2012 09:22 AM, christophe jean-joseph wrote: > Hello > > The situation is as follow. > I have a C++ code that I haven't written and that I barely can modified. > I am supposed to reflect this code in Python with boost. > In the code, I have something looking like this: > > Class A_Base { > A_Base(){}; > ~A_Base(){}; > Whatever virtual and pure virtual functions; > } > > Class A_Derived{ > A_Derived(Type1 arg1,...){whatever instructions;} > ~A_Derived(){}; > Whatever functions; > } > > Class B { > B(A_Base& aBase, double& x){}; > ~B(){}; > Whatever functions; > } > > In the C++ main, at some point aDerived A_Derived is set, and then > B(aDerived, x). > I need to reflect that under python. > Until now, I have been able, with a simple example, to reflect a > function f, which is not a ctor, from a class B using A_Base& as > argument type, but I can't figure out how to deal with it for a constructor. > Based on: > > http://wiki.python.org/moin/boost.python/ExportingClasses > > I am declaring f under both its class B and A_Base as follow: > > class_A_Base ("A_Base", no_init) //this line can > be modified > .def("f", &B::f); > class_B ("B", init< >()) //this line can be modified > .def("f", &B::f); > > > But when I try this for a constructor as f, it refuse to compile. You don't wrap constructors the same way as functions (I don't think you can even take their address in C++). Instead, you use: .def(init()) (where T1,T2,... are the argument types of the constructor), or do the same thing in the init argument to class_ itself (as I've done below). I think what you want is something like this (assuming A_Derived does actually inherit from A_Base, even though it doesn't in your example code): class_("A_Base", no_init); class_,noncopyable>("A_Derived", no_init); class_("B", init()); Note that the float you pass in as "x" to B's constructor will not be modified in Python; that's unavoidable in this case. If that doesn't work, please post exactly what you're trying to compile; the example you posted has a few things that obviously won't compile, so it's hard to know exactly what the problem is. Good luck! Jim From talljimbo at gmail.com Sun Mar 18 00:39:49 2012 From: talljimbo at gmail.com (Jim Bosch) Date: Sat, 17 Mar 2012 19:39:49 -0400 Subject: [C++-sig] Combining Boost Serialization with Boost Python In-Reply-To: <4F6101A1.8020706@fnal.gov> References: <4F6101A1.8020706@fnal.gov> Message-ID: <4F6520C5.4090007@gmail.com> On 03/14/2012 04:37 PM, James Amundson wrote: > I have a use case involving Boost Serialization and Boost Python. I have > a (mostly) C++ library wrapped with Python. 
The C++ code uses Boost > Serialization to periodically create checkpoints from which the user can > resume later. Getting the serialization and python libraries to work > together didn't present any special problems until I had a case in which > the user had a derived Python class (from a C++ base). In searching the > web, I have seen people asking about similar cases, but never an answer. > See, e.g., > http://stackoverflow.com/questions/7289897/boost-serialization-and-boost-python-two-way-pickle > > . > It's been a long while since I last tried to pickle Boost.Python objects, but I do recall being a lot happier with the level of control I had when I just implemented my own __reduce__ methods rather than relying on the __getstate__ and __setstate__ defined by enable_pickling(). In many cases, it was most convenient to actually just write __reduce__ in pure-Python and add it to the wrapped classes in the __init__.py file. It would have to delegate to wrapped C++ methods to do the Boost.Serialization calls, of course. Using __reduce__ would allow you to provide a specific callable to reconstruct the Python derived class, which could then ensure it does exactly the right combination of regular unpickling and C++ deserialization. Sorry that's not the review of your "almost there" solution you were looking for, but I do think you might find the problem easier to solve with __reduce__. Jim From adam.preble at gmail.com Sun Mar 18 04:20:30 2012 From: adam.preble at gmail.com (Adam Preble) Date: Sat, 17 Mar 2012 22:20:30 -0500 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: <4F64DB76.18121.5A7224F0@s_sourceforge.nedprod.com> References: <4F635E5C.22389.54A17695@s_sourceforge.nedprod.com> <4F64DB76.18121.5A7224F0@s_sourceforge.nedprod.com> Message-ID: On Sat, Mar 17, 2012 at 1:44 PM, Niall Douglas wrote: > If by "Python side" you mean Boost.Python, then I agree: BPL has no > support for GIL management at all, and it really ought to. This was > one of the things that was discussed in the BPL v3 discussions on > this list a few months ago. > > Hey do you know any terms or thread names where I could go digging through some of those discussions? I'm just trying stuff superficially and finding some things that are at least interesting, but I'm not sure I got that stuff yet. > You need to clarify what you mean by "own subsystem". Do you mean > "own process"? > > In the model I'm using, a subsystem would be a thread taking care of a particular resource. In this case, I figured I would make the Python interpreter that resource, and install better guards around interacting with it. For one thing, rather than anything else grabbing the GIL, they would enqueue stuff for it to execute. That's about as far as I got with it in my head. I managed to unjam the deadlock I was experiencing by eliminating the contention the two conflicting threads were having with each other. > You're going to have to be a lot clearer here. Do you mean you want > BPL to give up the GIL until the wait is done, or Python? > > Something, somewhere. I wasn't being to picky. I wondered if there was a way to do it that I hadn't found with the regular Python headers. > Regarding inconsistency over releases and locks, that's 101 threading > theory. Any basic threading programming textbook will tell you how to > handle such issues. I'm hoping I'm asking about, say, 102 threading stuff instead. 
;) Specifically, I'm trying my darndest to keep explict lock controls outside of the main code because it really is hard to get right all the time. Rather, I am trying to apply some kind of structure. The subsystem model I did a terrible job of mentioning is an example, as well as using promises and futures. If there are tricks, features, or something similar to keep in mind, that would steer how I approach this. -------------- next part -------------- An HTML attachment was scrubbed... URL: From talljimbo at gmail.com Sun Mar 18 16:54:22 2012 From: talljimbo at gmail.com (Jim Bosch) Date: Sun, 18 Mar 2012 11:54:22 -0400 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: References: <4F635E5C.22389.54A17695@s_sourceforge.nedprod.com> <4F64DB76.18121.5A7224F0@s_sourceforge.nedprod.com> Message-ID: <4F66052E.6000504@gmail.com> On 03/17/2012 11:20 PM, Adam Preble wrote: > > > On Sat, Mar 17, 2012 at 1:44 PM, Niall Douglas > > wrote: > > If by "Python side" you mean Boost.Python, then I agree: BPL has no > support for GIL management at all, and it really ought to. This was > one of the things that was discussed in the BPL v3 discussions on > this list a few months ago. > > > Hey do you know any terms or thread names where I could go digging > through some of those discussions? Look for "[Boost.Python v3]" and "New Major-Release Boost.Python Development" in the subject line. We didn't go into a lot of depth on the threading, I'm afraid, as one of the problems is that the guy starting the effort - me - doesn't actually know much about threaded programming. But I am hoping that I can design things in such a way that someone like Niall could easily take it from there. Jim From stefan at seefeld.name Sun Mar 18 17:05:38 2012 From: stefan at seefeld.name (Stefan Seefeld) Date: Sun, 18 Mar 2012 12:05:38 -0400 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: <4F66052E.6000504@gmail.com> References: <4F635E5C.22389.54A17695@s_sourceforge.nedprod.com> <4F64DB76.18121.5A7224F0@s_sourceforge.nedprod.com> <4F66052E.6000504@gmail.com> Message-ID: <4F6607D2.1090002@seefeld.name> On 2012-03-18 11:54, Jim Bosch wrote: > On 03/17/2012 11:20 PM, Adam Preble wrote: >> >> >> On Sat, Mar 17, 2012 at 1:44 PM, Niall Douglas >> > wrote: >> >> If by "Python side" you mean Boost.Python, then I agree: BPL has no >> support for GIL management at all, and it really ought to. This was >> one of the things that was discussed in the BPL v3 discussions on >> this list a few months ago. >> >> >> Hey do you know any terms or thread names where I could go digging >> through some of those discussions? > > Look for "[Boost.Python v3]" and "New Major-Release Boost.Python > Development" in the subject line. > > We didn't go into a lot of depth on the threading, I'm afraid, as one > of the problems is that the guy starting the effort - me - doesn't > actually know much about threaded programming. But I am hoping that I > can design things in such a way that someone like Niall could easily > take it from there. I recall seeing a discussion of locking policy being attached to individual functions by means of the return-value and argument-passing policy traits; in other words, something that's associated with from-python and to-python conversion. I found that rather elegant. I'm not sure whether anyone has any practical experience with that technique, or whether it was just an idea worth exploring. FWIW, Stefan -- ...ich hab' noch einen Koffer in Berlin... 
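Since Boost.Python itself does not manage the GIL, the usual approach in extension code is a pair of small RAII guards around the raw CPython calls; a sketch (the class names are made up), assuming PyEval_InitThreads() has already been called once in the main thread:

#include <boost/python.hpp>

// Acquire the GIL before touching the interpreter from a C++ thread,
// e.g. when invoking a Python-implemented callback; release on scope exit.
class gil_lock {
public:
    gil_lock() : state_(PyGILState_Ensure()) {}
    ~gil_lock() { PyGILState_Release(state_); }
private:
    PyGILState_STATE state_;
};

// Drop the GIL around a blocking operation (a future's get(), I/O, ...)
// so that other threads may run Python in the meantime; re-acquire on exit.
class gil_release {
public:
    gil_release() : state_(PyEval_SaveThread()) {}
    ~gil_release() { PyEval_RestoreThread(state_); }
private:
    PyThreadState* state_;
};

In the deadlock described earlier in the thread, thread 2 would take gil_lock around its callback into Python, while the interpreter thread holds gil_release for the duration of its wait on thread 1 -- the Release-before-wait / Ensure-after pairing Adam asks about.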
From s_sourceforge at nedprod.com Sun Mar 18 18:13:12 2012 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Sun, 18 Mar 2012 17:13:12 -0000 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: References: , <4F64DB76.18121.5A7224F0@s_sourceforge.nedprod.com>, Message-ID: <4F6617A8.16986.5F45490C@s_sourceforge.nedprod.com> On 17 Mar 2012 at 22:20, Adam Preble wrote: > > If by "Python side" you mean Boost.Python, then I agree: BPL has no > > support for GIL management at all, and it really ought to. This was > > one of the things that was discussed in the BPL v3 discussions on > > this list a few months ago. > > > Hey do you know any terms or thread names where I could go digging through > some of those discussions? I'm just trying stuff superficially and finding > some things that are at least interesting, but I'm not sure I got that > stuff yet. Try "[Boost.Python v3] Conversions and Registries", about October of last year. If you search right back to many years ago, I used to maintain a patch which implemented GIL management for Boost.Python. It's long bitrotted though. > > You need to clarify what you mean by "own subsystem". Do you mean > > "own process"? > > > In the model I'm using, a subsystem would be a thread taking care of a > particular resource. In this case, I figured I would make the Python > interpreter that resource, and install better guards around interacting > with it. For one thing, rather than anything else grabbing the GIL, they > would enqueue stuff for it to execute. That's about as far as I got with > it in my head. I managed to unjam the deadlock I was experiencing by > eliminating the contention the two conflicting threads were having with > each other. The only way to get two Python interpreters to run at once is to put each into its own process. As it happens, IPC is usually fast enough relative to the slowness of Python that often this actually works very well. > > You're going to have to be a lot clearer here. Do you mean you want > > BPL to give up the GIL until the wait is done, or Python? > > > Something, somewhere. I wasn't being to picky. I wondered if there was a > way to do it that I hadn't found with the regular Python headers. Generally Python releases the GIL around anything it thinks might wait. So long as you write your extension code to do the same, all should be well. > > Regarding inconsistency over releases and locks, that's 101 threading > > theory. Any basic threading programming textbook will tell you how to > > handle such issues. > > I'm hoping I'm asking about, say, 102 threading stuff instead. ;) > Specifically, I'm trying my darndest to keep explict lock controls outside > of the main code because it really is hard to get right all the time. > Rather, I am trying to apply some kind of structure. The subsystem model > I did a terrible job of mentioning is an example, as well as using promises > and futures. If there are tricks, features, or something similar to keep > in mind, that would steer how I approach this. Y'see, I'd look at promises and futures primarily as a latency hiding mechanism rather than for lock handling. I agree it would have been super-sweet if Python had been much more thread aware in its design, but we live with the hand we're dealt. Trying to bolt on futures and promises to a system which wasn't designed for it is likely to be self-defeating. There is a school of thought that multithreading is only really worthwhile doing in statically compiled languages. 
Anything interpreted tends, generally speaking, to be too chunky to fine grain holding locks to make threading useful for anything except i/o. Certainly Python is extremely chunky, and I'd generally avoid using threads in Python at all except as a way of portably doing asynchronous i/o. Otherwise it's not worth it. If you're thinking it is, it's time for a redesign. Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ From s_sourceforge at nedprod.com Sun Mar 18 18:39:05 2012 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Sun, 18 Mar 2012 17:39:05 -0000 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: <4F6607D2.1090002@seefeld.name> References: , <4F66052E.6000504@gmail.com>, <4F6607D2.1090002@seefeld.name> Message-ID: <4F661DB9.28894.5F5CFAE7@s_sourceforge.nedprod.com> On 18 Mar 2012 at 12:05, Stefan Seefeld wrote: > > We didn't go into a lot of depth on the threading, I'm afraid, as one > > of the problems is that the guy starting the effort - me - doesn't > > actually know much about threaded programming. But I am hoping that I > > can design things in such a way that someone like Niall could easily > > take it from there. > > I recall seeing a discussion of locking policy being attached to > individual functions by means of the return-value and argument-passing > policy traits; in other words, something that's associated with > from-python and to-python conversion. I found that rather elegant. I'm > not sure whether anyone has any practical experience with that > technique, or whether it was just an idea worth exploring. Indeed, I had argued for a DLL and SO aware type registry to hold default BPL<=>Python enter/exit policy (of which locking is an example) which can be optionally overridden at the point of call. It was from this and other reasons I had argued for a fused compile time/runtime registry, and it was at that point that Dave came in to argue against such a design for a series of good reasons. I actually don't think myself and Dave disagreed, rather as usual he sees the world one way and I see the world another, and we use different vocabulary so we both think we're arguing when often we agree. That said, I would be at pains to think twice or thrice about an idea if Dave queries it, not least because it encourages me to articulate myself much more clearly. He's also generally correct, or has had an insight I have missed. Actually, on the basis of that previous discussion I have raised within ISO standards the point that we really ought to do something about standardising shared libraries in ISO WG14. It's long overdue and every attempt has failed so far. Yet as things stand, BPL is simply broken in a multi-extension use context and there is no standards-compliant way of fixing it. As the C language is the primary interop for all other programming languages, and POSIX's approach to shared libraries is broken, I fear we might just have to go stomp on some eggshells to get this fixed. That said, if done right we could seriously improve interop for all languages. Imagine a standardised way of talking to Haskell from Python via a modern C interop specification? Now that would be cool! Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. 
Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ From s_sourceforge at nedprod.com Sun Mar 18 18:39:05 2012 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Sun, 18 Mar 2012 17:39:05 -0000 Subject: [C++-sig] Managing the GIL across competing threads In-Reply-To: <4F66052E.6000504@gmail.com> References: , , <4F66052E.6000504@gmail.com> Message-ID: <4F661DB9.32414.5F5CFA5A@s_sourceforge.nedprod.com> On 18 Mar 2012 at 11:54, Jim Bosch wrote: > We didn't go into a lot of depth on the threading, I'm afraid, as one of > the problems is that the guy starting the effort - me - doesn't actually > know much about threaded programming. But I am hoping that I can design > things in such a way that someone like Niall could easily take it from > there. I'm currently in early negotiations with a Silicon Valley crowd, part of which involve doing some substantial improvements to Boost.Python in this area. I generally don't believe anyone until I either see money or a signed contract, so take this news with a pinch of salt, but I hope that the list knows that I would love to work on this topic if it were financially viable for me to do so. Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/ From jjchristophe at yahoo.fr Wed Mar 21 17:09:11 2012 From: jjchristophe at yahoo.fr (christophe jean-joseph) Date: Wed, 21 Mar 2012 16:09:11 +0000 (GMT) Subject: [C++-sig] Re : Re : Re : [boost] Re : Using an element of a class A in a constructor of a class B (reflection with boost::python) In-Reply-To: <4F694C8F.6050206@gmail.com> References: <1331817720.8663.YahooMailNeo@web132306.mail.ird.yahoo.com> <4F651CAE.1010205@gmail.com> <1332241623.22632.YahooMailNeo@web132306.mail.ird.yahoo.com> <4F694C8F.6050206@gmail.com> Message-ID: <1332346151.60933.YahooMailNeo@web132305.mail.ird.yahoo.com> Thank you for your answer, As I said, the solution I am currently using is working fine, I am just using a method explained in the tutorial for a function independant from any class: http://wiki.python.org/moin/boost.python/ExportingClasses? and I extend it to a function of another class. You recommend to write things that way: class_< A_i, bp::bases >(...);? but, A_i are not derived from B, and as they are already derived from A_Base_j classes (some from a same base class, not all of them), I already declared their base classes. Declaring a function B::f(A& a, ...) in A as: .def("f", &B::f) keep C++ declaration (I mean, even if B::f is declared in A, it's declared as a method of B, which is correct). But what your are proposing seems not correct to me, as long as A isn't a derived class from B. Christophe Jean-Joseph ? ________________________________ De?: Jim Bosch ??: christophe jean-joseph Envoy? le : Mercredi 21 mars 2012 4h35 Objet?: Re: Re : [C++-sig] Re : [boost] Re : Using an element of a class A in a constructor of a class B (reflection with boost::python) On 03/20/2012 07:07 AM, christophe jean-joseph wrote: > > Thank you for your answer and sorry for the delay: I finally used > another solution. > I used a setParam function instead of the constructor for boost::python. > From the point of view of c++, the only difference is that the former > constructor is now just calling setParam. > This former constructor is ignored by boost::python, and instead > setParam is reflected as a normal function. 
> This works fine (my Python main produces, with less than 0.01% > difference, the same results as the C++ main). > But I am not satisfied with my solution for the following reason: > if A and B are 2 classes, in order to reflect a function B::f(A& a, > ....), I am writing things as follows: > class_<A> (usual declarations here) > .def("f", &B::f) > class_<B> (usual declarations here) > .def("f", &B::f) > > Which means, for n classes A_i (i from 1 to n) and a function of a class > B B::f(A_1& a1, A_2& a2,...., A_n& an, ....) I'll have to write: > class_<A_1> (usual declarations here) > .def("f", &B::f) > . > . > . > class_<A_n> (usual declarations here) > .def("f", &B::f) > class_<B> (usual declarations here) > .def("f", &B::f) > > Which is not very clean, even though it's working. > So if there is any other way to do that, I will be glad to hear it. > I'm a little confused; if the A classes do not inherit from B, why would you want to wrap a member function of B as a method of A? I don't think that would actually work, unless you've made sure all the A classes are convertible to B by some other mechanism. And if the A classes do inherit from B, if you use: class_< A_i, bp::bases<B> >(...); with no .def("f", ...), you'll still be able to use the "f" method in Python because it will be inherited from B. Jim > ------------------------------------------------------------------------ > *De :* Jim Bosch > *À :* christophe jean-joseph ; Development of > Python/C++ integration > *Envoyé le :* Dimanche 18 mars 2012 0h22 > *Objet :* Re: [C++-sig] Re : [boost] Re : Using an element of a class A > in a constructor of a class B (reflection with boost::python) > > On 03/15/2012 09:22 AM, christophe jean-joseph wrote: > > Hello > > > > The situation is as follows. > > I have a C++ code that I haven't written and that I can barely modify. > > I am supposed to reflect this code in Python with boost. > > In the code, I have something looking like this: > > > > Class A_Base { > > A_Base(){}; > > ~A_Base(){}; > > Whatever virtual and pure virtual functions; > > } > > > > Class A_Derived{ > > A_Derived(Type1 arg1,...){whatever instructions;} > > ~A_Derived(){}; > > Whatever functions; > > } > > > > Class B { > > B(A_Base& aBase, double& x){}; > > ~B(){}; > > Whatever functions; > > } > > > > In the C++ main, at some point an A_Derived aDerived is set, and then > > B(aDerived, x). > > I need to reflect that under python. > > Until now, I have been able, with a simple example, to reflect a > > function f, which is not a ctor, from a class B using A_Base& as > > argument type, but I can't figure out how to deal with it for a > constructor. > > Based on: > > > > http://wiki.python.org/moin/boost.python/ExportingClasses > > > > I am declaring f under both its class B and A_Base as follows: > > > > class_<A_Base> ("A_Base", no_init) //this line can > > be modified > > .def("f", &B::f); > > class_<B> ("B", init< >()) //this line can be > modified > > .def("f", &B::f); > > > > > > But when I try this for a constructor as f, it refuses to compile. > > You don't wrap constructors the same way as functions (I don't think you > can even take their address in C++). Instead, you use: > > .def(init<T1,T2,...>()) > > (where T1,T2,... are the argument types of the constructor), or do the > same thing in the init argument to class_ itself (as I've done below).
> > I think what you want is something like this (assuming A_Derived does > actually inherit from A_Base, even though it doesn't in your example code): > > class_<A_Base,noncopyable>("A_Base", no_init); > class_<A_Derived,bases<A_Base>,noncopyable>("A_Derived", no_init); > class_<B>("B", init<A_Base&,double&>()); > > Note that the double you pass in as "x" to B's constructor will not be > modified in Python; that's unavoidable in this case. > > If that doesn't work, please post exactly what you're trying to compile; > the example you posted has a few things that obviously won't compile, so > it's hard to know exactly what the problem is. > > Good luck! > > Jim > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From talljimbo at gmail.com Wed Mar 21 17:32:53 2012 From: talljimbo at gmail.com (Jim Bosch) Date: Wed, 21 Mar 2012 12:32:53 -0400 Subject: [C++-sig] Re : Re : Re : [boost] Re : Using an element of a class A in a constructor of a class B (reflection with boost::python) In-Reply-To: <1332346151.60933.YahooMailNeo@web132305.mail.ird.yahoo.com> References: <1331817720.8663.YahooMailNeo@web132306.mail.ird.yahoo.com> <4F651CAE.1010205@gmail.com> <1332241623.22632.YahooMailNeo@web132306.mail.ird.yahoo.com> <4F694C8F.6050206@gmail.com> <1332346151.60933.YahooMailNeo@web132305.mail.ird.yahoo.com> Message-ID: <4F6A02B5.2090806@gmail.com> On 03/21/2012 12:09 PM, christophe jean-joseph wrote: > > > Thank you for your answer, > > As I said, the solution I am currently using is working fine; I am just using a method explained in the tutorial for a function independent from any class: > > http://wiki.python.org/moin/boost.python/ExportingClasses > > > and I extend it to a function of another class. > You recommend writing things this way: > > class_< A_i, bp::bases<B> >(...); > > > but the A_i are not derived from B, and as they are already derived from A_Base_j classes (some from a same base class, not all of them), I have already declared their base classes. > Declaring a function B::f(A& a, ...) in A as: > .def("f",&B::f) > keeps the C++ declaration (I mean, even if B::f is declared in A, it's declared as a method of B, which is correct). > But what you are proposing seems not correct to me, as long as A isn't a derived class from B. > Oh, I understand now. "B::f" is a static member function that takes an "A" as its first argument. You just want a more elegant way to wrap a lot of similar classes. You may not be able to get it a lot cleaner, but you can cut down some of the boilerplate by writing a templated function to wrap an "A" class. You can then just call that repeatedly: template <typename A> void wrapA(char const * name) { bp::class_<A>(name, ...) .def("F", &B::f) ; } BOOST_PYTHON_MODULE(whatever) { wrapA<A_1>("A_1"); wrapA<A_2>("A_2"); ... wrapA<A_n>("A_n"); } Of course, you'll have to modify it to do more than that, but hopefully that will get you started. Jim From asu_mcs at yahoo.com Thu Mar 22 09:39:40 2012 From: asu_mcs at yahoo.com (sam) Date: Thu, 22 Mar 2012 01:39:40 -0700 (PDT) Subject: [C++-sig] sendkeys method Message-ID: <1332405580.70881.YahooMailClassic@web45212.mail.sp1.yahoo.com> Hello, Is there a way (method) in C/C++ programming to make the software think that someone pressed the OK button, similar to SendKeys in VBS (WshShell.SendKeys ("{Enter}"))? Thank you, Sam -------------- next part -------------- An HTML attachment was scrubbed... URL:
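On Windows, one way to do what Sam is asking -- a sketch only, and it assumes the window with the OK button already has keyboard focus, the same assumption WshShell.SendKeys makes -- is the Win32 SendInput API:

    #include <windows.h>

    // Synthesize an Enter key press-and-release for whichever window
    // currently has keyboard focus (the function name is illustrative).
    void press_enter()
    {
        INPUT inputs[2] = {};

        inputs[0].type = INPUT_KEYBOARD;
        inputs[0].ki.wVk = VK_RETURN;                // key down

        inputs[1].type = INPUT_KEYBOARD;
        inputs[1].ki.wVk = VK_RETURN;
        inputs[1].ki.dwFlags = KEYEVENTF_KEYUP;      // key up

        SendInput(2, inputs, sizeof(INPUT));
    }

If a specific dialog has to be targeted rather than whatever currently has focus, locating the button with FindWindow/FindWindowEx and sending it a BM_CLICK message is usually more reliable than synthesizing keystrokes.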
From jjchristophe at yahoo.fr Thu Mar 22 10:33:16 2012 From: jjchristophe at yahoo.fr (christophe jean-joseph) Date: Thu, 22 Mar 2012 09:33:16 +0000 (GMT) Subject: [C++-sig] Re : Re : Re : Re : [boost] Re : Using an element of a class A in a constructor of a class B (reflection with boost::python) In-Reply-To: <4F6A02B5.2090806@gmail.com> References: <1331817720.8663.YahooMailNeo@web132306.mail.ird.yahoo.com> <4F651CAE.1010205@gmail.com> <1332241623.22632.YahooMailNeo@web132306.mail.ird.yahoo.com> <4F694C8F.6050206@gmail.com> <1332346151.60933.YahooMailNeo@web132305.mail.ird.yahoo.com> <4F6A02B5.2090806@gmail.com> Message-ID: <1332408796.89781.YahooMailNeo@web132304.mail.ird.yahoo.com> Thank you very much for your suggestion. Jean-Joseph ________________________________ De : Jim Bosch À : christophe jean-joseph Cc : "cplusplus-sig at python.org" Envoyé le : Mercredi 21 mars 2012 17h32 Objet : Re: Re : Re : [C++-sig] Re : [boost] Re : Using an element of a class A in a constructor of a class B (reflection with boost::python) On 03/21/2012 12:09 PM, christophe jean-joseph wrote: > > > Thank you for your answer, > > As I said, the solution I am currently using is working fine; I am just using a method explained in the tutorial for a function independent from any class: > > http://wiki.python.org/moin/boost.python/ExportingClasses > > > and I extend it to a function of another class. > You recommend writing things this way: > > class_< A_i, bp::bases<B> >(...); > > > but the A_i are not derived from B, and as they are already derived from A_Base_j classes (some from a same base class, not all of them), I have already declared their base classes. > Declaring a function B::f(A& a, ...) in A as: > .def("f",&B::f) > keeps the C++ declaration (I mean, even if B::f is declared in A, it's declared as a method of B, which is correct). > But what you are proposing seems not correct to me, as long as A isn't a derived class from B. > Oh, I understand now. "B::f" is a static member function that takes an "A" as its first argument. You just want a more elegant way to wrap a lot of similar classes. You may not be able to get it a lot cleaner, but you can cut down some of the boilerplate by writing a templated function to wrap an "A" class. You can then just call that repeatedly: template <typename A> void wrapA(char const * name) { bp::class_<A>(name, ...) .def("F", &B::f) ; } BOOST_PYTHON_MODULE(whatever) { wrapA<A_1>("A_1"); wrapA<A_2>("A_2"); ... wrapA<A_n>("A_n"); } Of course, you'll have to modify it to do more than that, but hopefully that will get you started. Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyle.cronan at geostellar.com Thu Mar 29 23:14:04 2012 From: kyle.cronan at geostellar.com (Kyle Cronan) Date: Thu, 29 Mar 2012 16:14:04 -0500 Subject: [C++-sig] function template instantiation Message-ID: Hi all, I've been using Boost.Python to expose some instances of a class template to a python module. I ran into a problem with my class's method templates not being instantiated, leading to "undefined symbol" on import. Actually, this message from back in 2004 describes the problem exactly: http://mail.python.org/pipermail/cplusplus-sig/2004-February/006586.html The advice is to move the method definitions into the header file, but I have some methods I want to ensure are not inlined, so I ended up using explicit template instantiations instead. Kind of a pain. The standard has this to say about function template instantiations: 14.7.1 Implicit instantiation 2.
Unless a function template specialization has been explicitly instantiated or explicitly specialized, the function template specialization is implicitly instantiated when the specialization is referenced in a context that requires a function definition to exist. So my question is, why isn't taking the address of some instance of a templatized function and passing it to def() enough to require the function definition to exist? Should this be considered a usability bug in the library, a problem with my compiler, or maybe neither? I'm using gcc 4.4. Thanks, Kyle Cronan From stefan at seefeld.name Thu Mar 29 23:42:35 2012 From: stefan at seefeld.name (Stefan Seefeld) Date: Thu, 29 Mar 2012 17:42:35 -0400 Subject: [C++-sig] function template instantiation In-Reply-To: References: Message-ID: <4F74D74B.2070105@seefeld.name> Kyle, On 03/29/2012 05:14 PM, Kyle Cronan wrote: > So my question is, why isn't taking the address of some instance of a > templatized function and passing it to def() enough to require the > function definition to exist? It certainly is enough to require it, but it isn't enough to magically make a definition appear. Either you provide the definition in the header, and let the compiler do that magic of instantiating the template on demand, or you defer to a separate compilation unit, but then you can't rely on such magic and need to explicitly instantiate. > Should this be considered a usability > bug in the library, a problem with my compiler, or maybe neither? I haven't seen your specific code, but your description sounds like all is working as expected. Stefan -- ...ich hab' noch einen Koffer in Berlin... From kyle.cronan at geostellar.com Fri Mar 30 00:38:03 2012 From: kyle.cronan at geostellar.com (Kyle Cronan) Date: Thu, 29 Mar 2012 17:38:03 -0500 Subject: [C++-sig] function template instantiation In-Reply-To: <4F74D74B.2070105@seefeld.name> References: <4F74D74B.2070105@seefeld.name> Message-ID: On Thu, Mar 29, 2012 at 4:42 PM, Stefan Seefeld wrote: > > It certainly is enough to require it, but it isn't enough to magically make a > definition appear. Either you provide the definition in the header, > and let the compiler do that magic of instantiating the template > on demand, or you defer to a separate compilation unit, but then you can't > rely on such magic and need to explicitly instantiate. Makes sense, thank you. Hopefully, having this in the list archive will help others in the future. Somehow I had never run into the need to use explicit instantiation before! But I think I get it now: by the time the linker sees what's going on, it's too late to generate the necessary object code. Best, Kyle From sven at marnach.net Fri Mar 30 00:59:27 2012 From: sven at marnach.net (Sven Marnach) Date: Thu, 29 Mar 2012 23:59:27 +0100 Subject: [C++-sig] function template instantiation In-Reply-To: References: Message-ID: <20120329225927.GD21082@bagheera> (answering off-list) Kyle Cronan wrote on Thu, 29 Mar 2012, at 16:14:04 -0500: > ... but I have some methods I want to ensure are not inlined ... Note that this is impossible. The compiler is free to inline *any* function. If the address of a function is taken, the compiler must emit code for a non-inlined version, but this is independent of whether the function is defined inline or not. And even if a non-inlined version is present, the compiler can still inline other calls. Bottom line: it doesn't matter whether you define functions in the header or not, at least as far as inlining is concerned.
Cheers, Sven From s_sourceforge at nedprod.com Fri Mar 30 15:18:15 2012 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Fri, 30 Mar 2012 14:18:15 +0100 Subject: [C++-sig] function template instantiation In-Reply-To: References: Message-ID: <4F75B297.8219.3C3FAA7F@s_sourceforge.nedprod.com> On 29 Mar 2012 at 16:14, Kyle Cronan wrote: > So my question is, why isn't taking the address of some instance of a > templatized function and passing it to def() enough to require the > function definition to exist? Should this be considered a usability > bug in the library, a problem with my compiler, or maybe neither? No, in that earlier post this behaviour is by design and intent (and I'm surprised no one said so at the time). When you do this: class_< field >("field_vect", init< int >()) .def("getsize", &field::getsize) ; ... you're telling the compiler to instantiate field::getsize. Problem is, the compiler hasn't seen what getsize is supposed to be because its implementation was declared in a separate compilation unit. Therefore it can't know how to mangle the symbol for field::getsize, and therefore can't resolve the symbol. Besides the above, you should NEVER declare template implementations outside a header file. Otherwise it easily leads to violations of the ODR. The old export keyword was supposed to solve the problem of having to declare template implementations in headers, but that's dead as a concept now. BTW, regarding inlining, there are magic attributes to force never inlining. Very, very useful for debugging optimised code, and easier and clearer to use those than obfuscating the code. HTH, Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Work Portfolio: http://careers.stackoverflow.com/nialldouglas/
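Putting Kyle's fix and Niall's advice together -- declaration only in the header, the member template defined and explicitly instantiated in one translation unit, so that the wrapper can safely take its address -- a minimal sketch (the field/getsize names simply echo the 2004 example quoted above; the real class from that thread is not shown here):

    // field.hpp -- declaration only; the definition is kept out of the header
    template <typename T>
    struct field
    {
        int getsize() const;   // no body here on purpose
    };

    // field.cpp -- definition plus explicit instantiation, so object code
    // for field<int>::getsize exists even though no other translation unit
    // ever sees the body
    template <typename T>
    int field<T>::getsize() const { return sizeof(T); }

    template struct field<int>;                        // instantiate the whole class
    // or, narrower: template int field<int>::getsize() const;

    // wrap.cpp -- taking the address in def() now links cleanly on import
    #include "field.hpp"
    #include <boost/python.hpp>
    using namespace boost::python;

    BOOST_PYTHON_MODULE(example)
    {
        class_< field<int> >("field_int")
            .def("getsize", &field<int>::getsize)
            ;
    }

Either form of explicit instantiation works; the class-level one covers every member at once. And if particular functions must additionally never be inlined, the "magic attributes" Niall mentions are compiler-specific, e.g. __attribute__((noinline)) on GCC or __declspec(noinline) on MSVC.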