From talljimbo at gmail.com Mon Aug 1 00:53:30 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Sun, 31 Jul 2011 15:53:30 -0700 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: <4E3509A0.9070901@orange.fr> References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> Message-ID: <4E35DCEA.2010301@gmail.com> On 07/31/2011 12:52 AM, Valentin Perrelle wrote: > Le 31/07/2011 07:38, Jim Bosch a ?crit : >> I guess I don't really understand how your program flow is supposed to >> work - how did you plan to invoke C++ code from Python before >> importing your Boost.Python module? Usually the natural place to >> register converters is during module import after you've registered >> the classes. > > I don't plan to invoke c++ code from Python before importing the module: > I need to import the module before exposing some instance of some class > of the module. Does this mean that the module should be already imported > when the script start running ? Can't i expose the object and then let > decide the Python script wheter it need to use it or not ? > All of that sounds sounds. But at what point were you trying to register a to-python converter? It sounded like you were trying to do that before importing the module, and since a to-python converter is by definition C++, I didn't understand how you could do it in a Python script before importing the module. Jim From valentin.perrelle at orange.fr Mon Aug 1 12:00:49 2011 From: valentin.perrelle at orange.fr (Valentin Perrelle) Date: Mon, 01 Aug 2011 12:00:49 +0200 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: <4E35DCEA.2010301@gmail.com> References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> <4E35DCEA.2010301@gmail.com> Message-ID: <4E367951.9010606@orange.fr> > All of that sounds sounds. But at what point were you trying to > register a to-python converter? It sounded like you were trying to do > that before importing the module, and since a to-python converter is > by definition C++, I didn't understand how you could do it in a Python > script before importing the module. I'm registering the converter by calling the boost::python::import function in C++ code. I don't know any other way to do that. Now, this give me another error. I'm trying to implement a "reload script" feature. I thought that all i had to do was to call Py_Finalize, then Py_Initialize again, and to remove any references i was holding to wrapping classes. But whenever i'm importing my extension again, i get the runtime error: Assertion failed: slot->m_to_python == 0, file libs\python\src\converter\registry.cpp, line 212 which means my to_python converter have been registered once again. Is there a way to unregister them ? should i find a to not initialize the extension again ? 
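One way to avoid tripping that assertion on a second initialization, sketched here rather than taken from the thread, is to query the converter registry before registering. MyType and MyTypeToPython below are placeholder names for whatever class and to-python converter the module registers:

#include <boost/python.hpp>
#include <boost/python/converter/registry.hpp>
#include <boost/python/converter/registrations.hpp>

namespace bp = boost::python;

struct MyType { int value; };

struct MyTypeToPython
{
    // converts a MyType to a Python int holding its value
    static PyObject* convert(MyType const& x)
    {
        return bp::incref(bp::object(x.value).ptr());
    }
};

void register_my_converters()
{
    bp::converter::registration const* reg =
        bp::converter::registry::query(bp::type_id<MyType>());
    if (reg == 0 || reg->m_to_python == 0)   // only register on the first pass
    {
        bp::to_python_converter<MyType, MyTypeToPython>();
    }
}

This only guards the registration; it does not unregister anything, so state left behind by a previous run of the module is still shared.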
From simwarg at gmail.com Mon Aug 1 12:39:21 2011 From: simwarg at gmail.com (Simon Warg) Date: Mon, 1 Aug 2011 12:39:21 +0200 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: <4E367951.9010606@orange.fr> References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> <4E35DCEA.2010301@gmail.com> <4E367951.9010606@orange.fr> Message-ID: <148532FC-BC3E-4BA9-81E5-25E300E2C4B9@gmail.com> You could use the reload() function in python 2.7 or imp.reload() in python 3. It takes a module object as argument. // Simon On 1 aug 2011, at 12:00, Valentin Perrelle wrote: > >> All of that sounds sounds. But at what point were you trying to register a to-python converter? It sounded like you were trying to do that before importing the module, and since a to-python converter is by definition C++, I didn't understand how you could do it in a Python script before importing the module. > > I'm registering the converter by calling the boost::python::import function in C++ code. I don't know any other way to do that. > > Now, this give me another error. I'm trying to implement a "reload script" feature. I thought that all i had to do was to call Py_Finalize, then Py_Initialize again, and to remove any references i was holding to wrapping classes. But whenever i'm importing my extension again, i get the runtime error: > > Assertion failed: slot->m_to_python == 0, file libs\python\src\converter\registry.cpp, line 212 > > which means my to_python converter have been registered once again. Is there a way to unregister them ? should i find a to not initialize the extension again ? > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From valentin.perrelle at orange.fr Mon Aug 1 12:58:55 2011 From: valentin.perrelle at orange.fr (Valentin Perrelle) Date: Mon, 01 Aug 2011 12:58:55 +0200 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: <148532FC-BC3E-4BA9-81E5-25E300E2C4B9@gmail.com> References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> <4E35DCEA.2010301@gmail.com> <4E367951.9010606@orange.fr> <148532FC-BC3E-4BA9-81E5-25E300E2C4B9@gmail.com> Message-ID: <4E3686EF.6090507@orange.fr> > You could use the reload() function in python 2.7 or imp.reload() in python 3. It takes a module object as argument. Thanks. However it wouldn't reset to the initial state in the general case. All modules needs to be unloaded. I don't know a safe way to it yet. I'm just sure i want to do it in c++ not in Python. 
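For reference, Simon's reload() suggestion can also be driven from the embedding C++ side rather than from a Python script. This is only a sketch: "mymodule" is a placeholder, it assumes Python 3 (where reload lives in imp, as Simon notes), and error handling via bp::error_already_set is omitted:

#include <boost/python.hpp>

namespace bp = boost::python;

void reload_mymodule()
{
    bp::object imp = bp::import("imp");
    bp::object module = bp::import("mymodule");       // returns the cached module object
    bp::object reloaded = imp.attr("reload")(module); // same as imp.reload(mymodule)
}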
I've just found that the call of Py_Finalize with Boost.Python is a known issue, already discussed on this mailing list and that the manual says not to call it : http://www.boost.org/doc/libs/1_47_0/libs/python/doc/tutorial/doc/html/python/embedding.html From simwarg at gmail.com Mon Aug 1 13:19:46 2011 From: simwarg at gmail.com (Simon Warg) Date: Mon, 1 Aug 2011 13:19:46 +0200 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: <4E3686EF.6090507@orange.fr> References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> <4E35DCEA.2010301@gmail.com> <4E367951.9010606@orange.fr> <148532FC-BC3E-4BA9-81E5-25E300E2C4B9@gmail.com> <4E3686EF.6090507@orange.fr> Message-ID: <67DF43FC-A3F9-492D-ADBF-CBB115EDE744@gmail.com> In My program I need to unload modules as well. What I do is remove all references to the particular module and it will be unloaded. Are you using boost python for python 2 or 3? If it's the latter it is safe to use Py_Finalize()! I use it myself! // Simon On 1 aug 2011, at 12:58, Valentin Perrelle wrote: > >> You could use the reload() function in python 2.7 or imp.reload() in python 3. It takes a module object as argument. > Thanks. However it wouldn't reset to the initial state in the general case. All modules needs to be unloaded. I don't know a safe way to it yet. I'm just sure i want to do it in c++ not in Python. > > I've just found that the call of Py_Finalize with Boost.Python is a known issue, already discussed on this mailing list and that the manual says not to call it : http://www.boost.org/doc/libs/1_47_0/libs/python/doc/tutorial/doc/html/python/embedding.html > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From valentin.perrelle at orange.fr Mon Aug 1 13:38:27 2011 From: valentin.perrelle at orange.fr (Valentin Perrelle) Date: Mon, 01 Aug 2011 13:38:27 +0200 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: <67DF43FC-A3F9-492D-ADBF-CBB115EDE744@gmail.com> References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> <4E35DCEA.2010301@gmail.com> <4E367951.9010606@orange.fr> <148532FC-BC3E-4BA9-81E5-25E300E2C4B9@gmail.com> <4E3686EF.6090507@orange.fr> <67DF43FC-A3F9-492D-ADBF-CBB115EDE744@gmail.com> Message-ID: <4E369033.5070003@orange.fr> Le 01/08/2011 13:19, Simon Warg a ?crit : > In My program I need to unload modules as well. What I do is remove all references to the particular module and it will be unloaded. It seems i didn't achieve to do that. There should be some references i can't remove, i don't know why yet. > > Are you using boost python for python 2 or 3? If it's the latter it is safe to use Py_Finalize()! I use it myself! I'm using Python 3. But the problem of unregistered converters is still there. 
See http://mail.python.org/pipermail/cplusplus-sig/2009-August/014736.html From simwarg at gmail.com Mon Aug 1 14:22:16 2011 From: simwarg at gmail.com (Simon Warg) Date: Mon, 1 Aug 2011 14:22:16 +0200 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: <4E369033.5070003@orange.fr> References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> <4E35DCEA.2010301@gmail.com> <4E367951.9010606@orange.fr> <148532FC-BC3E-4BA9-81E5-25E300E2C4B9@gmail.com> <4E3686EF.6090507@orange.fr> <67DF43FC-A3F9-492D-ADBF-CBB115EDE744@gmail.com> <4E369033.5070003@orange.fr> Message-ID: You can remove one reference from the sys module: Import sys del sys.modules['mymodule'] In cpp it would be like: import('sys').attr('modules')['mymodule'].del() I can give you my code later. Don't have it here! // Simon On 1 aug 2011, at 13:38, Valentin Perrelle wrote: > Le 01/08/2011 13:19, Simon Warg a ?crit : >> In My program I need to unload modules as well. What I do is remove all references to the particular module and it will be unloaded. > It seems i didn't achieve to do that. There should be some references i can't remove, i don't know why yet. > >> >> Are you using boost python for python 2 or 3? If it's the latter it is safe to use Py_Finalize()! I use it myself! > I'm using Python 3. But the problem of unregistered converters is still there. See http://mail.python.org/pipermail/cplusplus-sig/2009-August/014736.html > > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From paulc.mnt at gmail.com Mon Aug 1 16:50:47 2011 From: paulc.mnt at gmail.com (diego_pmc) Date: Mon, 1 Aug 2011 07:50:47 -0700 (PDT) Subject: [C++-sig] Boost.Python: Callbacks to class functions Message-ID: <1312210247385-3709880.post@n4.nabble.com> I have an `EventManager` class written in C++ and exposed to Python. This is how I intended for it to be used from the Python side: class Something: def __init__(self): EventManager.addEventHandler(FooEvent, self.onFooEvent) def __del__(self): EventManager.removeEventHandler(FooEvent, self.onFooEvent) def onFooEvent(self, event): pass (The `add-` and `remove-` are exposed as static functions of `EventManager`.) The problem with the above code is that the callbacks are captured inside `boost::python::object` instances; when I do `self.onFooEvent` these will increase the reference count of `self`, which will prevent it from being deleted, so the destructor never gets called, so the event handlers never get removed (except at the end of the application). The code works well for functions that don't have a `self` argument (i.e. free or static functions). How should I capture Python function objects such that I won't increase their reference count? I only need a weak reference to the objects. -- View this message in context: http://boost.2283326.n4.nabble.com/Boost-Python-Callbacks-to-class-functions-tp3709880p3709880.html Sent from the Python - c++-sig mailing list archive at Nabble.com. From paulc.mnt at gmail.com Mon Aug 1 16:58:53 2011 From: paulc.mnt at gmail.com (diego_pmc) Date: Mon, 1 Aug 2011 07:58:53 -0700 (PDT) Subject: [C++-sig] Wrap std::vector In-Reply-To: References: Message-ID: <1312210733476-3709907.post@n4.nabble.com> You don't really provide that much information: you want help with writing a wrapper? 
do you want C++ and Python to point to the same instance of `GameObject`? If it's the latter, have you tried doing `vector<boost::shared_ptr<GameObject>>`? -- View this message in context: http://boost.2283326.n4.nabble.com/Wrap-std-vector-pointer-tp3708421p3709907.html Sent from the Python - c++-sig mailing list archive at Nabble.com. From simwarg at gmail.com Tue Aug 2 00:10:53 2011 From: simwarg at gmail.com (Simon W) Date: Tue, 2 Aug 2011 00:10:53 +0200 Subject: [C++-sig] Wrap std::vector In-Reply-To: <1312210733476-3709907.post@n4.nabble.com> References: <1312210733476-3709907.post@n4.nabble.com> Message-ID: I wrap my c++ vector like: class_<std::vector<GameObject*> >("GameObjectList") .def(vector_indexing_suite<std::vector<GameObject*> >()); When I run the following in python: objects = gameobject_manager.getGameObjects() # getGameObjects returns a std::vector<GameObject*> for object in objects: ... I get the error > TypeError: No to_python (by-value) converter found for C++ type: > GameObject* I have not tried shared_ptr because I was hoping for another solution, since it would require a lot of changes to make it a shared_ptr. On Mon, Aug 1, 2011 at 4:58 PM, diego_pmc wrote: > You don't really provide that much information: you want help with writing > a > wrapper? do you want C++ and Python to point to the same instance of > `GameObject`? If it's the latter, have you tried doing > `vector<boost::shared_ptr<GameObject>>`? > > -- > View this message in context: > http://boost.2283326.n4.nabble.com/Wrap-std-vector-pointer-tp3708421p3709907.html > Sent from the Python - c++-sig mailing list archive at Nabble.com. > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nat at lindenlab.com Tue Aug 2 02:00:10 2011 From: nat at lindenlab.com (Nat Goodspeed) Date: Mon, 1 Aug 2011 20:00:10 -0400 Subject: [C++-sig] Boost.Python: Callbacks to class functions In-Reply-To: <1312210247385-3709880.post@n4.nabble.com> References: <1312210247385-3709880.post@n4.nabble.com> Message-ID: On Aug 1, 2011, at 10:50 AM, diego_pmc wrote: > How should I capture Python function objects such > that I won't increase their reference count? I only need a weak reference to the objects. http://docs.python.org/library/weakref.html#module-weakref I don't know how to access a Python weakref from Boost.Python. > From talljimbo at gmail.com Tue Aug 2 02:23:03 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Mon, 01 Aug 2011 17:23:03 -0700 Subject: [C++-sig] Boost.Python: Callbacks to class functions In-Reply-To: <1312210247385-3709880.post@n4.nabble.com> References: <1312210247385-3709880.post@n4.nabble.com> Message-ID: <4E374367.4010801@gmail.com> On 08/01/2011 07:50 AM, diego_pmc wrote: > I have an `EventManager` class written in C++ and exposed to Python. This is > how I intended for it to be used from the Python side: > > class Something: > def __init__(self): > EventManager.addEventHandler(FooEvent, self.onFooEvent) > def __del__(self): > EventManager.removeEventHandler(FooEvent, self.onFooEvent) > def onFooEvent(self, event): > pass > > (The `add-` and `remove-` are exposed as static functions of > `EventManager`.)
> > The problem with the above code is that the callbacks are captured inside > `boost::python::object` instances; when I do `self.onFooEvent` these will > increase the reference count of `self`, which will prevent it from being > deleted, so the destructor never gets called, so the event handlers never > get removed (except at the end of the application). > > The code works well for functions that don't have a `self` argument (i.e. > free or static functions). How should I capture Python function objects such > that I won't increase their reference count? I only need a weak reference to > the objects. Are these cycles actually a problem in practice? Python does do garbage collection, so it might be that it knows about all these dependencies and just hasn't bothered to try to delete them because it doesn't need the memory yet. Anyhow, as the other reply suggested, one option is clearly weakref (in addition, look at http://docs.python.org/c-api/weakref.html for how to use those in C/C++; there's no special Boost.Python interface for them). Unfortunately, what could have been a better option - implementing the special functions that tell Python how to break cycles in your C++ classes (http://docs.python.org/c-api/gcsupport.html) - is pretty low level, and probably impossible with Boost.Python. Good luck! Jim Bosch From paulc.mnt at gmail.com Tue Aug 2 06:44:25 2011 From: paulc.mnt at gmail.com (diego_pmc) Date: Mon, 1 Aug 2011 21:44:25 -0700 (PDT) Subject: [C++-sig] Boost.Python: Callbacks to class functions In-Reply-To: <4E374367.4010801@gmail.com> References: <1312210247385-3709880.post@n4.nabble.com> <4E374367.4010801@gmail.com> Message-ID: On Tue, Aug 2, 2011 at 3:00 AM, Nat Goodspeed wrote: > > > http://docs.python.org/library/weakref.html#module-weakref > > Unfortunately, I can't use `weakref` I already tried that, but the problem is that the weak references get deleted (and their value set to `None`) before `__del__` is called. So when `removeEventHandler` is called, it won't be able to find the weakref it's supposed to remove (because its value has been changed to `None`). To fix, this would require some special handling of Python function objects in the `EventManager` ? I don't want that, I wish to keep the manager agnostic to Python since I'm going to be pushing events into it from both C++ and Python. I'd like to solve this problem exclusively through the `EventManager` wrapper class I wrote. On Tue, Aug 2, 2011 at 3:24 AM, Jim Bosch-2 [via Boost] < ml-node+3711102-1593261807-256468 at n4.nabble.com> wrote: > > Are these cycles actually a problem in practice? Python does do garbage > collection, so it might be that it knows about all these dependencies > and just hasn't bothered to try to delete them because it doesn't need > the memory yet. > > Yes, they are problematic. When the object gets removed it should not receive any more events or it will very likely result in some odd behavior on the screen. These objects are entity states in a game. A state dictates how an entity reacts to game events and when the entity changes its state, the old one should stop giving the entity instructions, or they will conflict with the instructions given by the new state. -- View this message in context: http://boost.2283326.n4.nabble.com/Boost-Python-Callbacks-to-class-functions-tp3709880p3711324.html Sent from the Python - c++-sig mailing list archive at Nabble.com. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From talljimbo at gmail.com Tue Aug 2 07:24:04 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Mon, 01 Aug 2011 22:24:04 -0700 Subject: [C++-sig] Boost.Python: Callbacks to class functions In-Reply-To: References: <1312210247385-3709880.post@n4.nabble.com> <4E374367.4010801@gmail.com> Message-ID: <4E3789F4.7090101@gmail.com> On 08/01/2011 09:44 PM, diego_pmc wrote: > > Are these cycles actually a problem in practice? Python does do > garbage > collection, so it might be that it knows about all these dependencies > and just hasn't bothered to try to delete them because it doesn't need > the memory yet. > > > Yes, they are problematic. When the object gets removed it should not > receive any more events or it will very likely result in some odd > behavior on the screen. These objects are entity states in a game. A > state dictates how an entity reacts to game events and when the entity > changes its state, the old one should stop giving the entity > instructions, or they will conflict with the instructions given by the > new state. Hmm. That might mean you need to do a big design change; while it often works, one really isn't supposed to rely on __del__ being called when a Python object first could be garbage-collected - when cycles are involved, Python doesn't even guarantee that it will ever call __del__. It sounds like you'd be much better off with a named destructor-like method that would be called explicitly when you want to remove an object from the game. Jim From paulc.mnt at gmail.com Tue Aug 2 08:13:46 2011 From: paulc.mnt at gmail.com (Paul-Cristian Manta) Date: Tue, 2 Aug 2011 09:13:46 +0300 Subject: [C++-sig] Boost.Python: Callbacks to class functions In-Reply-To: <4E3789F4.7090101@gmail.com> References: <1312210247385-3709880.post@n4.nabble.com> <4E374367.4010801@gmail.com> <4E3789F4.7090101@gmail.com> Message-ID: On Tue, Aug 2, 2011 at 8:24 AM, Jim Bosch wrote: > > Hmm. That might mean you need to do a big design change; while it often > works, one really isn't supposed to rely on __del__ being called when a > Python object first could be garbage-collected - when cycles are involved, > Python doesn't even guarantee that it will ever call __del__. It sounds > like you'd be much better off with a named destructor-like method that would > be called explicitly when you want to remove an object from the game. > > I am aware of that. :) Still, I'd like to leave it only as a last resort, I don't like complicating the API (who does? :P ). If I get rid of the strong reference that the `EventManager` has over the callback, there will be no cycles. My dependency graph would look something like this: ? http://i.imgur.com/wkxUn.jpg -------------- next part -------------- An HTML attachment was scrubbed... URL: From valentin.perrelle at orange.fr Tue Aug 2 11:01:18 2011 From: valentin.perrelle at orange.fr (Valentin Perrelle) Date: Tue, 02 Aug 2011 11:01:18 +0200 Subject: [C++-sig] Abstract class instances to-python conversion In-Reply-To: References: <4E346660.5020701@orange.fr> <4E347C01.2010903@gmail.com> <4E348338.6080708@orange.fr> <4E34EA6A.9080800@gmail.com> <4E3509A0.9070901@orange.fr> <4E35DCEA.2010301@gmail.com> <4E367951.9010606@orange.fr> <148532FC-BC3E-4BA9-81E5-25E300E2C4B9@gmail.com> <4E3686EF.6090507@orange.fr> <67DF43FC-A3F9-492D-ADBF-CBB115EDE744@gmail.com> <4E369033.5070003@orange.fr> Message-ID: <4E37BCDE.9090301@orange.fr> > In cpp it would be like: > import('sys').attr('modules')['mymodule'].del() Thank you, it worked. 
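Spelled out a little more, the approach quoted above looks roughly like the following sketch. The module name is a placeholder argument, and dropping the sys.modules entry only removes the interpreter's own reference, so the module object stays alive as long as anything else still refers to it:

#include <string>
#include <boost/python.hpp>

namespace bp = boost::python;

// C++ equivalent of: import sys; del sys.modules[name]
void forget_module(std::string const& name)
{
    bp::object modules = bp::import("sys").attr("modules");
    if (modules.contains(name))
    {
        modules[name].del();   // removes the sys.modules reference only
    }
}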
However, I believe it would only remove one module, not any other imported module. I tried to clear the dictionary, which worked in Python, but I didn't manage to reproduce this in C++. Anyway, this would free memory occupied by cycles in the python reference graph. But this is only a matter of a memory leak, not really relevant in my context. I tried another solution, which doesn't seem at first very suitable. I created a subinterpreter for each new execution of the script. Since there are never two executions at the same time, this ensures good independence between distinct executions. It worked well. This may be a good solution, at least until the Py_Finalize issue is solved. Thank you again for your help. From matt-bradbury at live.co.uk Wed Aug 3 23:36:05 2011 From: matt-bradbury at live.co.uk (Matthew Bradbury) Date: Wed, 3 Aug 2011 21:36:05 +0000 (UTC) Subject: [C++-sig] Wrap std::vector References: Message-ID: Simon W <simwarg at gmail.com> writes: > From my research it seems not very trivial to wrap a std::vector that holds pointer types. For example: std::vector<GameObject*>. I've looked at the boost python vector_indexing_suite but it just gives me the runtime error: > TypeError: No to_python (by-value) converter found for C++ type: > GameObject* I have already exposed GameObject: class_<GameObject>("GameObject") ... > So it has come to my understanding it's not possible with an out of the > box solution. I have to do some kind of wrapper? Could someone just help > me out where I should start? Thank you! I've come across this before as well and use the following to provide support for vectors of pointers. I only need to iterate over them so have only modified the __iter__ method. template <class Container> class vector_ptr_indexing_suite : public vector_indexing_suite<Container, true, vector_ptr_indexing_suite<Container> > { public: typedef iterator<Container> def_iterator; template <class Class> static void extension_def(Class & cl) { vector_indexing_suite<Container, true, vector_ptr_indexing_suite<Container> >::extension_def(cl); cl.def("__iter__", def_iterator()); } }; class_<std::vector<GameObject*> >("ObjectContainer") .def(vector_ptr_indexing_suite<std::vector<GameObject*> >()) ; You may want to override functions such as get_item, whose implementation might look like: static object get_item(Container& container, index_type i) { return object(ptr(container[i])); } Anyway have a good look in vector_indexing_suite.hpp to see what is there and what you might need to hack at. From brandsmeier at gmx.de Fri Aug 5 20:00:00 2011 From: brandsmeier at gmx.de (Holger Brandsmeier) Date: Fri, 5 Aug 2011 20:00:00 +0200 Subject: [C++-sig] extract<> with custom shared pointers Message-ID: Dear Boost::python experts, I am trying to use a custom shared pointer type instead of boost::shared_ptr, in my case Teuchos::RCP from the Trilinos project. The details of Teuchos::RCP should not matter much here. In any case there is a Doxygen documentation on the Trilinos webpage. For completeness I attached a simple example code, but let me explain here the problem. I have two simple classes A and B, where B is derived from A. The C++-functions: void computeOnA(Teuchos::RCP<A> a) { a->a *= 2; } and void computeOnAs(boost::shared_ptr<A> a) { a->a *= 2; } both work nicely from python. The function void computeOnTupleOfAs(const tuple& as) { for(int i = 0; i < len(as); ++i) { boost::shared_ptr<A> a = extract<boost::shared_ptr<A> >(as[i]); computeOnAs(a); } } also works nicely, given both tuples containing instances of A or the derived class B as expected.
However the function: void computeOnTupleOfA(const tuple& as) { for(int i = 0; i < len(as); ++i) { Teuchos::RCP<A> a = extract<Teuchos::RCP<A> >(as[i]); computeOnA(a); } } only works given tuples of A and not the derived class B; the following error is shown: TypeError: No registered converter was able to produce a C++ rvalue of type Teuchos::RCP<A> from this Python object of type B. So my question is, which magic do I need to do (I am willing to use certain internals of boost::python) so that B elements can be extracted from tuples of Bs? I currently only provide the functions: T* get_pointer(Teuchos::RCP<T> const& p) T* get_pointer(Teuchos::RCP<T>& p) Thanks for any help, Holger -------------- next part -------------- A non-text attachment was scrubbed... Name: rcpPy.cpp Type: text/x-c++src Size: 2114 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: teuchosRCPPy.h Type: text/x-chdr Size: 205 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_rcp.py.out Type: application/octet-stream Size: 1116 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: test_rcp.py Type: text/x-python Size: 754 bytes Desc: not available URL: From talljimbo at gmail.com Fri Aug 5 20:34:46 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 05 Aug 2011 11:34:46 -0700 Subject: [C++-sig] extract<> with custom shared pointers In-Reply-To: References: Message-ID: <4E3C37C6.4040803@gmail.com> On 08/05/2011 11:00 AM, Holger Brandsmeier wrote: > Dear Boost::python experts, > > I am trying to use a custom shared pointer type instead of > boost::shared_ptr, in my case Teuchos::RCP from the Trilinos > project. The details of Teuchos::RCP should not matter much here. In > any case there is a Doxygen documentation on the Trilinos webpage. > So my question is, which magic do I need to do (I am willing to use > certain internals of boost::python) so that B elements can be > extracted from tuples of Bs? > > I currently only provide the functions: > T* get_pointer(Teuchos::RCP<T> const& p) > T* get_pointer(Teuchos::RCP<T>& p) > First, you should try partial-specializing the bp::pointee struct for your smart pointer, if you haven't already: http://www.boost.org/doc/libs/1_46_1/libs/python/doc/v2/pointee.html If that doesn't work: When implementing custom template-based conversions (very convenient, but definitely deep in the internals), I've found it necessary to partial-specialize all of the following template classes in order to get all of the features of Boost.Python to work: bp::to_python_value< T const & > bp::to_python_value< T & > bp::converter::arg_to_python< T > bp::converter::arg_rvalue_from_python< T const & > bp::converter::extract_rvalue< T > In your case, some of these are already provided by the custom smart pointer support in Boost.Python itself. I suspect you only need to specialize extract_rvalue, or maybe arg_rvalue_from_python to get extraction working properly. I can provide more detail on how to do that if you need help, but everything I know I learned from reading the source.
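A concrete version of the pointee specialization mentioned above might look like the sketch below. Teuchos_RCP.hpp is the assumed Trilinos header name, and whether this alone is enough for the derived-class extraction depends on the rest of the RCP support, as discussed in the reply:

#include <boost/python.hpp>
#include <Teuchos_RCP.hpp>   // assumed header for Teuchos::RCP

// Tell Boost.Python what Teuchos::RCP<T> points to, so it can be used as a
// held type and in conversions, analogous to the built-in shared_ptr support.
namespace boost { namespace python {

template <class T>
struct pointee< Teuchos::RCP<T> >
{
    typedef T type;
};

}} // namespace boost::python

// With the specialization visible, the classes could then be exposed along the lines of:
//   class_<A, Teuchos::RCP<A> >("A");
//   class_<B, Teuchos::RCP<B>, bases<A> >("B");
//   implicitly_convertible< Teuchos::RCP<B>, Teuchos::RCP<A> >();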
Jim Bosch From fjanoos at yahoo.com Fri Aug 5 21:53:20 2011 From: fjanoos at yahoo.com (fjanoos) Date: Fri, 5 Aug 2011 12:53:20 -0700 (PDT) Subject: [C++-sig] CMake and getting starting with boost.python In-Reply-To: References: Message-ID: <1312574000005-3722133.post@n4.nabble.com> Dear Braddock, I'm trying to build Python wrappers with Boost for a fairly large C++ project - and most of its configuration is in CMake. Do you have any more information on using Boost Python through cmake. Specifically - i'm interested in configuring cmake to auto-detect (or atleast require) the boost installation and necessary libraries. thanks, -fj -- View this message in context: http://boost.2283326.n4.nabble.com/C-sig-CMake-and-getting-starting-with-boost-python-tp2700144p3722133.html Sent from the Python - c++-sig mailing list archive at Nabble.com. From fjanoos at yahoo.com Fri Aug 5 22:43:40 2011 From: fjanoos at yahoo.com (fjanoos) Date: Fri, 5 Aug 2011 13:43:40 -0700 (PDT) Subject: [C++-sig] How to import a boost::python dll in windows? In-Reply-To: <7465b6170702140438g5d93eb2an87f817022e705950@mail.gmail.com> References: <1312577020188-2699467.post@n4.nabble.com> <7465b6170702140438g5d93eb2an87f817022e705950@mail.gmail.com> Message-ID: <1312577020187-3722232.post@n4.nabble.com> Hi, I am having a similar problem as the o.p. I was trying to build Python wrappers for the hello_ext project given in the boost tutorial using CMake instead of bjam using the instructions as per http://mail.python.org/pipermail/cplusplus-sig/2007-June/012247.html The setup is Boost 1.47.0, Python 2.7, Windows 7 x64, MSVC 2008 (9.0) x64. After doing CMake and MSVC - it produces a hello_ext.dll. Even if i rename this to hello_ext.pyd, Python throws the following error. Any suggestions / ideas ? Thanks, -fj In [12]: ls Directory of D:\software\boost\boost-source\libs\python\example\tutorial\build\Release 05-Aug-11 16:36 . 05-Aug-11 16:36 .. 05-Aug-11 16:34 732 hello_ext.exp 05-Aug-11 16:34 945,152 hello_ext.idb 05-Aug-11 16:34 1,750 hello_ext.lib 05-Aug-11 16:34 11,264 hello_ext.pyd 4 File(s) 958,898 bytes 2 Dir(s) 370,089,795,584 bytes free In [13]: import hello_ext --------------------------------------------------------------------------- ImportError Traceback (most recent call last) D:\software\boost\boost-source\libs\python\example\tutorial\build\Release\ in () ----> 1 import hello_ext ImportError: DLL load failed: The specified module could not be found. -- View this message in context: http://boost.2283326.n4.nabble.com/C-sig-How-to-import-a-boost-python-dll-in-windows-tp2699467p3722232.html Sent from the Python - c++-sig mailing list archive at Nabble.com. From talljimbo at gmail.com Fri Aug 5 23:24:29 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 05 Aug 2011 14:24:29 -0700 Subject: [C++-sig] How to import a boost::python dll in windows? In-Reply-To: <1312577020187-3722232.post@n4.nabble.com> References: <1312577020188-2699467.post@n4.nabble.com> <7465b6170702140438g5d93eb2an87f817022e705950@mail.gmail.com> <1312577020187-3722232.post@n4.nabble.com> Message-ID: <4E3C5F8D.5000803@gmail.com> I know next-to-nothing about linking dynamic libraries in Windows, but if I saw a message like that in Linux, I'd check my dynamic linker path to ensure the Boost.Python shared library is in it; any Python module you build is linked against against that library. 
If the dynamic linker can't find the boost_python library when it tries to load your module, it's very likely that it would complain about your module not being found, because it was unable to resolve some symbols in it. Hopefully a Windows expert on the list will chime in, but I gather the list is rather short on those. HTH Jim Bosch On 08/05/2011 01:43 PM, fjanoos wrote: > Hi, > I am having a similar problem as the o.p. > > I was trying to build Python wrappers for the hello_ext project given in the > boost tutorial using CMake instead of bjam using the instructions as per > http://mail.python.org/pipermail/cplusplus-sig/2007-June/012247.html > > The setup is Boost 1.47.0, Python 2.7, Windows 7 x64, MSVC 2008 (9.0) x64. > > After doing CMake and MSVC - it produces a hello_ext.dll. Even if i rename > this to hello_ext.pyd, Python throws the following error. Any suggestions / > ideas ? > > Thanks, > -fj > > In [12]: ls > > Directory of > D:\software\boost\boost-source\libs\python\example\tutorial\build\Release > > 05-Aug-11 16:36 . > 05-Aug-11 16:36 .. > 05-Aug-11 16:34 732 hello_ext.exp > 05-Aug-11 16:34 945,152 hello_ext.idb > 05-Aug-11 16:34 1,750 hello_ext.lib > 05-Aug-11 16:34 11,264 hello_ext.pyd > 4 File(s) 958,898 bytes > 2 Dir(s) 370,089,795,584 bytes free > > In [13]: import hello_ext > --------------------------------------------------------------------------- > ImportError Traceback (most recent call last) > D:\software\boost\boost-source\libs\python\example\tutorial\build\Release\ 8768> in() > ----> 1 import hello_ext > > ImportError: DLL load failed: The specified module could not be found. > > > > -- > View this message in context: http://boost.2283326.n4.nabble.com/C-sig-How-to-import-a-boost-python-dll-in-windows-tp2699467p3722232.html > Sent from the Python - c++-sig mailing list archive at Nabble.com. > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From fjanoos at yahoo.com Fri Aug 5 23:56:58 2011 From: fjanoos at yahoo.com (fjanoos) Date: Fri, 5 Aug 2011 14:56:58 -0700 (PDT) Subject: [C++-sig] How to import a boost::python dll in windows? In-Reply-To: <4E3C5F8D.5000803@gmail.com> References: <1312577020188-2699467.post@n4.nabble.com> <7465b6170702140438g5d93eb2an87f817022e705950@mail.gmail.com> <1312577020187-3722232.post@n4.nabble.com> <4E3C5F8D.5000803@gmail.com> Message-ID: <1312581401.87064.YahooMailNeo@web37904.mail.mud.yahoo.com> Hi, The problem was with using the dynamic version of the windows libraries (/MD) - after changing the project settings to use the static libraries (/MT) this worked out just fine. thanks, -firdaus ________________________________ From: Jim Bosch-2 [via Boost] To: fjanoos Sent: Friday, August 5, 2011 5:35 PM Subject: Re: How to import a boost::python dll in windows? I know next-to-nothing about linking dynamic libraries in Windows, but if I saw a message like that in Linux, I'd check my dynamic linker path to ensure the Boost.Python shared library is in it; any Python module you build is linked against against that library. ?If the dynamic linker can't find the boost_python library when it tries to load your module, it's very likely that it would complain about your module not being found, because it was unable to resolve some symbols in it. Hopefully a Windows expert on the list will chime in, but I gather the list is rather short on those. 
HTH Jim Bosch On 08/05/2011 01:43 PM, fjanoos wrote: > Hi, > I am having a similar problem as the o.p. > > I was trying to build Python wrappers for the hello_ext project given in the > boost tutorial using CMake instead of bjam using the instructions as per > http://mail.python.org/pipermail/cplusplus-sig/2007-June/012247.html > > The setup is Boost 1.47.0, Python 2.7, Windows 7 x64, MSVC 2008 (9.0) x64. > > After doing CMake and MSVC - it produces a hello_ext.dll. Even if i rename > this to hello_ext.pyd, Python throws the following error. Any suggestions / > ideas ? > > Thanks, > -fj > > In [12]: ls > > ? Directory of > D:\software\boost\boost-source\libs\python\example\tutorial\build\Release > > 05-Aug-11 ?16:36 ? ? ? ? ? . > 05-Aug-11 ?16:36 ? ? ? ? ? .. > 05-Aug-11 ?16:34 ? ? ? ? ? ? ? 732 hello_ext.exp > 05-Aug-11 ?16:34 ? ? ? ? ? 945,152 hello_ext.idb > 05-Aug-11 ?16:34 ? ? ? ? ? ? 1,750 hello_ext.lib > 05-Aug-11 ?16:34 ? ? ? ? ? ?11,264 hello_ext.pyd > ? ? ? ? ? ? ? ? 4 File(s) ? ? ? ?958,898 bytes > ? ? ? ? ? ? ? ? 2 Dir(s) ?370,089,795,584 bytes free > > In [13]: import hello_ext > --------------------------------------------------------------------------- > ImportError ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Traceback (most recent call last) > D:\software\boost\boost-source\libs\python\example\tutorial\build\Release\ 8768> ?in() > ----> ?1 import hello_ext > > ImportError: DLL load failed: The specified module could not be found. > > > > -- > View this message in context: http://boost.2283326.n4.nabble.com/C-sig-How-to-import-a-boost-python-dll-in-windows-tp2699467p3722232.html > Sent from the Python - c++-sig mailing list archive at Nabble.com. > _______________________________________________ > Cplusplus-sig mailing list > [hidden email] > http://mail.python.org/mailman/listinfo/cplusplus-sig _______________________________________________ Cplusplus-sig mailing list [hidden email] http://mail.python.org/mailman/listinfo/cplusplus-sig ________________________________ If you reply to this email, your message will be added to the discussion below:http://boost.2283326.n4.nabble.com/C-sig-How-to-import-a-boost-python-dll-in-windows-tp2699467p3722336.html To unsubscribe from [C++-sig]How to import a boost::python dll in windows?, click here. -- View this message in context: http://boost.2283326.n4.nabble.com/C-sig-How-to-import-a-boost-python-dll-in-windows-tp2699467p3722373.html Sent from the Python - c++-sig mailing list archive at Nabble.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.weston at gmail.com Sat Aug 6 01:10:26 2011 From: tyler.weston at gmail.com (Tyler Weston) Date: Fri, 5 Aug 2011 18:10:26 -0500 Subject: [C++-sig] CMake and getting starting with boost.python In-Reply-To: <1312574000005-3722133.post@n4.nabble.com> References: <1312574000005-3722133.post@n4.nabble.com> Message-ID: Try this. I believe the required FindBoost.cmake is standard in the cmake/shared/cmake-x.y/modules. Not all of these options may apply to your build. # # BOOST # set(Boost_USE_STATIC_LIBS OFF) set(Boost_USE_MULTITHREADED ON) set(Boost_USE_STATIC_RUNTIME OFF) set(BOOST_ROOT "path/to/boost") # uncomment to debug find_package(Boost): # set(Boost_DEBUG TRUE) find_package(Boost 1.46.1 COMPONENTS system filesystem regex date_time thread serialization python ) On Fri, Aug 5, 2011 at 2:53 PM, fjanoos wrote: > Dear Braddock, > > I'm trying to build Python wrappers with Boost for a fairly large C++ > project - and most of its configuration is in CMake. 
Do you have any more > information on using Boost Python through cmake. > > Specifically - i'm interested in configuring cmake to auto-detect (or > atleast require) the boost installation and necessary libraries. > > thanks, > -fj > > -- > View this message in context: > http://boost.2283326.n4.nabble.com/C-sig-CMake-and-getting-starting-with-boost-python-tp2700144p3722133.html > Sent from the Python - c++-sig mailing list archive at Nabble.com. > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fjanoos at yahoo.com Tue Aug 9 23:17:08 2011 From: fjanoos at yahoo.com (fjanoos) Date: Tue, 9 Aug 2011 14:17:08 -0700 (PDT) Subject: [C++-sig] Wrapping std::vector<int> with boost::python::list Message-ID: <1312924628432-3731281.post@n4.nabble.com> Hello, I'm new to boost::python and was having trouble wrapping a C++ function of the form void FindVCs(int vId, vector<int>& vcs); Here vcs is allocated in the caller and populated by FindVCs. Initially, I considered wrapping it something like this: boost::python::list* FindVCs_wrap(int vid) { vector<int> vcs; FindVCs(vid,vcs); //wrap and return boost::python::list* out_list = new boost::python::list; unsigned int num_elems = vcs.size(); for( unsigned int idx = 0; idx < num_elems; idx++){ out_list->append(vcs[idx]); } return out_list; } however, this gives me compile time errors Error 5 error C2027: use of undefined type 'boost::python::detail::specify_a_return_value_policy_to_wrap_functions_returning' Then I change the return type to boost::shared_ptr<boost::python::list> FindVCs_wrap(int vid) { ... return boost::shared_ptr<boost::python::list>(out_list); } This compiles fine but then at runtime Python raises: TypeError: No to_python (by-value) converter found for C++ type: class boost::shared_ptr<class boost::python::list> Any ideas of what I am doing wrong ? Also, is it possible to instantiate and directly return a std::vector object from Python without going through the boost::python::list object ? I've been googling this for 2 days and I can't seem to find any relevant information. Any pointers would be appreciated. thanks, -fj -- View this message in context: http://boost.2283326.n4.nabble.com/Wrapping-std-vector-int-with-boost-python-list-tp3731281p3731281.html Sent from the Python - c++-sig mailing list archive at Nabble.com. From fjanoos at yahoo.com Tue Aug 9 23:21:37 2011 From: fjanoos at yahoo.com (fjanoos) Date: Tue, 9 Aug 2011 14:21:37 -0700 (PDT) Subject: [C++-sig] Wrapping std::vector<int> with boost::python::list Message-ID: <1312924897105-3731291.post@n4.nabble.com> Hello, I'm new to boost::python and was having trouble wrapping a C++ function of the form void FindVCs(int vId, vector<int>& vcs); Here vcs is allocated in the caller and populated by FindVCs. Initially, I considered wrapping it something like this: boost::python::list* FindVCs_wrap(int vid) { vector<int> vcs; FindVCs(vid,vcs); //wrap and return boost::python::list* out_list = new boost::python::list; unsigned int num_elems = vcs.size(); for( unsigned int idx = 0; idx < num_elems; idx++){ out_list->append(vcs[idx]); } return out_list; } however, this gives me compile time errors Error 5 error C2027: use of undefined type 'boost::python::detail::specify_a_return_value_policy_to_wrap_functions_returning' Then I change the return type to boost::shared_ptr<boost::python::list> FindVCs_wrap(int vid) { ...
return boost::shared_ptr<boost::python::list>(out_list); } This compiles fine but then at runtime Python raises: TypeError: No to_python (by-value) converter found for C++ type: class boost::shared_ptr<class boost::python::list> Any ideas of what I am doing wrong ? thanks, -fj -- View this message in context: http://boost.2283326.n4.nabble.com/Wrapping-std-vector-int-with-boost-python-list-tp3731291p3731291.html Sent from the Python - c++-sig mailing list archive at Nabble.com. From talljimbo at gmail.com Tue Aug 9 23:37:13 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 09 Aug 2011 14:37:13 -0700 Subject: [C++-sig] Wrapping std::vector<int> with boost::python::list In-Reply-To: <1312924897105-3731291.post@n4.nabble.com> References: <1312924897105-3731291.post@n4.nabble.com> Message-ID: <4E41A889.4020609@gmail.com> Just return boost::python::object by value; it's actually a smart pointer that carries a PyObject* and uses Python's reference counting, and you can implicitly create one from boost::python::list (since list derives from object). Just returning boost::python::list by value might also work, but that copy constructor might be overloaded to do deep copies (I'm not sure), and you seem to be trying to avoid that. Jim On 08/09/2011 02:21 PM, fjanoos wrote: > Hello, > > I'm new to boost::python and was having trouble wrapping a C++ function of > the form > void FindVCs(int vId, vector<int>& vcs); > > Here vcs is allocated in the caller and populated by FindVCs. > > Initially, I considered wrapping it something like this: > > boost::python::list* FindVCs_wrap(int vid) > { > vector<int> vcs; > > FindVCs(vid,vcs); > //wrap and return > boost::python::list* out_list = new boost::python::list; > unsigned int num_elems = vcs.size(); > for( unsigned int idx = 0; idx < num_elems; idx++){ > out_list->append(vcs[idx]); > } > return out_list; > } > > however, this gives me compile time errors > Error 5 error C2027: use of undefined type > 'boost::python::detail::specify_a_return_value_policy_to_wrap_functions_returning' > > Then I change the return type to > > boost::shared_ptr<boost::python::list> FindVCs_wrap(int vid) > { > ... > return boost::shared_ptr<boost::python::list>(out_list); > } > > This compiles fine but then at runtime Python raises: > > TypeError: No to_python (by-value) converter found for C++ type: class > boost::shared_ptr<class boost::python::list> > > > Any ideas of what I am doing wrong ? > > thanks, > -fj > > > > -- > View this message in context: http://boost.2283326.n4.nabble.com/Wrapping-std-vector-int-with-boost-python-list-tp3731291p3731291.html > Sent from the Python - c++-sig mailing list archive at Nabble.com. > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From stefan at seefeld.name Tue Aug 9 23:35:14 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Tue, 09 Aug 2011 17:35:14 -0400 Subject: [C++-sig] Wrapping std::vector<int> with boost::python::list In-Reply-To: <1312924628432-3731281.post@n4.nabble.com> References: <1312924628432-3731281.post@n4.nabble.com> Message-ID: <4E41A812.9080300@seefeld.name> On 08/09/2011 05:17 PM, fjanoos wrote: > Hello, > > I'm new to boost::python and was having trouble wrapping a C++ function of > the form > void FindVCs(int vId, vector<int>& vcs); > > Here vcs is allocated in the caller and populated by FindVCs. Note that this should work, but requires you to instruct Python that 'vcs' is an "inout" value, so it gets passed into the function by-reference, not by-value (the default).
Please read the tutorial (and reference) sections on call policies: http://www.boost.org/doc/libs/1_46_0/libs/python/doc/tutorial/doc/html/python/functions.html#python.call_policies > > Initially, I considered wrapping it something like this: > > boost::python::list* FindVCs_wrap(int vid) > { > vector vcs; > > FindVCs(vid,vcs); > //wrap and return > boost::python::list* out_list = new boost::python::list; > unsigned int num_elems = vcs.size(); > for( unsigned int idx = 0; idx < num_elems; idx++){ > out_list->append(vcs[idx]); > } > return out_list; > } > > however, this gives me compile time errors > Error 5 error C2027: use of undefined type > 'boost::python::detail::specify_a_return_value_policy_to_wrap_functions_returning' This is a little trick (you could call it a kludge) to flag a user error using metaprogramming: the last line above should be spoken out aloud: Boost.Python noticed an ambiguous situation, and instead of guessing what (return) value ownership semantics you intended, it requests you to specify it (return value policies are very similar to call policies, so the same docs above apply). > > > Then I change the return type to > > boost::shared_ptr FindVCs_wrap(int vid) > { > ... > return boost::shared_ptr(out_list); > } > > This compiles fine but then at runtime Python raises: > > TypeError: No to_python (by-value) converter found for C++ type: class > boost::shared_ptr ::python::list> Python list objects are themselves shared pointers, so Python wouldn't know what to do with a shared_ptr. Regards, Stefan -- ...ich hab' noch einen Koffer in Berlin... From ae_contact at espanel.net Wed Aug 10 00:57:13 2011 From: ae_contact at espanel.net (Arnaud Espanel) Date: Tue, 9 Aug 2011 18:57:13 -0400 Subject: [C++-sig] Functions exposed using boost::python not reported by the python profiler Message-ID: Hi -- When running a python program through the python profiler, calls to a function exposed directly through C python api are reported, but not the calls to a function exposed through boost::python. I'd like to see boost::python calls reported on their own as well... but have not managed to. To demonstrate this, a simple "greet" fonction that returns a string has been implemented in two versions. The first implementation ("cext" module) has been exposed directly using python C api. The second implementation ("bpyext" module) has been exposed using boost::python (the corresponding source code is at the end of this message). When profiling these functions: >>> import cProfile as profile >>> profile.run('cext.greet()') 3 function calls in 0.000 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.000 0.000 :1() 1 0.000 0.000 0.000 0.000 {cext.greet} 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} >>> profile.run('bpyext.greet()') 2 function calls in 0.000 CPU seconds Ordered by: standard name ncalls tottime percall cumtime percall filename:lineno(function) 1 0.000 0.000 0.000 0.000 :1() 1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects} We can see that the call to the C extension module (cext.greet) get reported, but not the call to the boost::python extension (bpyext.greet). Looking at how the profile module works, the profile module is notified by the python interpreter on each function call/return. This notification happens through the function set by os.setprofile(function). 
Let's provide some simple "profiler" function that just prints whenever it is called by the interpreter: >>> import cext, bpyext, sys >>> def profiler(frame,event,arg): ... print 'PROF %r %r' % (event, arg) ... >>> sys.setprofile(profiler) PROF 'return' None >>> cext.greet() PROF 'call' None PROF 'c_call' PROF 'c_return' 'hello from C' PROF 'return' None >>> bpyext.greet() PROF 'call' None 'Hello from boost::python' PROF 'return' None It works as expected for the C extension (we can see the c_call/c_return events), but no events are triggered when the boost::python function is called. Why is this the case? Is there something that can be done in order to trigger these c_call/c_return events for boost::python extensions? Thanks, Arnaud For reference, here is the code of the C/boost::python modules. Tests were done on Linux x86_64, using python 2.6.5 and boost 1.40.0 ======================== cext.c ======================== include static PyObject * greet(PyObject *self, PyObject *args) { ? return Py_BuildValue("s", "hello from C"); } static PyMethodDef CextMethods[] = { ? ? {"greet", ?greet, METH_VARARGS, ? ? ?"Say hi"}, ? ? {NULL, NULL, 0, NULL} }; PyMODINIT_FUNC initcext(void) { ? (void) Py_InitModule("cext", CextMethods); } ======================= bpyext.cpp ====================== #include #include std::string greet() { return "Hello from boost::python"; } using namespace boost::python; BOOST_PYTHON_MODULE(bpyext) { def("greet", &greet) ; } From talljimbo at gmail.com Wed Aug 10 02:23:26 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 09 Aug 2011 17:23:26 -0700 Subject: [C++-sig] Functions exposed using boost::python not reported by the python profiler In-Reply-To: References: Message-ID: <4E41CF7E.30608@gmail.com> On 08/09/2011 03:57 PM, Arnaud Espanel wrote: > Hi -- > When running a python program through the python profiler, calls to a > function exposed directly through C python api are reported, but not > the calls to a function exposed through boost::python. I'd like to see > boost::python calls reported on their own as well... but have not > managed to. This is my first look into how Python profiling works, but I think the basic problem is that Boost.Python-wrapped functions are really just callable objects. Their implementation doesn't use the standard Python C-API procedure for exposing functions (which requires lots of static variables), and they of course aren't pure Python functions either. So none of the triggers for trace events within Python itself every get called. The solution to is probably to modify the internals of Boost.Python's special Function type to explicitly trigger those events. That's probably what the behavior of Boost.Python should be, and I suspect it would be a fairly small change to make. It's worrying that the functions that trigger the events don't appear to be a documented part of the Python C-API, though. The only way to figure out how to do it would be to dive into the Python code itself, and hope those functions are stable between Python releases. Jim > To demonstrate this, a simple "greet" fonction that returns a string > has been implemented in two versions. The first implementation ("cext" > module) has been exposed directly using python C api. The second > implementation ("bpyext" module) has been exposed using boost::python > (the corresponding source code is at the end of this message). 
> > When profiling these functions: > >>>> import cProfile as profile >>>> profile.run('cext.greet()') > 3 function calls in 0.000 CPU seconds > > Ordered by: standard name > > ncalls tottime percall cumtime percall filename:lineno(function) > 1 0.000 0.000 0.000 0.000:1() > 1 0.000 0.000 0.000 0.000 {cext.greet} > 1 0.000 0.000 0.000 0.000 {method 'disable' of > '_lsprof.Profiler' objects} > > >>>> profile.run('bpyext.greet()') > 2 function calls in 0.000 CPU seconds > > Ordered by: standard name > > ncalls tottime percall cumtime percall filename:lineno(function) > 1 0.000 0.000 0.000 0.000:1() > 1 0.000 0.000 0.000 0.000 {method 'disable' of > '_lsprof.Profiler' objects} > > We can see that the call to the C extension module (cext.greet) get > reported, but not the call to the boost::python extension > (bpyext.greet). Looking at how the profile module works, the profile > module is notified by the python interpreter on each function > call/return. This notification happens through the function set by > os.setprofile(function). Let's provide some simple "profiler" function > that just prints whenever it is called by the interpreter: > >>>> import cext, bpyext, sys >>>> def profiler(frame,event,arg): > ... print 'PROF %r %r' % (event, arg) > ... >>>> sys.setprofile(profiler) > PROF 'return' None >>>> cext.greet() > PROF 'call' None > PROF 'c_call' > PROF 'c_return' > 'hello from C' > PROF 'return' None >>>> bpyext.greet() > PROF 'call' None > 'Hello from boost::python' > PROF 'return' None > > It works as expected for the C extension (we can see the > c_call/c_return events), but no events are triggered when the > boost::python function is called. Why is this the case? Is there > something that can be done in order to trigger these c_call/c_return > events for boost::python extensions? > > Thanks, > Arnaud > > > For reference, here is the code of the C/boost::python modules. Tests > were done on Linux x86_64, using python 2.6.5 and boost 1.40.0 > ======================== cext.c ======================== > include > > static PyObject * > greet(PyObject *self, PyObject *args) > { > return Py_BuildValue("s", "hello from C"); > } > > static PyMethodDef CextMethods[] = { > {"greet", greet, METH_VARARGS, > "Say hi"}, > {NULL, NULL, 0, NULL} > }; > > PyMODINIT_FUNC > initcext(void) > { > (void) Py_InitModule("cext", CextMethods); > } > > ======================= bpyext.cpp ====================== > #include > #include > > std::string greet() { > return "Hello from boost::python"; > } > > using namespace boost::python; > > BOOST_PYTHON_MODULE(bpyext) { > def("greet",&greet) > ; > } > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From stefan at seefeld.name Wed Aug 10 02:29:17 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Tue, 09 Aug 2011 20:29:17 -0400 Subject: [C++-sig] Functions exposed using boost::python not reported by the python profiler In-Reply-To: <4E41CF7E.30608@gmail.com> References: <4E41CF7E.30608@gmail.com> Message-ID: <4E41D0DD.9090308@seefeld.name> On 08/09/2011 08:23 PM, Jim Bosch wrote: > On 08/09/2011 03:57 PM, Arnaud Espanel wrote: >> Hi -- >> When running a python program through the python profiler, calls to a >> function exposed directly through C python api are reported, but not >> the calls to a function exposed through boost::python. I'd like to see >> boost::python calls reported on their own as well... but have not >> managed to. 
> > This is my first look into how Python profiling works, but I think the > basic problem is that Boost.Python-wrapped functions are really just > callable objects. Their implementation doesn't use the standard > Python C-API procedure for exposing functions (which requires lots of > static variables), and they of course aren't pure Python functions > either. So none of the triggers for trace events within Python itself > every get called. > > The solution to is probably to modify the internals of Boost.Python's > special Function type to explicitly trigger those events. That's > probably what the behavior of Boost.Python should be, and I suspect it > would be a fairly small change to make. > > It's worrying that the functions that trigger the events don't appear > to be a documented part of the Python C-API, though. The only way to > figure out how to do it would be to dive into the Python code itself, > and hope those functions are stable between Python releases. I would actually go one step further and report this issue (of undocumented / non-public functions required for tracing) to the Python bug tracker (http://bugs.python.org), and hope that this gets properly addressed. Stefan -- ...ich hab' noch einen Koffer in Berlin... From strattonbrazil at gmail.com Wed Aug 10 05:33:26 2011 From: strattonbrazil at gmail.com (Josh Stratton) Date: Tue, 9 Aug 2011 20:33:26 -0700 Subject: [C++-sig] arguments to c++ functions Message-ID: Where can I find docs for sending arguments to C\C++ functions? I've followed the hello world example, which seems to automatically convert a python string to a char*, but I'm going to need more conversions later like how to handle a char**. Something like ["a","b","c"] doesn't seem to work. From babakage at gmail.com Fri Aug 12 13:28:36 2011 From: babakage at gmail.com (babak) Date: Fri, 12 Aug 2011 04:28:36 -0700 (PDT) Subject: [C++-sig] wrapped std::vector slice affecting items in a python list Message-ID: <1313148516764-3738972.post@n4.nabble.com> Hi, I've come across some unusual behaviour that I was hoping some one might be able to explain to me. In python I have a list and a wrapped std::vector (using the vector_indexing_suite) where the list contains the items in the vector. If I slice insert items into the vector then the items in the python list change. The attached example illustrates this. Is this behaviour expected ? I'm quite confused as to what's going on so any help clarifying this would be greatly appreciated. thanks, Babak http://boost.2283326.n4.nabble.com/file/n3738972/test.py test.py http://boost.2283326.n4.nabble.com/file/n3738972/source.cpp source.cpp -- View this message in context: http://boost.2283326.n4.nabble.com/wrapped-std-vector-slice-affecting-items-in-a-python-list-tp3738972p3738972.html Sent from the Python - c++-sig mailing list archive at Nabble.com. From j.reid at mail.cryst.bbk.ac.uk Fri Aug 12 13:36:15 2011 From: j.reid at mail.cryst.bbk.ac.uk (John Reid) Date: Fri, 12 Aug 2011 12:36:15 +0100 Subject: [C++-sig] automatic conversion from python unicode object to C++ std::string? Message-ID: I recently upgraded to ipython 0.11. In 0.11 sys.argv entries are of type unicode rather than string. All of my scripts that call into boost.python extensions fail as my exposed functions expect arguments of type std::string. Presumably I could recode all my functions to use std::wstring and everything would work. This would be a lot of work so for the moment, I'm encoding everything in python from unicode to strings. 
I was wondering if there is a simple way to tell my boost.python extensions to do this encoding automatically.

Thanks,
John.

From talljimbo at gmail.com Fri Aug 12 20:35:07 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 12 Aug 2011 11:35:07 -0700 Subject: [C++-sig] arguments to c++ functions In-Reply-To: References: Message-ID: <4E45725B.1080809@gmail.com>

On 08/09/2011 08:33 PM, Josh Stratton wrote:
> Where can I find docs for sending arguments to C/C++ functions? I've followed the hello world example, which seems to automatically convert a python string to a char*, but I'm going to need more conversions later like how to handle a char**. Something like ["a","b","c"] doesn't seem to work.

I don't think there are automatic conversions for char** - it can mean too many things, and there's no automatic way to pass in the length. If you want to pass in a sequence of Python strings, there are plenty of ways to get C++ code to accept that. But if the C++ code you're dealing with has char** arguments, you'll probably have to provide manual wrappers for those functions; something like this:

namespace bp = boost::python;

void original(char** s, int n);

void wrapper(bp::object const & o) {
    int n = bp::len(o);
    std::vector<std::string> v(n);
    boost::scoped_array<char*> s(new char*[n]);
    for (int i = 0; i < n; ++i) {
        v[i] = bp::extract<std::string>(o[i]);
        // c_str() yields const char*, so a const_cast is needed to satisfy the char** API
        s[i] = const_cast<char*>(v[i].c_str());
    }
    original(s.get(), n);
}

Good luck!

Jim Bosch

From talljimbo at gmail.com Fri Aug 12 21:15:01 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 12 Aug 2011 12:15:01 -0700 Subject: [C++-sig] automatic conversion from python unicode object to C++ std::string? In-Reply-To: References: Message-ID: <4E457BB5.2060702@gmail.com>

On 08/12/2011 04:36 AM, John Reid wrote:
> I recently upgraded to ipython 0.11. In 0.11 sys.argv entries are of type unicode rather than string. All of my scripts that call into boost.python extensions fail as my exposed functions expect arguments of type std::string. Presumably I could recode all my functions to use std::wstring and everything would work. This would be a lot of work so for the moment, I'm encoding everything in python from unicode to strings. I was wondering if there is a simple way to tell my boost.python extensions to do this encoding automatically.
>

Hmm. You might be able to write a custom rvalue_from_python conversion from the Python unicode type to std::string. You would only have to do that once, and it would work everywhere, but it would mean unicode objects would be convertible to std::string everywhere, and I'm not sure that's desirable.

I don't think the from-python conversions are terribly well-documented, but you can learn a lot from boost/python/converter/rvalue_from_python_data.hpp. If you need help, just ask.

Good luck!

Jim Bosch

From talljimbo at gmail.com Fri Aug 12 21:30:58 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 12 Aug 2011 12:30:58 -0700 Subject: [C++-sig] wrapped std::vector slice affecting items in a python list In-Reply-To: <1313148516764-3738972.post@n4.nabble.com> References: <1313148516764-3738972.post@n4.nabble.com> Message-ID: <4E457F72.3020004@gmail.com>

On 08/12/2011 04:28 AM, babak wrote:
> Hi,
>
> I've come across some unusual behaviour that I was hoping someone might be able to explain to me.
>
> In python I have a list and a wrapped std::vector (using the vector_indexing_suite) where the list contains the items in the vector. If I slice insert items into the vector then the items in the python list change.
> The attached example illustrates this.
>
> Is this behaviour expected ?
>
> I'm quite confused as to what's going on so any help clarifying this would be greatly appreciated.
> thanks,
> Babak
>
> http://boost.2283326.n4.nabble.com/file/n3738972/test.py test.py
> http://boost.2283326.n4.nabble.com/file/n3738972/source.cpp source.cpp

Some of the stuff vector_indexing_suite does is sufficiently complex that I'd say this behavior is at least not unexpected. After all, when you slice a Python list and modify the elements in the sliced bit, you modify the original list as well (unless your elements are immutable things like strings or numbers, in which case you aren't really modifying them, you're replacing them). I think the thing you get back from slicing a std::vector is probably a proxy object designed to have exactly that behavior (note: I haven't checked; that's just how I'd have implemented it).

In any case, I'd recommend against wrapping a std::vector of a mutable C++ type directly in Python. It's almost always unsafe, because any operation on the vector that invalidates iterators will destroy the elements of the vector, even if there are Python objects that point to those elements elsewhere with nonzero reference counts. It's much safer to use a vector of shared_ptr, or to always convert to Python lists and avoid vector_indexing_suite. This isn't a Boost.Python-specific problem; it's an inherent incompatibility between C++'s value-based containers and Python's shared-reference ones.

Jim Bosch

From Holger.Joukl at LBBW.de Mon Aug 15 17:11:07 2011 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Mon, 15 Aug 2011 17:11:07 +0200 Subject: [C++-sig] void* (void pointer) function arguments Message-ID:

Hi,

I'm trying to wrap a C++ API that uses void* to pass around arbitrary, application-specific stuff.

I am a bit unsure about how to work the void pointers. A viable way seems to be to wrap the original API functions with thin wrappers that take a boost::python::object and hand the "raw" PyObject* into the original function:

// Expose the method as taking any bp::object and hand the "raw" PyObject
// pointer to the void-ptr-expecting API function
int Worker_destroy2_DestructionCallbackPtr_constVoidPtr(
    Worker* worker, DestructionCallback* cb, bp::object& closure)
{
    return worker->destroy2(cb, closure.ptr());
}

The information available is then retrieved by some callback function (which is pure virtual in the API and needs to be overridden in Python). Before calling into Python I then cast the void* back to a PyObject*, make a boost::python::object from it and invoke the Python override:

class DestructionCallbackWrap : public DestructionCallback, public bp::wrapper<DestructionCallback> {
    virtual void callback(Worker* worker, void* closure) {
        std::cout << ">>> " << __PRETTY_FUNCTION__ << std::endl;
        // everything ending up here from the Python side is a PyObject
        bp::handle<> handle(bp::borrowed(reinterpret_cast<PyObject*>(closure)));
        bp::object closureObj(handle);
        this->get_override("callback")(bp::ptr(worker), closureObj);
        std::cout << "<<< " << __PRETTY_FUNCTION__ << std::endl;
    }
};

As I've tried to gather information about void* handling with boost.python back and forth without finding much (an FAQ entry seems to have existed once upon a time?) - is this a sane approach? Or is there some automagical void* or const void* handling in boost.python that I am totally missing?
Any hint much appreciated, Holger P.S.: Here's a full minimal example for reference: // file void_ptr_cb.hpp // API to wrap class Worker; class DestructionCallback { public: DestructionCallback() {} virtual ~DestructionCallback() {} virtual void callback(Worker* worker, void* closure) = 0; }; class Worker { public: Worker() {} virtual ~Worker() {} virtual int destroy() { return 0; } int destroy2(DestructionCallback* cb, const void* closure=NULL) { std::cout << ">>> " << __PRETTY_FUNCTION__ << std::endl; cb->callback(this, const_cast(closure)); std::cout << "<<< " << __PRETTY_FUNCTION__ << std::endl; return 0; } private: Worker(const Worker& worker); }; // file wrap_void_ptr_cb.cpp #include #include #include "void_ptr_cb.hpp" namespace bp = boost::python; // Helper classes needed for boost.python wrapping class DestructionCallbackWrap : public DestructionCallback, public bp::wrapper { virtual void callback(Worker* worker, void* closure) { std::cout << ">>> " << __PRETTY_FUNCTION__ << std::endl; // everything ending up here from Python side is a PyObject bp::handle<> handle(bp::borrowed(reinterpret_cast (closure))); bp::object closureObj(handle); this->get_override("callback")(bp::ptr(worker), closureObj); std::cout << "<<< " << __PRETTY_FUNCTION__ << std::endl; } }; class WorkerWrap : public Worker, public bp::wrapper { public: virtual int destroy() { if (bp::override f = this->get_override("destroy")) return f(); // *note* return Worker::destroy(); } int default_destroy() { return this->Worker::destroy(); } }; // Expose the method as taking any bp::object and hand the "raw" PyObject pointer to // the void-ptr-expecting API-function int Worker_destroy2_DestructionCallbackPtr_constVoidPtr( Worker* worker, DestructionCallback* cb, bp::object& closure) { return worker->destroy2(cb, closure.ptr()); } BOOST_PYTHON_MODULE(void_ptr_cb) { bp::class_ ("DestructionCallback", bp::init<>()) .def("callback", bp::pure_virtual(&DestructionCallback::callback)) ; bp::class_("Worker") .def("destroy", &Worker::destroy, &WorkerWrap::default_destroy) .def("destroy2", &Worker_destroy2_DestructionCallbackPtr_constVoidPtr, (bp::arg("cb"), bp::arg("closure")=bp::object())) ; }; # file Jamroot # Run with: # BOOST_ROOT=/var/tmp/lb54320/boost_apps/boost_1_46_1 BOOST_BUILD_PATH=/var/tmp/lb54320/boost_apps/boost_1_46_1 \ # /var/tmp/$USER/boost_apps/boost_1_46_1/bjam -d+2 toolset=gcc-4.5.1 \ # --build-dir=/var/tmp/$USER/boost_apps/boost_1_46_1/build/py2.7/boost/1.46.1/ cxxflags="-DDEBUG_HIGH \ # -DBOOST_PYTHON_TRACE_REGISTRY" link=shared threading=multi variant=release void_ptr_cb # # get the environment variable "USER" import os ; local _USER = [ os.environ USER ] ; #ECHO $(_USER) ; local _WORKDIR = /var/tmp/$(_USER)/boost_apps ; local _BOOST_MODULE = boost_1_46_1 ; local _BOOST_ROOT = $(_WORKDIR)/$(_BOOST_MODULE) ; local _BOOST_VERSION = 1.46.1 ; #ECHO $(_BOOST_ROOT) ; use-project boost : $(_BOOST_ROOT) ; # Set up the project-wide requirements that everything uses the # boost_python library from the project whose global ID is # /boost/python. 
project minimal_void_ptr_cb : requirements /boost/python//boost_python debug:$(_BOOST_ROOT)/stage/py2.7/boost/$ (_BOOST_VERSION)/debug/lib release:$(_BOOST_ROOT)/stage/py2.7/boost/$ (_BOOST_VERSION)/lib ; python-extension void_ptr_cb : # sources + // Add all files here otherwise we get undefined symbol errors like wrap_void_ptr_cb.cpp : # requirements * : # default-build * : # usage-requirements * ; #!/apps/local/gcc/4.5.1/bin/python2.7 # file test.py import os import sys _USER = os.getenv("USER") EXPATH = ('/var/tmp/%s/boost_apps/boost_1_46_1/build/py2.7/boost/1.46.1/' 'minimal_void_ptr_cb/gcc-4.5.1/release/threading-multi' % (_USER)) sys.path.insert(1, EXPATH) import void_ptr_cb class MyDestructionCallback(void_ptr_cb.DestructionCallback): def callback(self, worker, closure): print "MyDestructionCallback.callback(%s, %s, %s)" % (self, worker, closure) class MyDestructionCallback2(void_ptr_cb.DestructionCallback): def callback(self, worker, closure): print "MyDestructionCallback.callback2(%s, %s, %s)" % (self, worker, closure) closure.do('what?') class SomeClass(object): def do(self, something=None): print "SomeClass.do(something=%s)" % repr(something) md = MyDestructionCallback() md.callback('worker', 'closure') print w = void_ptr_cb.Worker() w.destroy() w.destroy2(md, None) print some = SomeClass() w = void_ptr_cb.Worker() w.destroy() w.destroy2(md, some) some.do('else') print md2 = MyDestructionCallback2() w = void_ptr_cb.Worker() w.destroy() w.destroy2(md2, SomeClass()) Test run output: 0 holger at devel .../minimal_void_ptr_cb $ ./test.py MyDestructionCallback.callback(<__main__.MyDestructionCallback object at 0x2a6cc0>, worker, closure) >>> int Worker::destroy2(DestructionCallback*, const void*) >>> virtual void DestructionCallbackWrap::callback(Worker*, void*) MyDestructionCallback.callback(<__main__.MyDestructionCallback object at 0x2a6cc0>, , None) <<< virtual void DestructionCallbackWrap::callback(Worker*, void*) <<< int Worker::destroy2(DestructionCallback*, const void*) >>> int Worker::destroy2(DestructionCallback*, const void*) >>> virtual void DestructionCallbackWrap::callback(Worker*, void*) MyDestructionCallback.callback(<__main__.MyDestructionCallback object at 0x2a6cc0>, , <__main__.SomeClass object at 0x2b7710>) <<< virtual void DestructionCallbackWrap::callback(Worker*, void*) <<< int Worker::destroy2(DestructionCallback*, const void*) SomeClass.do(something='else') >>> int Worker::destroy2(DestructionCallback*, const void*) >>> virtual void DestructionCallbackWrap::callback(Worker*, void*) MyDestructionCallback.callback2(<__main__.MyDestructionCallback2 object at 0x2a6cf0>, , <__main__.SomeClass object at 0x2b7730>) SomeClass.do(something='what?') <<< virtual void DestructionCallbackWrap::callback(Worker*, void*) <<< int Worker::destroy2(DestructionCallback*, const void*) Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart From talljimbo at gmail.com Mon Aug 15 20:44:36 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Mon, 15 Aug 2011 11:44:36 -0700 Subject: [C++-sig] void* (void pointer) function arguments In-Reply-To: References: Message-ID: <4E496914.5000007@gmail.com> On 08/15/2011 08:11 AM, Holger Joukl wrote: > > Hi, > > I'm trying to wrap a C++-API that uses void* to pass around arbitrary, > application-specific stuff. > > I am a bit unsure about how to work the void-pointers. 
A viable way seems > to wrap the original > API functions with thin wrappers that take a boost::python::object and hand > the "raw" PyObject* > into the original function: > I don't see anything wrong with this approach, aside from the fact that you have to make an explicit wrapper for all of your functions that take void pointers. It might be a little less safe, but you should be able to make things more automatic by casting function pointers that accept a void* to a signature with void* replaced by PyObject*; Boost.Python does know how to wrap functions that take PyObject*. For example, in the wrapper for your "Worker" class, you'd have: .def("destroy2", (int (Worker::*)(DestructionCallback*,PyObject*))&Worker::destroy, (bp::arg("cb"), bp::arg("closure")=bp::object()) ) I think this should do the same thing, though you may want to check that default arguments work as expected. Of course this could lead to big problems if you pass around things that aren't in fact PyObject* as void pointers in the same places, though I think your solution suffers from this too. There really isn't anything automatic for dealing with void pointers in Boost.Python, because it's so template-based - it really can't learn anything from a void pointer. Good luck! Jim Bosch From Holger.Joukl at LBBW.de Tue Aug 16 11:31:03 2011 From: Holger.Joukl at LBBW.de (Holger Joukl) Date: Tue, 16 Aug 2011 11:31:03 +0200 Subject: [C++-sig] void* (void pointer) function arguments In-Reply-To: <4E496914.5000007@gmail.com> References: <4E496914.5000007@gmail.com> Message-ID: Hi Jim, > > I am a bit unsure about how to work the void-pointers. A viable way seems > > to wrap the original > > API functions with thin wrappers that take a boost::python::object and hand > > the "raw" PyObject* > > into the original function: > > > > I don't see anything wrong with this approach, aside from the fact that > you have to make an explicit wrapper for all of your functions that take > void pointers. It might be a little less safe, but you should be able > to make things more automatic by casting function pointers that accept a > void* to a signature with void* replaced by PyObject*; Boost.Python does > know how to wrap functions that take PyObject*. For example, in the > wrapper for your "Worker" class, you'd have: Just tested, it works: BOOST_PYTHON_MODULE(void_ptr_cb) { bp::class_ ("DestructionCallback", bp::init<>()) .def("callback", bp::pure_virtual(&DestructionCallback::callback)) ; bp::class_("Worker") .def("destroy", &Worker::destroy, &WorkerWrap::default_destroy) .def("destroy2", (int (Worker::*)(DestructionCallback*, PyObject*))&Worker::destroy2, (bp::arg("cb"), bp::arg("closure")=bp::object())) ; }; So I could save the thin wrappers that get called from the Python side. > I think this should do the same thing, though you may want to check that > default arguments work as expected. Of course this could lead to big > problems if you pass around things that aren't in fact PyObject* as void > pointers in the same places, though I think your solution suffers from > this too. Absolutely :). The default args seem to work fine with your proposal, though. > Good luck! 
Thanks a lot and all the best
Holger

Landesbank Baden-Wuerttemberg Anstalt des oeffentlichen Rechts Hauptsitze: Stuttgart, Karlsruhe, Mannheim, Mainz HRA 12704 Amtsgericht Stuttgart

From babakage at gmail.com Tue Aug 16 16:32:41 2011 From: babakage at gmail.com (babak) Date: Tue, 16 Aug 2011 07:32:41 -0700 (PDT) Subject: [C++-sig] wrapped std::vector slice affecting items in a python list In-Reply-To: <4E457F72.3020004@gmail.com> References: <1313148516764-3738972.post@n4.nabble.com> <4E457F72.3020004@gmail.com> Message-ID: <1313505161303-3747397.post@n4.nabble.com>

Hi Jim

Thanks for your response.

I'm a bit surprised that this is the case; I kind of thought that indexing_suite was there to bridge this incompatibility. Is it normal then for large projects reliant on value-based containers to copy data when going from python to c++?

thanks
babak

--
View this message in context: http://boost.2283326.n4.nabble.com/wrapped-std-vector-slice-affecting-items-in-a-python-list-tp3738972p3747397.html
Sent from the Python - c++-sig mailing list archive at Nabble.com.

From talljimbo at gmail.com Tue Aug 16 19:35:35 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 16 Aug 2011 10:35:35 -0700 Subject: [C++-sig] wrapped std::vector slice affecting items in a python list In-Reply-To: <1313505161303-3747397.post@n4.nabble.com> References: <1313148516764-3738972.post@n4.nabble.com> <4E457F72.3020004@gmail.com> <1313505161303-3747397.post@n4.nabble.com> Message-ID: <4E4AAA67.4060909@gmail.com>

On 08/16/2011 07:32 AM, babak wrote:
> Hi Jim
>
> Thanks for your response.
>
> I'm a bit surprised that this is the case; I kind of thought that indexing_suite was there to bridge this incompatibility. Is it normal then for large projects reliant on value-based containers to copy data when going from python to c++?
>

I don't really know what normal is; my particular area is scientific computing, so I actually use numpy arrays in almost all the cases where I'm dealing with huge containers. I do tend to copy data when dealing with small containers of more complex objects, and when it's really an issue, I make my C++ objects cheap to copy (via copy on write or something). Things are also a lot safer if you only wrap vectors as const; it's exposing the mutators to Python that gets really difficult. Finally, if your C++ code works on iterators rather than containers, you can avoid copying by using stl_input_iterator to iterate directly over the Python containers.

One large project I'm a part of uses vectors of shared_ptr extensively on the C++ side to combat this problem (even though they use SWIG, not Boost.Python). If you have control over the C++, that's often the best solution, regardless of how you're exposing them to Python.

I would guess that any large project that uses SWIG with value-based containers is probably mostly ignoring the safety issues; unlike Boost.Python, SWIG simply doesn't have the facilities to protect you from null pointers and dangling references, so its STL wrappers don't do anything to prevent such things. The onus is on the Python user to be safe. On the Boost.Python side, indexing_suite essentially does the best it can to make things safe and full-featured, and while it can't totally foolproof your code, it does make it a lot more difficult to segfault accidentally. If you find yourself needing it a lot, you may want to look into indexing suite v2, which I believe is best obtained through the Py++ package.
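As a rough illustration of the stl_input_iterator route, here is a minimal sketch; the function and module names are made up and it is untested, but it shows the pattern of feeding a Python iterable straight into iterator-based C++ code:

#include <boost/python.hpp>
#include <boost/python/stl_iterator.hpp>
#include <numeric>

namespace bp = boost::python;

// Hypothetical C++ API that already works on an iterator range.
template <typename Iter>
double sum_range(Iter first, Iter last) {
    return std::accumulate(first, last, 0.0);
}

// The wrapper iterates directly over any Python iterable of numbers;
// elements are extracted one at a time, with no intermediate std::vector.
double sum_iterable(bp::object const & iterable) {
    return sum_range(bp::stl_input_iterator<double>(iterable),
                     bp::stl_input_iterator<double>());
}

BOOST_PYTHON_MODULE(iter_example) {
    bp::def("sum_iterable", &sum_iterable);
}

From Python this accepts a list, tuple, or even a generator of floats, since only single-pass input iteration is required.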
Jim From babakage at gmail.com Tue Aug 16 20:29:53 2011 From: babakage at gmail.com (Babak Khataee) Date: Tue, 16 Aug 2011 19:29:53 +0100 Subject: [C++-sig] wrapped std::vector slice affecting items in a python list In-Reply-To: <4E4AAA67.4060909@gmail.com> References: <1313148516764-3738972.post@n4.nabble.com> <4E457F72.3020004@gmail.com> <1313505161303-3747397.post@n4.nabble.com> <4E4AAA67.4060909@gmail.com> Message-ID: Okay thanks for the info. Things are also a lot safer if you only wrap vectors as const; it's exposing > the mutators to Python that gets really difficult. > Is the const-ness of a wrapped object just a side effect of not exposing methods which modify it or is it due to something else more explicit ? thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From grant.tang at gmail.com Tue Aug 16 21:43:10 2011 From: grant.tang at gmail.com (Grant Tang) Date: Tue, 16 Aug 2011 14:43:10 -0500 Subject: [C++-sig] question about implicit type conversion of vector to a custom type Message-ID: This is a question about using implicitly_convertible to convert vector to another custom class type. I already have boost python type conversion code to convert Python list/tuple to c++ vector, which works fine. There is another custom class EMObject, which is a thin wrapper to many builit-in type. It takes a built-in type like int, float, string, and vector as a single constructor argument and return type. So those built-in type can convert to and from EMObject implicitly. The functions in c++ which takes this EMObject as an argument are expose to Python. Now I need call these functions in Python with argument from Python list or tuple. Since I already convert python list/tuple to c++ vector, I just need declare implicit type conversion from vector to EMObject like following: implicitly_convertible, EMAN::EMObject>(); implicitly_convertible, EMAN::EMObject>(); implicitly_convertible, EMAN::EMObject>(); This seems a perfect solution to my question. But unfortunately I find out only the first one working, regardless which one is the first. Could anybody tell me why this happen? -- Grant -------------- next part -------------- An HTML attachment was scrubbed... URL: From talljimbo at gmail.com Tue Aug 16 22:41:06 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 16 Aug 2011 13:41:06 -0700 Subject: [C++-sig] wrapped std::vector slice affecting items in a python list In-Reply-To: References: <1313148516764-3738972.post@n4.nabble.com> <4E457F72.3020004@gmail.com> <1313505161303-3747397.post@n4.nabble.com> <4E4AAA67.4060909@gmail.com> Message-ID: <4E4AD5E2.9070500@gmail.com> On 08/16/2011 11:29 AM, Babak Khataee wrote: > Okay thanks for the info. > > Things are also a lot safer if you only wrap vectors as const; it's > exposing the mutators to Python that gets really difficult. > > > Is the const-ness of a wrapped object just a side effect of not exposing > methods which modify it or is it due to something else more explicit ? The former. It's really only the methods that can invalidate iterators (i.e. those that add or remove elements) that you need to worry about. Non-const access to individual elements is totally safe on its own. 
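To make that concrete, a minimal sketch (type and module names invented, untested) of exposing only read access to a member std::vector, without the indexing suite at all:

#include <boost/python.hpp>
#include <vector>

namespace bp = boost::python;

// Hypothetical C++ class owning a vector of doubles.
struct Curve {
    std::vector<double> samples;
};

std::size_t curve_len(Curve const & c) { return c.samples.size(); }

double curve_getitem(Curve const & c, std::size_t i) {
    if (i >= c.samples.size()) {
        PyErr_SetString(PyExc_IndexError, "sample index out of range");
        bp::throw_error_already_set();
    }
    return c.samples[i];
}

BOOST_PYTHON_MODULE(curve_example) {
    // Only read access is exposed: no append/insert/erase, so Python code
    // can never trigger a reallocation that invalidates iterators or
    // references held on the C++ side.
    bp::class_<Curve>("Curve")
        .def("__len__", &curve_len)
        .def("__getitem__", &curve_getitem);
}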
Jim From talljimbo at gmail.com Tue Aug 16 22:58:01 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 16 Aug 2011 13:58:01 -0700 Subject: [C++-sig] question about implicit type conversion of vector to a custom type In-Reply-To: References: Message-ID: <4E4AD9D9.9000501@gmail.com> On 08/16/2011 12:43 PM, Grant Tang wrote: > This is a question about using implicitly_convertible to convert vector > to another custom class type. > > I already have boost python type conversion code to convert Python > list/tuple to c++ vector, which works fine. There is another custom > class EMObject, which is a thin wrapper to many builit-in type. It takes > a built-in type like int, float, string, and vector as a single > constructor argument and return type. So those built-in type can convert > to and from EMObject implicitly. The functions in c++ which takes this > EMObject as an argument are expose to Python. Now I need call these > functions in Python with argument from Python list or tuple. Since I > already convert python list/tuple to c++ vector, I just need declare > implicit type conversion from vector to EMObject like following: > > implicitly_convertible, EMAN::EMObject>(); > implicitly_convertible, EMAN::EMObject>(); > implicitly_convertible, EMAN::EMObject>(); > > This seems a perfect solution to my question. But unfortunately I find > out only the first one working, regardless which one is the first. Could > anybody tell me why this happen? > Unfortunately, there's no "best-match" type checking in Boost.Python; when trying to convert a Python object to a particular C++ type, it simply runs over a list of all the converters registered (using RTTI) for that C++ type. Each of those converters then gets a chance to check whether it "matches" the Python object; if it thinks it does, none of the other converters get tried. I'd guess that's what's happening to you - implicitly_convertible doesn't really do enough work in that match stage, so the first conversion you've registered says "I match!" and then fails by throwing an exception. The fix might be in how you've written your vector<> conversions; when given a list or tuple that contains the wrong element type, they need to report that they "don't match", rather than matching any list or tuple and then throwing an exception when the element type is incorrect. Good luck! Jim Bosch From grant.tang at gmail.com Wed Aug 17 07:19:36 2011 From: grant.tang at gmail.com (Grant Tang) Date: Wed, 17 Aug 2011 00:19:36 -0500 Subject: [C++-sig] question about implicit type conversion of vector to a custom type In-Reply-To: <4E4AD9D9.9000501@gmail.com> References: <4E4AD9D9.9000501@gmail.com> Message-ID: "Jim Bosch" wrote in message news:4E4AD9D9.9000501 at gmail.com... On 08/16/2011 12:43 PM, Grant Tang wrote: > This is a question about using implicitly_convertible to convert vector > to another custom class type. > > I already have boost python type conversion code to convert Python > list/tuple to c++ vector, which works fine. There is another custom > class EMObject, which is a thin wrapper to many builit-in type. It takes > a built-in type like int, float, string, and vector as a single > constructor argument and return type. So those built-in type can convert > to and from EMObject implicitly. The functions in c++ which takes this > EMObject as an argument are expose to Python. Now I need call these > functions in Python with argument from Python list or tuple. 
Since I > already convert python list/tuple to c++ vector, I just need declare > implicit type conversion from vector to EMObject like following: > > implicitly_convertible, EMAN::EMObject>(); > implicitly_convertible, EMAN::EMObject>(); > implicitly_convertible, EMAN::EMObject>(); > > This seems a perfect solution to my question. But unfortunately I find > out only the first one working, regardless which one is the first. Could > anybody tell me why this happen? > Unfortunately, there's no "best-match" type checking in Boost.Python; when trying to convert a Python object to a particular C++ type, it simply runs over a list of all the converters registered (using RTTI) for that C++ type. Each of those converters then gets a chance to check whether it "matches" the Python object; if it thinks it does, none of the other converters get tried. I'd guess that's what's happening to you - implicitly_convertible doesn't really do enough work in that match stage, so the first conversion you've registered says "I match!" and then fails by throwing an exception. The fix might be in how you've written your vector<> conversions; when given a list or tuple that contains the wrong element type, they need to report that they "don't match", rather than matching any list or tuple and then throwing an exception when the element type is incorrect. Good luck! Jim Bosch Thank you for your reply, Jim. My vector conversion code is pretty simple and straightforward. This is the conversion class in typeconversion.h file: template struct vector_from_python { vector_from_python() { python::converter::registry::push_back(&convertible, &construct, python::type_id >()); } static void* convertible(PyObject* obj_ptr) { if (!(PyList_Check(obj_ptr) || PyTuple_Check(obj_ptr) || PyIter_Check(obj_ptr) || PyRange_Check(obj_ptr))) { return 0; } return obj_ptr; } static void construct(PyObject* obj_ptr, python::converter::rvalue_from_python_stage1_data* data) { void* storage = ((python::converter::rvalue_from_python_storage >*) data)->storage.bytes; new (storage) vector(); data->convertible = storage; vector& result = *((vector*) storage); python::handle<> obj_iter(PyObject_GetIter(obj_ptr)); while(1) { python::handle<> py_elem_hdl(python::allow_null(PyIter_Next(obj_iter.get()))); if (PyErr_Occurred()) { python::throw_error_already_set(); } if (!py_elem_hdl.get()) { break; } python::object py_elem_obj(py_elem_hdl); python::extract elem_proxy(py_elem_obj); result.push_back(elem_proxy()); } } }; I specialize this template with different type in BOOST_PYTHON_MODULE: vector_from_python(); vector_from_python(); vector_from_python(); This solution looks like work fine for converting list/tuple to vector<>. But I got this problem when need implicitly convert vector<> to EMObject. Do you mean I should write a conversion class for each type of vector instead of use a template class? Grant From babakage at gmail.com Wed Aug 17 20:22:11 2011 From: babakage at gmail.com (Babak Khataee) Date: Wed, 17 Aug 2011 19:22:11 +0100 Subject: [C++-sig] wrapped std::vector slice affecting items in a python list In-Reply-To: <4E4AD5E2.9070500@gmail.com> References: <1313148516764-3738972.post@n4.nabble.com> <4E457F72.3020004@gmail.com> <1313505161303-3747397.post@n4.nabble.com> <4E4AAA67.4060909@gmail.com> <4E4AD5E2.9070500@gmail.com> Message-ID: okay cool. I think that's a reasonable compromise. It should also (hopefully) make it possible to move away from the indexing suite as well. thanks for your help! 
On 16 August 2011 21:41, Jim Bosch wrote: > On 08/16/2011 11:29 AM, Babak Khataee wrote: > >> Okay thanks for the info. >> >> Things are also a lot safer if you only wrap vectors as const; it's >> exposing the mutators to Python that gets really difficult. >> >> >> Is the const-ness of a wrapped object just a side effect of not exposing >> methods which modify it or is it due to something else more explicit ? >> > > The former. It's really only the methods that can invalidate iterators > (i.e. those that add or remove elements) that you need to worry about. > Non-const access to individual elements is totally safe on its own. > > Jim > -------------- next part -------------- An HTML attachment was scrubbed... URL: From grant.tang at gmail.com Thu Aug 18 18:37:21 2011 From: grant.tang at gmail.com (Grant Tang) Date: Thu, 18 Aug 2011 11:37:21 -0500 Subject: [C++-sig] question about implicit type conversion of vector to a custom type In-Reply-To: <4E4AD9D9.9000501@gmail.com> References: <4E4AD9D9.9000501@gmail.com> Message-ID: "Jim Bosch" wrote in message news:4E4AD9D9.9000501 at gmail.com... >Unfortunately, there's no "best-match" type checking in Boost.Python; when >trying to convert a Python object to a particular C++ type, it simply runs >over a list of all the converters registered (using RTTI) for that C++ >type. > >Each of those converters then gets a chance to check whether it "matches" >the Python object; if it thinks it does, none of the other converters get >tried. > >I'd guess that's what's happening to you - implicitly_convertible doesn't >really do enough work in that match stage, so the first conversion you've >registered says "I match!" and then fails by throwing an exception. > >The fix might be in how you've written your vector<> conversions; when >given a list or tuple that contains the wrong element type, they need to >report that they "don't match", rather than matching any list or tuple and >then throwing an exception when the element type is incorrect. > >Good luck! > >Jim Bosch I change my vector<> conversion(see it in my last reply) to add specialization for int, float, string etc. I added type check in convertible() function: template <> struct vector_from_python { vector_from_python() { python::converter::registry::push_back(&convertible, &construct, python::type_id >()); } static void* convertible(PyObject* obj_ptr) { if (!(PyList_Check(obj_ptr) || PyTuple_Check(obj_ptr) || PyIter_Check(obj_ptr) || PyRange_Check(obj_ptr))) { return 0; } PyObject * first_obj = PyObject_GetItem(obj_ptr, PyInt_FromLong(0)); if( !PyObject_TypeCheck(first_obj, &PyFloat_Type) ) { return 0; } return obj_ptr; } static void construct(PyObject* obj_ptr, python::converter::rvalue_from_python_stage1_data* data) { void* storage = ((python::converter::rvalue_from_python_storage >*) data)->storage.bytes; new (storage) vector(); data->convertible = storage; vector& result = *((vector*) storage); python::handle<> obj_iter(PyObject_GetIter(obj_ptr)); while(1) { python::handle<> py_elem_hdl(python::allow_null(PyIter_Next(obj_iter.get()))); if (PyErr_Occurred()) { python::throw_error_already_set(); } if (!py_elem_hdl.get()) { break; } python::object py_elem_obj(py_elem_hdl); python::extract elem_proxy(py_elem_obj); result.push_back(elem_proxy()); } } }; This time the implicit type conversion works perfectly. But I got a new problem: the memory leak! The memory leak happens only for float type, whenever I convert the float list in python to vector of float in c++, the memory for float list is leaked. 
I put the call in a function, and called the gc.collect() after exit the function, the memory is still not recycled. Why is the memory of the python list is not freed after exit the scope? Grant From talljimbo at gmail.com Thu Aug 18 19:44:02 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Thu, 18 Aug 2011 10:44:02 -0700 Subject: [C++-sig] question about implicit type conversion of vector to a custom type In-Reply-To: References: <4E4AD9D9.9000501@gmail.com> Message-ID: <4E4D4F62.8020904@gmail.com> On 08/18/2011 09:37 AM, Grant Tang wrote: > > I change my vector<> conversion(see it in my last reply) to add > specialization for int, float, string etc. > I added type check in convertible() function: > Sorry I didn't reply to your last email earlier, but it looks like you're on the right track. Here's a slightly modified version of your convertible function: static void* convertible(PyObject* obj_ptr) { // Unfortunately, this no longer works on pure iterators, because // they don't necessarily support "obj[0]". // PySequence_Check should work on lists, tuples, and other // things that support __getitem__ with integer arguments. if (!(PySequence_Check(obj_ptr)) { return 0; } // Succeed if there are no items in the sequence. if (PySequence_Size(obj_ptr) == 0) { return obj_ptr; } // PySequence_GetItem takes an int directly; otherwise you'd need // to DECREF the int object you pass to PyObject_GetItem as well. PyObject * first_obj = PySequence_GetItem(obj_ptr, 0); if( !PyObject_TypeCheck(first_obj, &PyFloat_Type) ) { Py_DECREF(first_obj); // avoid memory leaks on failure return 0; } Py_DECREF(first_obj); // avoid memory leaks on success return obj_ptr; } > This time the implicit type conversion works perfectly. But I got a new > problem: the memory leak! > The memory leak happens only for float type, whenever I convert the > float list in python to vector > of float in c++, the memory for float list is leaked. I put the call in > a function, and called the gc.collect() > after exit the function, the memory is still not recycled. > > Why is the memory of the python list is not freed after exit the scope? > It's the missing Py_DECREF calls; it's only because of a Python implementation detail that you didn't have leaks with the other types. The Python C-API requires you to call Py_DECREF whenever you're done with an object. Alternately, you can put your PyObject pointers in bp::handle<> or bp::object, and they'll be managed automatically. Jim From grant.tang at gmail.com Thu Aug 18 23:52:27 2011 From: grant.tang at gmail.com (Grant Tang) Date: Thu, 18 Aug 2011 16:52:27 -0500 Subject: [C++-sig] question about implicit type conversion of vector to a custom type In-Reply-To: <4E4D4F62.8020904@gmail.com> References: <4E4AD9D9.9000501@gmail.com> <4E4D4F62.8020904@gmail.com> Message-ID: "Jim Bosch" wrote in message news:4E4D4F62.8020904 at gmail.com... On 08/18/2011 09:37 AM, Grant Tang wrote: > > I change my vector<> conversion(see it in my last reply) to add > specialization for int, float, string etc. > I added type check in convertible() function: > Sorry I didn't reply to your last email earlier, but it looks like you're on the right track. Here's a slightly modified version of your convertible function: static void* convertible(PyObject* obj_ptr) { // Unfortunately, this no longer works on pure iterators, because // they don't necessarily support "obj[0]". // PySequence_Check should work on lists, tuples, and other // things that support __getitem__ with integer arguments. 
if (!(PySequence_Check(obj_ptr)) { return 0; } // Succeed if there are no items in the sequence. if (PySequence_Size(obj_ptr) == 0) { return obj_ptr; } // PySequence_GetItem takes an int directly; otherwise you'd need // to DECREF the int object you pass to PyObject_GetItem as well. PyObject * first_obj = PySequence_GetItem(obj_ptr, 0); if( !PyObject_TypeCheck(first_obj, &PyFloat_Type) ) { Py_DECREF(first_obj); // avoid memory leaks on failure return 0; } Py_DECREF(first_obj); // avoid memory leaks on success return obj_ptr; } > This time the implicit type conversion works perfectly. But I got a new > problem: the memory leak! > The memory leak happens only for float type, whenever I convert the > float list in python to vector > of float in c++, the memory for float list is leaked. I put the call in > a function, and called the gc.collect() > after exit the function, the memory is still not recycled. > > Why is the memory of the python list is not freed after exit the scope? > It's the missing Py_DECREF calls; it's only because of a Python implementation detail that you didn't have leaks with the other types. The Python C-API requires you to call Py_DECREF whenever you're done with an object. Alternately, you can put your PyObject pointers in bp::handle<> or bp::object, and they'll be managed automatically. Jim Bingo! This fixes my problem. Thank you very much Jim! Grant From strattonbrazil at gmail.com Thu Aug 25 07:07:39 2011 From: strattonbrazil at gmail.com (Josh Stratton) Date: Wed, 24 Aug 2011 22:07:39 -0700 Subject: [C++-sig] getting a list of strings from a static method Message-ID: I'm very new to boost python and trying to figure out how to properly get a list of strings from a static method in my main namespace. I'm not sure if I need to extract the list from the object or if it's already a list. boost::python::object list = exec("Foo.extensions()", _pyMainNamespace); // where Foo.extensions() is a static method returning a list of strings My end goal is to have a QList of QStrings, but if I can somehow get them to a boost list of std::strings I can take it from there. From super24bitsound at hotmail.com Thu Aug 25 13:17:34 2011 From: super24bitsound at hotmail.com (Jay Riley) Date: Thu, 25 Aug 2011 07:17:34 -0400 Subject: [C++-sig] Boost Python loss of values In-Reply-To: References: Message-ID: I'm having a really weird issue in boost python. I'm focusing on a particular property/method to simplify the example. Here's the situation: In my program, I have a class called Attack. 
With the following layout (simplified for example) class Attack : public Action { public: virtual int CalculateDamage(const std::vector& users, BattleCharacter* target, const std::vector& targets, BattleField *field); protected: bool Hit; } I exposed Attack to python, making it overridable, as follows: struct AttackWrapper : Game::Battles::Actions::Attack { int AttackWrapper::CalculateDamage(const std::vector& users, Game::Battles::BattleCharacter* target, const std::vector& targets, Game::Battles::BattleField *field) { return call_method(self, "CalculateDamage", users, ptr(target), targets, ptr(field)); } int AttackWrapper::CalculateDamageDefault(const std::vector& users, Game::Battles::BattleCharacter* target, const std::vector& targets, Game::Battles::BattleField *field) { return this->Attack::CalculateDamage(users, target, Targets, field); } } And the python exposing is done as follows: class_, bases >("Attack") .def("CalculateDamage", &AttackWrapper::CalculateDamageDefault); I initially thought everything was working fine, as I can override the `CalculateDamage` method within python and have it work correctly. However, When I want to use the normal `Attack->CalculateDamage`, the following happens: I only call `CalculateDamage` when Hit is true, and I can confirm via break point when I hit this line, Hit is true: return call_method(self, "CalculateDamage", users, ptr(target), targets, ptr(field)); Now, because I haven't overriden `CalculateDamage` in Python for this attack instance, it ends up resolving to `AttackWrapper::CalculateDamageDefault`. But by the time I enter AttackWrapper::CalculateDamageDefault, Hit is no longer true. That is, when I break on this line: return this->Attack::CalculateDamage(users, target, Targets, field); Hit is false. So somewhere between return call_method(self, "CalculateDamage", users, ptr(target), targets, ptr(field)); resolving to return this->Attack::CalculateDamage(users, target, Targets, field); my property's value is lost. I have no idea what could be causing this. Has anyone encountered something like this before? 
The attacks I'm using for testing are defined as follows: class ScriptedAttack(Attack): def __init__(self, Type, ID, Name, Flags, Targs = ActionTargets.Any, AllowTargettingOverride = False, Power = 0, MPCost = 0, SPCost = 0, Accuracy = 0.9, CritChance = 0.1, DefineOwnUse = False, EleWeights = None, StatusEffectChances = None): if (EleWeights == None and StatusEffectChances == None): Attack.__init__(self, Type, ID, Name, Flags, Targs, AllowTargettingOVerride, Power, MPCost, SPCost, Accuracy, CritChance, DefineOwnUse) else: Elemap = ElementMap() if (EleWeights != None): for Element, Weight in EleWeights.iteritems(): Elemap[Element] = Weight SEChances = SEChanceMap() if (StatusEffectChances != None): for StatusEffect, Chance in StatusEffectChances.iteritems(): SEChances[StatusEffect] = Chance Attack.__init__(self, Type, ID, Name, Flags, Elemap, Targs, AllowTargettingOverride, Power, MPCost, SPCost, Accuracy, CritChance, DefineOwnUse, SEChances) def Clone(self): return copy.deepcopy(self) Fire = ScriptedAttack(ActionType.MagicAction, PrimaryEngine.GetUID(), "Fire", AttackFlags.Projectile | AttackFlags.Elemental, ActionTargets.Any, True, 32, 14, 0, 1.0, 0.1, False, {Elements.Fire: 1.0}) Fira = ScriptedAttack(ActionType.MagicAction, PrimaryEngine.GetUID(), "Fira", AttackFlags.Projectile | AttackFlags.Elemental, ActionTargets.Any, True, 63, 35, 0, 1.0, 0.1, False, {Elements.Fire: 1.0}) ActLibrary.AddAttack(Fire) ActLibrary.AddAttack(Fira) ActLibrary.AddAttack takes in a boost::shared_ptr and stores it into a hash table. I lookup the attack and use the instance stored there to do CalculateDamage. It almost seems like the object is being copied, but I have no idea why that'd be so. Any help would be appreciated. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From talljimbo at gmail.com Thu Aug 25 19:40:01 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Thu, 25 Aug 2011 10:40:01 -0700 Subject: [C++-sig] getting a list of strings from a static method In-Reply-To: References: Message-ID: <4E5688F1.8060306@gmail.com> On 08/24/2011 10:07 PM, Josh Stratton wrote: > I'm very new to boost python and trying to figure out how to properly > get a list of strings from a static method in my main namespace. I'm > not sure if I need to extract the list from the object or if it's > already a list. > > boost::python::object list = exec("Foo.extensions()", > _pyMainNamespace); // where Foo.extensions() is a static method > returning a list of strings > Something like this: namespace bp = boost::python; bp::object pyList = bp::exec("Foo.extensions()", _pyMainNamespace); std::vector vec(bp::len(pyList)); for (std::size_t i = 0; i < vec.size(); ++i) { vec[i] = bp::extract(pyList[i]); } > My end goal is to have a QList of QStrings, but if I can somehow get > them to a boost list of std::strings I can take it from there. I'll leave converting to Qt stuff up to you. Note that you can also do extract(), which may be more efficient if you have really large strings and don't want to copy them. Good Luck! 
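For completeness, the Qt end might look something like the sketch below. It is untested and assumes Qt's QString::fromStdString; it also uses bp::eval rather than exec, since eval returns the value of the expression:

#include <boost/python.hpp>
#include <QList>
#include <QString>
#include <string>

namespace bp = boost::python;

// Sketch: evaluate Foo.extensions() in an already-prepared __main__ namespace
// and convert the resulting list of strings to QList<QString>.
// "_pyMainNamespace" is assumed to be the globals dict used elsewhere.
QList<QString> extensionsAsQStrings(bp::object const & _pyMainNamespace) {
    bp::object pyList = bp::eval("Foo.extensions()", _pyMainNamespace);
    QList<QString> result;
    long n = bp::len(pyList);
    for (long i = 0; i < n; ++i) {
        std::string s = bp::extract<std::string>(pyList[i]);
        result.append(QString::fromStdString(s));
    }
    return result;
}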
Jim Bosch From talljimbo at gmail.com Thu Aug 25 22:18:04 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Thu, 25 Aug 2011 13:18:04 -0700 Subject: [C++-sig] Boost Python loss of values In-Reply-To: References: Message-ID: <4E56ADFC.3030701@gmail.com> On 08/25/2011 04:17 AM, Jay Riley wrote: > > And the python exposing is done as follows: > > class_, bases > >("Attack") > .def("CalculateDamage", &AttackWrapper::CalculateDamageDefault); > This bit looks a little suspect, and I'm surprised that it compiles - class_ should only take 4 arguments if one of them is boost::noncopyable. I think you mean: class_< Attack, boost::shared_ptr, bases > (...) See http://www.boost.org/doc/libs/1_47_0/libs/python/doc/v2/class.html for details of the arguments to class_. I don't have a good idea as to why this would cause the problem you're seeing (maybe you're slicing your AttackWrapper instances into Attack instances?) but I'd recommend fixing it first. Good Luck! Jim Bosch From talljimbo at gmail.com Thu Aug 25 22:59:18 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Thu, 25 Aug 2011 13:59:18 -0700 Subject: [C++-sig] New Major-Release Boost.Python Development Message-ID: <4E56B7A6.2030008@gmail.com> I'd like to start work on a new major release of Boost.Python. While the library is currently well-maintained in terms of bugfixes, I get the sense that neither the original developers nor the current maintainer have the time or inclination to work on new features. I'd also like to propose some changes that are slightly backwards-incompatible, as well as some that mess with the internals to an extent that I'd feel better about doing it outside Boost itself, to make it easier for adventurous users to play with the new version without affecting people who depend on having an extremely stable library in Boost. To that end, I'm inclined to copy the library to somewhere else (possibly the boost sandbox, but more likely a separate site), work on it, produce some minor releases, and re-submit it to Boost for review. Perhaps the external site would continue on as the home of more fine-grained releases, or maybe we would fully reintegrate with Boost at that point (especially if Boost addresses some of its own project management and release control issues by that point, which I know is being discussed but to my knowledge doesn't really have a timeline yet). I am willing to take the lead on this project; I have a number of features that exist as extensions in the boost sandbox already that would work better if they could be more fully integrated into the Boost.Python core, and I think I have the necessary understanding of the full code base to coordinate things. I'd like to save a full discussion of what features a new version would include for another thread, but I am hoping other people on the list might volunteer some time to work on aspects they have coded up elsewhere - I know many such extensions exist. So I have a few questions for anyone who's paying attention: - For the original Boost.Python developers and current maintainers, and other people familiar with developing Boost libraries: do you have any preference on how to approach this? I don't want to step on any toes, especially toes attached to people who are responsible for the excellent library we already have. - For other Boost.Python experts on this list: do you have existing code or development time you'd like to contribute? Thanks! 
Jim Bosch From stefan at seefeld.name Thu Aug 25 23:25:52 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Thu, 25 Aug 2011 17:25:52 -0400 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: <4E56B7A6.2030008@gmail.com> References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E56BDE0.2050208@seefeld.name> On 08/25/2011 04:59 PM, Jim Bosch wrote: > > To that end, I'm inclined to copy the library to somewhere else > (possibly the boost sandbox, but more likely a separate site), work on > it, produce some minor releases, and re-submit it to Boost for review. > Perhaps the external site would continue on as the home of more > fine-grained releases, or maybe we would fully reintegrate with Boost > at that point (especially if Boost addresses some of its own project > management and release control issues by that point, which I know is > being discussed but to my knowledge doesn't really have a timeline yet). Jim, this is an interesting idea. There has been lots of general (dare I say generic ?) discussion concerning process improvements (which unfortunately most of the time diverted into tool discussions). Among the fundamental issues is a modularization of boost. I think it would be great if boost.python could follow through on its own, by becoming a separate entity. Thus, I'm fully supportive of such a move. > - For the original Boost.Python developers and current maintainers, > and other people familiar with developing Boost libraries: do you have > any preference on how to approach this? I don't want to step on any > toes, especially toes attached to people who are responsible for the > excellent library we already have. I think branching off and moving to a separate site seems a fair choice. It would be great to "come back" under the "Boost.org" umbrella once that will be possible, i.e. once the Boost.org structure allows for that. (In the sake of progress I'm refraining from voicing my personal preferences for tools or hosting sites. I'm sure I can learn and adapt to whatever gets agreed on.) > > - For other Boost.Python experts on this list: do you have existing > code or development time you'd like to contribute? I'd definitely like to help. I have a wishlist of my own for improvements that I'd like to see, and which I'd be happy to share and work on. Thanks, Stefan -- ...ich hab' noch einen Koffer in Berlin... From rwgrosse-kunstleve at lbl.gov Fri Aug 26 01:26:52 2011 From: rwgrosse-kunstleve at lbl.gov (Ralf Grosse-Kunstleve) Date: Thu, 25 Aug 2011 16:26:52 -0700 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: <4E56B7A6.2030008@gmail.com> References: <4E56B7A6.2030008@gmail.com> Message-ID: Hi Jim, CC to Dave. This is great news. My main interests have been stability and not increasing the memory footprint of boost.python extensions. I'm not in a position to further develop boost.python. Troy and Ravi have done a significant amount of work. I hope they will comment for themselves. I'd prefer if developments stayed under the boost umbrella, e.g. as boost/python/v3, but I don't feel very strongly about this. Ralf On Thu, Aug 25, 2011 at 1:59 PM, Jim Bosch wrote: > I'd like to start work on a new major release of Boost.Python. While the > library is currently well-maintained in terms of bugfixes, I get the sense > that neither the original developers nor the current maintainer have the > time or inclination to work on new features. 
I'd also like to propose some > changes that are slightly backwards-incompatible, as well as some that mess > with the internals to an extent that I'd feel better about doing it outside > Boost itself, to make it easier for adventurous users to play with the new > version without affecting people who depend on having an extremely stable > library in Boost. > > To that end, I'm inclined to copy the library to somewhere else (possibly > the boost sandbox, but more likely a separate site), work on it, produce > some minor releases, and re-submit it to Boost for review. Perhaps the > external site would continue on as the home of more fine-grained releases, > or maybe we would fully reintegrate with Boost at that point (especially if > Boost addresses some of its own project management and release control > issues by that point, which I know is being discussed but to my knowledge > doesn't really have a timeline yet). > > I am willing to take the lead on this project; I have a number of features > that exist as extensions in the boost sandbox already that would work better > if they could be more fully integrated into the Boost.Python core, and I > think I have the necessary understanding of the full code base to coordinate > things. I'd like to save a full discussion of what features a new version > would include for another thread, but I am hoping other people on the list > might volunteer some time to work on aspects they have coded up elsewhere - > I know many such extensions exist. > > So I have a few questions for anyone who's paying attention: > > - For the original Boost.Python developers and current maintainers, and > other people familiar with developing Boost libraries: do you have any > preference on how to approach this? I don't want to step on any toes, > especially toes attached to people who are responsible for the excellent > library we already have. > > - For other Boost.Python experts on this list: do you have existing code or > development time you'd like to contribute? > > > Thanks! > > Jim Bosch > > ______________________________**_________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/**mailman/listinfo/cplusplus-sig > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndbecker2 at gmail.com Fri Aug 26 13:17:49 2011 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 26 Aug 2011 07:17:49 -0400 Subject: [C++-sig] New Major-Release Boost.Python Development References: <4E56B7A6.2030008@gmail.com> Message-ID: What sort of improvements did you have in mind? From super24bitsound at hotmail.com Fri Aug 26 17:27:51 2011 From: super24bitsound at hotmail.com (Jay Riley) Date: Fri, 26 Aug 2011 11:27:51 -0400 Subject: [C++-sig] Boost Python loss of values In-Reply-To: <4E56ADFC.3030701@gmail.com> References: , <4E56ADFC.3030701@gmail.com> Message-ID: Hi Jim, Thanks for the suggestion, unfortunately it didn't work. It really feels like it's making a copy for some reason as once I return to the int AttackWrapper::CalculateDamage(const std::vector& users, Game::Battles::BattleCharacter* target, const std::vector& targets, Game::Battles::BattleField *field) { return call_method(self, "CalculateDamage", users, ptr(target), targets, ptr(field)); } function, the value are back to their expected value. Slicing wouldn't be a problem here would it, since Hit is a member of the base class anyways? 
> Date: Thu, 25 Aug 2011 13:18:04 -0700 > From: talljimbo at gmail.com > To: cplusplus-sig at python.org > CC: super24bitsound at hotmail.com > Subject: Re: [C++-sig] Boost Python loss of values > > On 08/25/2011 04:17 AM, Jay Riley wrote: > > > > > And the python exposing is done as follows: > > > > class_, bases > > >("Attack") > > .def("CalculateDamage", &AttackWrapper::CalculateDamageDefault); > > > > This bit looks a little suspect, and I'm surprised that it compiles - > class_ should only take 4 arguments if one of them is boost::noncopyable. > > I think you mean: > > class_< Attack, boost::shared_ptr, bases > > (...) > > See > > http://www.boost.org/doc/libs/1_47_0/libs/python/doc/v2/class.html > > for details of the arguments to class_. > > I don't have a good idea as to why this would cause the problem you're > seeing (maybe you're slicing your AttackWrapper instances into Attack > instances?) but I'd recommend fixing it first. > > Good Luck! > > Jim Bosch -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan at seefeld.name Fri Aug 26 17:41:31 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Fri, 26 Aug 2011 11:41:31 -0400 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E57BEAB.7080205@seefeld.name> On 08/26/2011 07:17 AM, Neal Becker wrote: > What sort of improvements did you have in mind? Two things on my list that are likely going to be somewhat disruptive are: * Support for subclassing boost.python's own metaclass. * A per-module type registry, to avoid conflicting converters in multi-module projects. Stefan -- ...ich hab' noch einen Koffer in Berlin... From s_sourceforge at nedprod.com Fri Aug 26 19:28:09 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Fri, 26 Aug 2011 18:28:09 +0100 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: <4E56B7A6.2030008@gmail.com> References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E57D7A9.2588.308DDB27@s_sourceforge.nedprod.com> On 25 Aug 2011 at 13:59, Jim Bosch wrote: > - For other Boost.Python experts on this list: do you have existing code > or development time you'd like to contribute? Firstly, I must commend you as you're a better man than I for initiating this. I mostly chase money these past few years, and I don't like myself for it. About a year ago I promised someone in here I'd refactor the BPL interface to native code to support calling a list of functors every time BPL goes in or out of native code. This would allow the Python GIL to be released or reacquired, or indeed many other useful things e.g. benchmarking. At the time I had a free window of about a month in the near future, so I thought my promise worth giving. Unfortunately, the project I was working on at the time had unexpected problems, and the month got gobbled, so I broke my promise. I won't make that same mistake this time, not least because I have a packed schedule for the upcoming year. I might also add that the number of people willing to fund development in BPL appears to have dropped off a cliff since 2008, so that doesn't help focus minds either. Maybe the release of C++1x might reinvigorate things hopefully. Nevertheless, I'd like to keep informed. If you put it on github I'll stick a watch on it :) BTW, how has the Python3k port worked out? I'm not sure if it's been mainlined yet has it? Best of luck! Niall -- Technology & Consulting Services - ned Productions Limited. 
http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909. From stefan at seefeld.name Fri Aug 26 19:59:51 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Fri, 26 Aug 2011 13:59:51 -0400 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: <4E57D7A9.2588.308DDB27@s_sourceforge.nedprod.com> References: <4E56B7A6.2030008@gmail.com> <4E57D7A9.2588.308DDB27@s_sourceforge.nedprod.com> Message-ID: <4E57DF17.90208@seefeld.name> On 08/26/2011 01:28 PM, Niall Douglas wrote: > BTW, how has the Python3k port worked out? I'm not sure if it's been > mainlined yet has it? Best of luck! Niall I'm not using Python 3k myself, so I can't comment, but the P3K port most definitely went into trunk and has been part of the last couple of releases. Stefan -- ...ich hab' noch einen Koffer in Berlin... From talljimbo at gmail.com Fri Aug 26 20:25:13 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 26 Aug 2011 11:25:13 -0700 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E57E509.70007@gmail.com> On 08/26/2011 04:17 AM, Neal Becker wrote: > What sort of improvements did you have in mind? My list includes: - Propagating constness to Python (essentially already done as an extension, but it could have a much nicer interface if I could mess with class_ itself). - Make custom registry and template-based conversions more accessible. The former may just need more documentation, but the rvalue converters in particular don't seem to have been intended as part of the public API originally, and I think they're an important part of the library. Template-based conversions are even more buried in the details - you have to specialize five or six classes to get it working. I'd like to make it possible to create a template-based conversion by (partial) specializing just one traits class. - Automatic conversions for newer boost libraries (Fusion, Pointer Container, and Filesystem are at the top of my list), and more for the STL and iostreams standard libraries. I'd like to integrate the indexing suite (v2) into Boost.Python proper. - Allow Boost.Python wrapped classes to inherit from Python classes. - An actual "boost.python" Python module to make exceptions and other types used in wrappers easily accessible from Python. - Some limited degree of priority-based overload matching. Not sure how best to approach this one yet, though. Jim From talljimbo at gmail.com Fri Aug 26 21:01:18 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 26 Aug 2011 12:01:18 -0700 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E57ED7E.6040305@gmail.com> On 08/25/2011 04:26 PM, Ralf Grosse-Kunstleve wrote: > Hi Jim, > > CC to Dave. > > This is great news. > My main interests have been stability and not increasing the memory > footprint of boost.python extensions. I'm not in a position to further > develop boost.python. > Troy and Ravi have done a significant amount of work. I hope they will > comment for themselves. > I'd prefer if developments stayed under the boost umbrella, e.g. as > boost/python/v3, but I don't feel very strongly about this. > Thanks! I'd have no problem at all calling it boost/python/v3 (in fact I'd hope to). 
Essentially, my concern is that v3 would fall into a muddy category between an accepted Boost library and a proposed Boost library, and I don't have a good example of how that ought to work, with regard to whether it should exist in the Boost main repository, or even whether it can even use the Boost label. And I would like to have both v2 and v3 available simultaneously as distinct libraries (the way Python 2 and Python 3 are, for instance), at least while v3 is undergoing lots of changes. I'm not sure how to fit that model into the Boost umbrella - I'm happy to do it, but I guess I'm hoping for someone to tell me how it should fit; I don't want to presume that v3 is automatically a Boost library without permission, so it seemed safer to move it outside until it could officially win back its Boost status through review. Jim > > On Thu, Aug 25, 2011 at 1:59 PM, Jim Bosch > wrote: > > I'd like to start work on a new major release of Boost.Python. > While the library is currently well-maintained in terms of > bugfixes, I get the sense that neither the original developers nor > the current maintainer have the time or inclination to work on new > features. I'd also like to propose some changes that are slightly > backwards-incompatible, as well as some that mess with the internals > to an extent that I'd feel better about doing it outside Boost > itself, to make it easier for adventurous users to play with the new > version without affecting people who depend on having an extremely > stable library in Boost. > > To that end, I'm inclined to copy the library to somewhere else > (possibly the boost sandbox, but more likely a separate site), work > on it, produce some minor releases, and re-submit it to Boost for > review. Perhaps the external site would continue on as the home of > more fine-grained releases, or maybe we would fully reintegrate with > Boost at that point (especially if Boost addresses some of its own > project management and release control issues by that point, which I > know is being discussed but to my knowledge doesn't really have a > timeline yet). > > I am willing to take the lead on this project; I have a number of > features that exist as extensions in the boost sandbox already that > would work better if they could be more fully integrated into the > Boost.Python core, and I think I have the necessary understanding of > the full code base to coordinate things. I'd like to save a full > discussion of what features a new version would include for another > thread, but I am hoping other people on the list might volunteer > some time to work on aspects they have coded up elsewhere - I know > many such extensions exist. > > So I have a few questions for anyone who's paying attention: > > - For the original Boost.Python developers and current maintainers, > and other people familiar with developing Boost libraries: do you > have any preference on how to approach this? I don't want to step > on any toes, especially toes attached to people who are responsible > for the excellent library we already have. > > - For other Boost.Python experts on this list: do you have existing > code or development time you'd like to contribute? > > > Thanks! 
> > Jim Bosch > > _________________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/__mailman/listinfo/cplusplus-sig > > > > > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From talljimbo at gmail.com Fri Aug 26 22:00:11 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 26 Aug 2011 13:00:11 -0700 Subject: [C++-sig] Boost Python loss of values In-Reply-To: References: , <4E56ADFC.3030701@gmail.com> Message-ID: <4E57FB4B.6080404@gmail.com> On 08/26/2011 08:27 AM, Jay Riley wrote: > Hi Jim, > > Thanks for the suggestion, unfortunately it didn't work. It really feels > like it's making a copy for some reason as once I return to the > > int AttackWrapper::CalculateDamage(const > std::vector& users, > Game::Battles::BattleCharacter* target, const > std::vector& targets, Game::Battles::BattleField > *field) > { > return call_method(self, "CalculateDamage", users, ptr(target), > targets, ptr(field)); > } > > function, the value are back to their expected value. Slicing wouldn't > be a problem here would it, since Hit is a member of the base class anyways? > Yes, that's right. What exactly is "self", above? I assume it's a data member of AttackWrapper of type PyObject *, which shouldn't produce a copy, but if it's something else, well, that could be a factor. Jim > > Date: Thu, 25 Aug 2011 13:18:04 -0700 > > From: talljimbo at gmail.com > > To: cplusplus-sig at python.org > > CC: super24bitsound at hotmail.com > > Subject: Re: [C++-sig] Boost Python loss of values > > > > On 08/25/2011 04:17 AM, Jay Riley wrote: > > > > > > > > And the python exposing is done as follows: > > > > > > class_, bases > > > >("Attack") > > > .def("CalculateDamage", &AttackWrapper::CalculateDamageDefault); > > > > > > > This bit looks a little suspect, and I'm surprised that it compiles - > > class_ should only take 4 arguments if one of them is boost::noncopyable. > > > > I think you mean: > > > > class_< Attack, boost::shared_ptr, bases > > > (...) > > > > See > > > > http://www.boost.org/doc/libs/1_47_0/libs/python/doc/v2/class.html > > > > for details of the arguments to class_. > > > > I don't have a good idea as to why this would cause the problem you're > > seeing (maybe you're slicing your AttackWrapper instances into Attack > > instances?) but I'd recommend fixing it first. > > > > Good Luck! > > > > Jim Bosch > > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From dave at boostpro.com Fri Aug 26 22:09:39 2011 From: dave at boostpro.com (Dave Abrahams) Date: Fri, 26 Aug 2011 12:09:39 -0800 Subject: [C++-sig] New Major-Release Boost.Python Development References: <4E56B7A6.2030008@gmail.com> Message-ID: Trying to catch up here, so responding to everything all at once. on Thu Aug 25 2011, Jim Bosch wrote: Just how tall are you, Jimbo? > I'd like to start work on a new major release of Boost.Python. That certainly is welcome news. > While the library is currently well-maintained in terms of bugfixes, I > get the sense that neither the original developers nor the current > maintainer have the time or inclination to work on new features. Well, speaking for myself, mostly time. I'd be inclined to do a rewrite along the lines of the langbinding ideas if I had time. 
Another thing I think should be examined and refreshed is the documentation style. > I'd also like to propose some changes that are slightly > backwards-incompatible, as well as some that mess with the internals > to an extent that I'd feel better about doing it outside Boost itself, > to make it easier for adventurous users to play with the new version > without affecting people who depend on having an extremely stable > library in Boost. There's no need to do this "outside of Boost." A branch in the Boost repository is a perfect place for exploratory development that will eventually be released as part of Boost. > To that end, I'm inclined to copy the library to somewhere else > (possibly the boost sandbox, but more likely a separate site), work on > it, produce some minor releases, and re-submit it to Boost for > review. If you want to go through another review process, that's up to you. Getting review feedback definitely has its advantages. Please, though, use the sandbox or some other area of the Boost svn repository at least until we get my hoped-for Git transition completed. > Perhaps the external site would continue on as the home of more > fine-grained releases, or maybe we would fully reintegrate with Boost > at that point (especially if Boost addresses some of its own project > management and release control issues by that point, which I know is > being discussed but to my knowledge doesn't really have a timeline > yet). > > I am willing to take the lead on this project; Someone willing to take the lead is the most important part. > I have a number of features that exist as extensions in the boost > sandbox already that would work better if they could be more fully > integrated into the Boost.Python core, Good. Although, if I were you I would also carefully re-examine the core to see if it has the best design. > and I think I have the necessary understanding of the full code base > to coordinate things. Awesome. > I'd like to save a full discussion of what features a new version > would include for another thread, but I am hoping other people on the > list might volunteer some time to work on aspects they have coded up > elsewhere - I know many such extensions exist. > > So I have a few questions for anyone who's paying attention: > > - For the original Boost.Python developers and current maintainers, > and other people familiar with developing Boost libraries: do you have > any preference on how to approach this? I don't want to step on any > toes, especially toes attached to people who are responsible for the > excellent library we already have. See above. No toe-stepping concerns on my end. I have a preference that you build on the ideas we came up with for Boost.Langbinding, but certainly wouldn't insist on it. on Thu Aug 25 2011, Stefan Seefeld wrote: > > Jim, > > this is an interesting idea. There has been lots of general (dare I > say generic ?) discussion concerning process improvements (which > unfortunately most of the time diverted into tool discussions). Among > the fundamental issues is a modularization of boost. I think it would > be great if boost.python could follow through on its own, by becoming > a separate entity. Separate from Boost? I guess that's a possibility but I'm not sure I see the advantage. on Thu Aug 25 2011, Ralf Grosse-Kunstleve wrote: > Hi Jim, > > CC to Dave. > > This is great news. > My main interests have been stability and not increasing the memory > footprint of boost.python extensions. 
I'm not in a position to further > develop boost.python. > Troy and Ravi have done a significant amount of work. Yes, and hopefully integrating that could be part of any next steps. > I hope they will comment for themselves. > > I'd prefer if developments stayed under the boost umbrella, e.g. as > boost/python/v3, but I don't feel very strongly about this. Me too. We managed a transition from v1 to v2 within Boost, and I think we could do the same for v3. > On 08/26/2011 07:17 AM, Neal Becker wrote: >> What sort of improvements did you have in mind? > > Two things on my list that are likely going to be somewhat disruptive are: > > * Support for subclassing boost.python's own metaclass. Cool. > > * A per-module type registry, to avoid conflicting converters in > multi-module projects. Interesting idea. How does sharing types across multiple modules work in that scenario? on Fri Aug 26 2011, "Niall Douglas" wrote: > About a year ago I promised someone in here I'd refactor the BPL > interface to native code to support calling a list of functors every > time BPL goes in or out of native code. This would allow the Python > GIL to be released or reacquired, or indeed many other useful things > e.g. benchmarking. Sounds useful. on Fri Aug 26 2011, Jim Bosch wrote: > On 08/26/2011 04:17 AM, Neal Becker wrote: >> What sort of improvements did you have in mind? > > My list includes: > > - Propagating constness to Python (essentially already done as an > extension, but it could have a much nicer interface if I could mess > with class_ itself). Oooh, now that one is tricky. I'd like to see the design you have in mind. Python doesn't have constness; it has immutability, which is subtly different. > - Make custom registry and template-based conversions more > accessible. +1 > The former may just need more documentation, but the rvalue converters > in particular don't seem to have been intended as part of the public > API originally, and I think they're an important part of the > library. Template-based conversions are even more buried in the > details - you have to specialize five or six classes to get it > working. I'd like to make it possible to create a template-based > conversion by (partial) specializing just one traits class. Hmm, well, IIRC anything you do by pure partial specialization does not go in the registry and can't participate in cross-module communication, as it's entirely static. I guess we need a more formal separation of the two basic techniques for conversion, not to mention documentation, so people know to what they're opting in. > - Automatic conversions for newer boost libraries (Fusion, Pointer > Container, and Filesystem are at the top of my list), and more for the > STL and iostreams standard libraries. I'd like to integrate the > indexing suite (v2) into Boost.Python proper. Interesting and useful improvements. > - Allow Boost.Python wrapped classes to inherit from Python classes. +1 > - An actual "boost.python" Python module to make exceptions and other > types used in wrappers easily accessible from Python. Nice. > - Some limited degree of priority-based overload matching. Not sure > how best to approach this one yet, though. +1 This is a solved problem... just not in Boost.Python. Daniel Wallin worked it out for luabind and we were going to incorporate it into langbinding. Happy to discuss it further. 
-- Dave Abrahams BoostPro Computing http://www.boostpro.com From talljimbo at gmail.com Sat Aug 27 01:39:05 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 26 Aug 2011 16:39:05 -0700 Subject: [C++-sig] [Boost.Python v3] Planning and Logistics In-Reply-To: References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E582E99.60001@gmail.com> In the interest of keeping this discussion easy-to-follow, I'm going to reply to Dave's email twice, with new subjects - I'll stick to questions about logistics in this email, and talk about features and scope in another. In summary, I'm getting the sense that a branch in the mainline (not sandbox) Boost SVN is the way to go. I imagine most communication would just happen on the C++-sig list, maybe with the [Boost.Python v3] subtitle I've used in this email. On 08/26/2011 01:09 PM, Dave Abrahams wrote: > > Trying to catch up here, so responding to everything all at once. > > on Thu Aug 25 2011, Jim Bosch wrote: > > Just how tall are you, Jimbo? 6'4". Not that tall, in the grand scheme of things, but it was a teenage internet moniker that stuck with me. >> I'd also like to propose some changes that are slightly >> backwards-incompatible, as well as some that mess with the internals >> to an extent that I'd feel better about doing it outside Boost itself, >> to make it easier for adventurous users to play with the new version >> without affecting people who depend on having an extremely stable >> library in Boost. > > There's no need to do this "outside of Boost." A branch in the Boost > repository is a perfect place for exploratory development that will > eventually be released as part of Boost. >> To that end, I'm inclined to copy the library to somewhere else >> (possibly the boost sandbox, but more likely a separate site), work on >> it, produce some minor releases, and re-submit it to Boost for >> review. > > If you want to go through another review process, that's up to you. > Getting review feedback definitely has its advantages. Please, though, > use the sandbox or some other area of the Boost svn repository at least > until we get my hoped-for Git transition completed. I'd love to see a Git transition too, but I'm actually more familiar with SVN at the moment, and I can certainly see the advantages of working in the same repository as the previous version. > > on Thu Aug 25 2011, Stefan Seefeld wrote: > >> >> Jim, >> >> this is an interesting idea. There has been lots of general (dare I >> say generic ?) discussion concerning process improvements (which >> unfortunately most of the time diverted into tool discussions). Among >> the fundamental issues is a modularization of boost. I think it would >> be great if boost.python could follow through on its own, by becoming >> a separate entity. > > Separate from Boost? I guess that's a possibility but I'm not sure I > see the advantage. > From my perspective, the advantages are mostly just the same reasons Boost has been talking about increased modularity, with regard to having stable and development versions for some packages and not for others, and to allow users to be able to install some components without installing all of them. Boost.Python (right now, at least) depends on a very small core part of Boost, and to my knowledge no other Boost libraries have real dependencies on Boost.Python (not counting optional dependencies, like the Boost.Graph python bindings). 
If/when Boost as a whole goes more modular, I think any advantage of being separate would disappear, and the ideal case for Boost.Python would be for that to happen. > on Thu Aug 25 2011, Ralf Grosse-Kunstleve wrote: >> Troy and Ravi have done a significant amount of work. > > Yes, and hopefully integrating that could be part of any next steps. > >> I hope they will comment for themselves. >> >> I'd prefer if developments stayed under the boost umbrella, e.g. as >> boost/python/v3, but I don't feel very strongly about this. > > Me too. We managed a transition from v1 to v2 within Boost, and I think > we could do the same for v3. > Good to hear. I wasn't using Boost.Python back when that happened, but it's nice to know there's a precedent. Thanks! Jim From talljimbo at gmail.com Sat Aug 27 01:39:11 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 26 Aug 2011 16:39:11 -0700 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E582E9F.6030808@gmail.com> On 08/26/2011 01:09 PM, Dave Abrahams wrote: > > Well, speaking for myself, mostly time. I'd be inclined to do a rewrite > along the lines of the langbinding ideas if I had time. > I had only been vaguely aware of langbinding until I followed up on your last email. It's a very nice separation, though after bad experiences using SWIG I am a little wary about trying to build a one-size-fits-all front-end for different languages. It seems like a reasonable way to proceed would be to try to convert Boost.Python to the langbinding interface, but not be obsessive about avoiding Python-specific hacks in the frontend right now. Once we're more feature-complete in Python, we can worry about finding language-agnostic ways to do things that weren't anticipated in the original langbinding design. > Another thing I think should be examined and refreshed is the > documentation style. > Agreed. And I wouldn't just limit it to the official documentation; there are a lot of little tidbits of useful but often very old Boost.Python knowledge scattered around the internet (often on Python-affiliated sites or wikis). It'd be nice to unify a lot of that, or at least update the obsolete stuff and add links to a more complete set of official documentation. > > Good. Although, if I were you I would also carefully re-examine the > core to see if it has the best design. > I wasn't originally planning on doing a full re-evaluation, but after looking over the langbinding proposal and hearing some of the other ideas, I think that will probably be necessary. > > on Thu Aug 25 2011, Ralf Grosse-Kunstleve wrote: > >> Troy and Ravi have done a significant amount of work. > > Yes, and hopefully integrating that could be part of any next steps. > >> I hope they will comment for themselves. I was under the impression their work had already been integrated into Boost.Python v2. Is there a large repository of additional Boost.Python work elsewhere that I should be aware of? I am aware of Py++ and the extensions associated with it, and some of that could definitely go into Boost.Python proper (and I think I've heard Roman state that he wouldn't have a problem with someone else doing the work to make it so, and he just didn't have time to do it himself). >> >> * Support for subclassing boost.python's own metaclass. > > Cool. > >> >> * A per-module type registry, to avoid conflicting converters in >> multi-module projects. > > Interesting idea. 
How does sharing types across multiple modules work > in that scenario? > Hmm. I'm guessing the global type registry would still be the default, and per-module registries would override these when available? It sounds like Stefan has a clear use case in mind, and I'd be curious to know what it is. > on Fri Aug 26 2011, "Niall Douglas" wrote: > >> About a year ago I promised someone in here I'd refactor the BPL >> interface to native code to support calling a list of functors every >> time BPL goes in or out of native code. This would allow the Python >> GIL to be released or reacquired, or indeed many other useful things >> e.g. benchmarking. > > Sounds useful. > This is one of the areas - GIL, threading, etc. - where I'm less knowledgeable, especially when multiple platforms are involved. It sounds like a very useful feature for a lot of people, but I have to admit I probably wouldn't dive into this without a lot of backup. > on Fri Aug 26 2011, Jim Bosch wrote: > >> On 08/26/2011 04:17 AM, Neal Becker wrote: >>> What sort of improvements did you have in mind? >> >> My list includes: >> >> - Propagating constness to Python (essentially already done as an >> extension, but it could have a much nicer interface if I could mess >> with class_ itself). > > Oooh, now that one is tricky. I'd like to see the design you have in > mind. Python doesn't have constness; it has immutability, which is > subtly different. > Essentially, C++ objects returned as const references, pointers, or smart pointers get converted into a Python proxy object with methods that forward to the real wrapped object, but only if those methods are marked as const. The proxy objects have rvalue converters but no lvalue converters. I essentially wrote a wrapper around class_ to check for constness of member functions and make the proxy class. When you use class_ directly, there's no const-correctness, but you can do: const_aware(class_("class_name")) .def(...) etc., to get const-correct wrappers. It doesn't work at all well with visitors, though, and I think I could find a better syntax with the ability to mess with class_ itself. Most of the current design can be found in the boost sandbox, in the python_extensions package. >> - Make custom registry and template-based conversions more >> accessible. > > +1 > >> The former may just need more documentation, but the rvalue converters >> in particular don't seem to have been intended as part of the public >> API originally, and I think they're an important part of the >> library. Template-based conversions are even more buried in the >> details - you have to specialize five or six classes to get it >> working. I'd like to make it possible to create a template-based >> conversion by (partial) specializing just one traits class. > > Hmm, well, IIRC anything you do by pure partial specialization does not > go in the registry and can't participate in cross-module communication, > as it's entirely static. I guess we need a more formal separation of the > two basic techniques for conversion, not to mention documentation, so > people know to what they're opting in. > Yeah, that's exactly right. To get it to work cross-module you have to include a header with the specializations in all modules (and worse, in all wrapper source files in those modules). It makes things a little too magical for my taste, but I don't see a way around that, and template-based converters are really helpful in wrapping array-heavy numerical code. Documentation is indeed a big part of this. 
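For concreteness, the registry-based rvalue conversion being discussed here currently looks roughly like the following sketch; the Angle struct and the float-based conversion are invented purely for illustration, and registration goes through the global converter registry:

#include <boost/python.hpp>
#include <new>

namespace bp = boost::python;

// Hypothetical C++ value type we want to build from a Python float.
struct Angle { double radians; };

struct angle_from_python
{
    angle_from_python()
    {
        // Register with the global converter registry.
        bp::converter::registry::push_back(&convertible, &construct,
                                           bp::type_id<Angle>());
    }

    // Step 1: report whether the Python object can be converted at all.
    static void* convertible(PyObject* obj)
    {
        return PyFloat_Check(obj) ? obj : 0;
    }

    // Step 2: construct the C++ value in the storage Boost.Python provides.
    static void construct(PyObject* obj,
                          bp::converter::rvalue_from_python_stage1_data* data)
    {
        void* storage = reinterpret_cast<
            bp::converter::rvalue_from_python_storage<Angle>*>(data)
                ->storage.bytes;
        Angle* result = new (storage) Angle();
        result->radians = PyFloat_AsDouble(obj);
        data->convertible = storage;
    }
};

BOOST_PYTHON_MODULE(angles)
{
    angle_from_python();  // registration happens at module import time
}

The appeal of a single traits class would be collapsing the two static functions and the registration boilerplate above into one specialization, instead of spreading them over several undocumented classes.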
>> - Automatic conversions for newer boost libraries (Fusion, Pointer >> Container, and Filesystem are at the top of my list), and more for the >> STL and iostreams standard libraries. I'd like to integrate the >> indexing suite (v2) into Boost.Python proper. > > Interesting and useful improvements. > >> - Allow Boost.Python wrapped classes to inherit from Python classes. > > +1 > >> - An actual "boost.python" Python module to make exceptions and other >> types used in wrappers easily accessible from Python. > > Nice. > >> - Some limited degree of priority-based overload matching. Not sure >> how best to approach this one yet, though. > > +1 > This is a solved problem... just not in Boost.Python. Daniel Wallin > worked it out for luabind and we were going to incorporate it into > langbinding. Happy to discuss it further. > That's great to hear. I'll have to spend some time looking at luabind. I'm sure I'll have more questions later. Thanks! Jim From ndbecker2 at gmail.com Sat Aug 27 01:47:38 2011 From: ndbecker2 at gmail.com (Neal Becker) Date: Fri, 26 Aug 2011 19:47:38 -0400 Subject: [C++-sig] New Major-Release Boost.Python Development References: <4E56B7A6.2030008@gmail.com> <4E57E509.70007@gmail.com> Message-ID: The top of my list is improved interface to numpy. I know there is work going on in the form of ndarray, which seems promising. From talljimbo at gmail.com Sat Aug 27 02:21:13 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Fri, 26 Aug 2011 17:21:13 -0700 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: References: <4E56B7A6.2030008@gmail.com> <4E57E509.70007@gmail.com> Message-ID: <4E583879.6010305@gmail.com> On 08/26/2011 04:47 PM, Neal Becker wrote: > The top of my list is improved interface to numpy. I know there is work going > on in the form of ndarray, which seems promising. I'm still hesitant to consider ndarray part of Boost.Python; it's really a separate library, and I think providing a full multidimensional array class to be outside the scope of what Boost.Python should be. But I do intend to keep developing ndarray in parallel with Boost.Python, and some code may well flow from ndarray to Boost.Python. This summer, Stefan Seefeld has been mentoring Ankit Daftery on a GSOC project to clean up the low-level Boost.Python Numpy interface in the Boost sandbox (which began life as part of ndarray). While it may not be an official part of Boost.Python or Boost (it may be submitted as a separate library) immediately, that will provide an improved interface to Numpy on a much shorter timescale than the larger Boost.Python upgrades I'm thinking of for v3. Ultimately I would like to incorporate this low-level Numpy support as an optional component in the mainline Boost.Python v3, but I don't want to tie Boost.Python users to a particular C++ array class, even my own. Jim From stefan at seefeld.name Sat Aug 27 17:55:58 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Sat, 27 Aug 2011 11:55:58 -0400 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: References: <4E56B7A6.2030008@gmail.com> Message-ID: <4E59138E.8000904@seefeld.name> On 08/26/2011 04:09 PM, Dave Abrahams wrote: > on Thu Aug 25 2011, Stefan Seefeld wrote: >> Jim, >> >> this is an interesting idea. There has been lots of general (dare I >> say generic ?) discussion concerning process improvements (which >> unfortunately most of the time diverted into tool discussions). Among >> the fundamental issues is a modularization of boost. 
I think it would >> be great if boost.python could follow through on its own, by becoming >> a separate entity. > Separate from Boost? I guess that's a possibility but I'm not sure I > see the advantage. Jim already followed up on that, and I fully agree with that. If things can happen within Boost, all the better. >> * A per-module type registry, to avoid conflicting converters in >> multi-module projects. > Interesting idea. How does sharing types across multiple modules work > in that scenario? That's a good question. I don't have an answer to that. In fact, the idea of having per-module type registries grew out of discussions I had with Troy quite a while ago, where we considered all the bad things that could happen if multiple modules tried to export the same types. As always: explicit is better than implicit. > >> - Some limited degree of priority-based overload matching. Not sure >> how best to approach this one yet, though. > +1 > This is a solved problem... just not in Boost.Python. Daniel Wallin > worked it out for luabind and we were going to incorporate it into > langbinding. Happy to discuss it further. > > I'm happy to see some discussion on langbinding in this context. I also agree with Jim's pragmatic approach to this he is proposing in another mail. Stefan -- ...ich hab' noch einen Koffer in Berlin... From stefan at seefeld.name Sat Aug 27 18:00:29 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Sat, 27 Aug 2011 12:00:29 -0400 Subject: [C++-sig] New Major-Release Boost.Python Development In-Reply-To: <4E583879.6010305@gmail.com> References: <4E56B7A6.2030008@gmail.com> <4E57E509.70007@gmail.com> <4E583879.6010305@gmail.com> Message-ID: <4E59149D.8060208@seefeld.name> On 08/26/2011 08:21 PM, Jim Bosch wrote: > On 08/26/2011 04:47 PM, Neal Becker wrote: >> The top of my list is improved interface to numpy. I know there is >> work going >> on in the form of ndarray, which seems promising. > > I'm still hesitant to consider ndarray part of Boost.Python; it's > really a separate library, and I think providing a full > multidimensional array class to be outside the scope of what > Boost.Python should be. But I do intend to keep developing ndarray in > parallel with Boost.Python, and some code may well flow from ndarray > to Boost.Python. > > This summer, Stefan Seefeld has been mentoring Ankit Daftery on a GSOC > project to clean up the low-level Boost.Python Numpy interface in the > Boost sandbox (which began life as part of ndarray). While it may not > be an official part of Boost.Python or Boost (it may be submitted as a > separate library) immediately, that will provide an improved interface > to Numpy on a much shorter timescale than the larger Boost.Python > upgrades I'm thinking of for v3. > > Ultimately I would like to incorporate this low-level Numpy support as > an optional component in the mainline Boost.Python v3, but I don't > want to tie Boost.Python users to a particular C++ array class, even > my own. I really think that this should be a separate module (right now we call it tentatively "boost.numpy"). It introduces additional dependencies which not all boost.python users may be interested in. And yes, I had planned that we could address enough of the remaining issues to be able to submit boost.numpy for formal review within the next couple of months, i.e. ideally even before the end of this year. FWIW, Stefan -- ...ich hab' noch einen Koffer in Berlin... 
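To give a flavor of what the low-level boost.numpy interface provides, a rough embedding-side sketch follows; the header path and names are taken from the sandbox project as it was later proposed for Boost, so treat the exact spellings as assumptions rather than a stable API:

#include <boost/numpy.hpp>   // sandbox-era header; exact path may differ
#include <iostream>
#include <string>

namespace bp = boost::python;
namespace np = boost::numpy;

int main()
{
    Py_Initialize();
    np::initialize();        // must run once before any ndarray use

    // Create a 3x4 float64 array owned by NumPy and fill it from C++.
    np::ndarray a = np::zeros(bp::make_tuple(3, 4),
                              np::dtype::get_builtin<double>());
    double* data = reinterpret_cast<double*>(a.get_data());
    for (int i = 0; i < 12; ++i)
        data[i] = i;

    // The array is an ordinary Python object, so it can be handed to any
    // wrapped or interpreted code without copying.
    std::cout << bp::extract<std::string>(bp::str(a))() << std::endl;
    return 0;
}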
From dave at boostpro.com Sat Aug 27 22:29:56 2011 From: dave at boostpro.com (Dave Abrahams) Date: Sat, 27 Aug 2011 12:29:56 -0800 Subject: [C++-sig] [Boost.Python v3] Planning and Logistics References: <4E56B7A6.2030008@gmail.com> <4E582E99.60001@gmail.com> Message-ID: on Fri Aug 26 2011, Jim Bosch wrote: > In the interest of keeping this discussion easy-to-follow, I'm going > to reply to Dave's email twice, with new subjects - I'll stick to > questions about logistics in this email, and talk about features and > scope in another. > > In summary, I'm getting the sense that a branch in the mainline (not > sandbox) Boost SVN is the way to go. It's all the same repository. Working in the sandbox is essentially equivalent to working in a branch of trunk; it's just a matter of where the code lives in the SVN directory hierarchy. > I imagine most communication would just happen on the C++-sig list, > maybe with the [Boost.Python v3] subtitle I've used in this email. Sure. > On 08/26/2011 01:09 PM, Dave Abrahams wrote: >> >> Trying to catch up here, so responding to everything all at once. >> >> on Thu Aug 25 2011, Jim Bosch wrote: >> >> Just how tall are you, Jimbo? > > 6'4". Not that tall, in the grand scheme of things, but it was a > teenage internet moniker that stuck with me. Oh, then I should change my email to bigfatdave at somewhere.com >>> I'd also like to propose some changes that are slightly >>> backwards-incompatible, as well as some that mess with the internals >>> to an extent that I'd feel better about doing it outside Boost >>> itself, to make it easier for adventurous users to play with the new >>> version without affecting people who depend on having an extremely >>> stable library in Boost. >> >> There's no need to do this "outside of Boost." A branch in the Boost >> repository is a perfect place for exploratory development that will >> eventually be released as part of Boost. >> >> >>> To that end, I'm inclined to copy the library to somewhere else >>> (possibly the boost sandbox, but more likely a separate site), work on >>> it, produce some minor releases, and re-submit it to Boost for >>> review. >> >> If you want to go through another review process, that's up to you. >> Getting review feedback definitely has its advantages. Please, though, >> use the sandbox or some other area of the Boost svn repository at least >> until we get my hoped-for Git transition completed. > > I'd love to see a Git transition too, but I'm actually more familiar > with SVN at the moment, and I can certainly see the advantages of > working in the same repository as the previous version. > >> >> on Thu Aug 25 2011, Stefan Seefeld wrote: >> >>> >>> Jim, >>> >>> this is an interesting idea. There has been lots of general (dare I >>> say generic ?) discussion concerning process improvements (which >>> unfortunately most of the time diverted into tool discussions). Among >>> the fundamental issues is a modularization of boost. I think it would >>> be great if boost.python could follow through on its own, by becoming >>> a separate entity. >> >> Separate from Boost? I guess that's a possibility but I'm not sure I >> see the advantage. > > From my perspective, the advantages are mostly just the same reasons > Boost has been talking about increased modularity, with regard to > having stable and development versions for some packages and not for > others, and to allow users to be able to install some components > without installing all of them. 
Boost.Python (right now, at least) > depends on a very small core part of Boost, and to my knowledge no > other Boost libraries have real dependencies on Boost.Python (not > counting optional dependencies, like the Boost.Graph python bindings). > If/when Boost as a whole goes more modular, I think any advantage of > being separate would disappear, and the ideal case for Boost.Python > would be for that to happen. In that case, if I were you, I would actually start using Git with the modularized / CMake-ified Boost at http://github.com/boost-lib. -- Dave Abrahams BoostPro Computing http://www.boostpro.com From dave at boostpro.com Sat Aug 27 22:40:17 2011 From: dave at boostpro.com (Dave Abrahams) Date: Sat, 27 Aug 2011 12:40:17 -0800 Subject: [C++-sig] [Boost.Python v3] Features and Scope References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> Message-ID: on Fri Aug 26 2011, Jim Bosch wrote: > On 08/26/2011 01:09 PM, Dave Abrahams wrote: >> >> Well, speaking for myself, mostly time. I'd be inclined to do a rewrite >> along the lines of the langbinding ideas if I had time. >> > > I had only been vaguely aware of langbinding until I followed up on > your last email. It's a very nice separation, though after bad > experiences using SWIG I am a little wary about trying to build a > one-size-fits-all front-end for different languages. > > It seems like a reasonable way to proceed would be to try to convert > Boost.Python to the langbinding interface, but not be obsessive about > avoiding Python-specific hacks in the frontend right now. Absolutely perfect; that's what I was thinking. > Once we're more feature-complete in Python, we can worry about finding > language-agnostic ways to do things that weren't anticipated in the > original langbinding design. > >> Another thing I think should be examined and refreshed is the >> documentation style. > > Agreed. And I wouldn't just limit it to the official documentation; > there are a lot of little tidbits of useful but often very old > Boost.Python knowledge scattered around the internet (often on > Python-affiliated sites or wikis). It'd be nice to unify a lot of > that, or at least update the obsolete stuff and add links to a more > complete set of official documentation. Sure. >> Good. Although, if I were you I would also carefully re-examine the >> core to see if it has the best design. > > I wasn't originally planning on doing a full re-evaluation, but after > looking over the langbinding proposal and hearing some of the other > ideas, I think that will probably be necessary. Happy to discuss with you anywhere you get stuck. By the way, there's actually a bunch of langbinding code checked into the SVN repo somewhere. Ah, here: http://svn.boost.org/svn/boost/sandbox/langbinding/ >> on Thu Aug 25 2011, Ralf Grosse-Kunstleve wrote: >> >>> Troy and Ravi have done a significant amount of work. >> >> Yes, and hopefully integrating that could be part of any next steps. >> >>> I hope they will comment for themselves. > > I was under the impression their work had already been integrated into > Boost.Python v2. Is there a large repository of additional > Boost.Python work elsewhere that I should be aware of? It's my impression that the integration was stalled due to Ralf's concerns about object code size. But I could be wrong. >> Interesting idea. How does sharing types across multiple modules work >> in that scenario? > > Hmm. 
I'm guessing the global type registry would still be the > default, and per-module registries would override these when > available? It sounds like Stefan has a clear use case in mind, and > I'd be curious to know what it is. Me too. >> on Fri Aug 26 2011, Jim Bosch wrote: >> >>> On 08/26/2011 04:17 AM, Neal Becker wrote: >>>> What sort of improvements did you have in mind? >>> >>> My list includes: >>> >>> - Propagating constness to Python (essentially already done as an >>> extension, but it could have a much nicer interface if I could mess >>> with class_ itself). >> >> Oooh, now that one is tricky. I'd like to see the design you have in >> mind. Python doesn't have constness; it has immutability, which is >> subtly different. > > Essentially, C++ objects returned as const references, pointers, or > smart pointers get converted into a Python proxy object with methods > that forward to the real wrapped object, but only if those methods are > marked as const. The proxy objects have rvalue converters but no > lvalue converters. I like it! -- Dave Abrahams BoostPro Computing http://www.boostpro.com From stefan at seefeld.name Sat Aug 27 23:08:03 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Sat, 27 Aug 2011 17:08:03 -0400 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> Message-ID: <4E595CB3.1020802@seefeld.name> On 08/27/2011 04:40 PM, Dave Abrahams wrote: >> Hmm. I'm guessing the global type registry would still be the >> default, and per-module registries would override these when >> available? It sounds like Stefan has a clear use case in mind, and >> I'd be curious to know what it is. > Me too. I believe what we were discussing at the time was a situation where an extension module would not only define converters for its own types, but also common types that may are required by the API. This could in particular include common types from libstdc++. If multiple extension modules do this, than a Python program that happens to use them in the same application may end up with undefined behavior (does this constitute an ODR violation under the hood ?) To make this work, the common type converters need to be factored out and shared. This in itself is impractical (since the original authors may not be aware of this need). Furthermore, a converter may require module-specific behavior, i.e. converting a std::string in the context of one module may be different from the desired conversion in another. Binding converters to particular modules (and requiring to explicitly import / exporting converters) seems like a solution to the above. Stefan -- ...ich hab' noch einen Koffer in Berlin... From roman.yakovenko at gmail.com Sun Aug 28 07:46:57 2011 From: roman.yakovenko at gmail.com (Roman Yakovenko) Date: Sun, 28 Aug 2011 08:46:57 +0300 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E582E9F.6030808@gmail.com> References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> Message-ID: On Sat, Aug 27, 2011 at 2:39 AM, Jim Bosch wrote: > I am aware of Py++ and the extensions associated with it, and some of that > could definitely go into Boost.Python proper (and I think I've heard Roman > state that he wouldn't have a problem with someone else doing the work to > make it so, and he just didn't have time to do it himself). If you will find them useful, I'll be glad to contribute and help with the integration. 
Py++ also contains the modified version of indexing suite v2 (more containers, bug fixes and additional methods for existing containers). Regards, Roman From s_sourceforge at nedprod.com Sun Aug 28 17:41:38 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Sun, 28 Aug 2011 16:41:38 +0100 Subject: [C++-sig] [Boost.Python v3] Planning and Logistics In-Reply-To: References: <4E56B7A6.2030008@gmail.com>, Message-ID: <4E5A61B2.17574.3A790CA9@s_sourceforge.nedprod.com> On 27 Aug 2011 at 12:29, Dave Abrahams wrote: > In that case, if I were you, I would actually start using Git with the > modularized / CMake-ified Boost at http://github.com/boost-lib. If you do go for git, I have found repo embedded per-branch issue tracking (e.g. http://bugseverywhere.org/) to be a god send for productivity because you can raise issues with your own code without bothering the mainline issue tracker about branch specific (and indeed often personal) issues. It has made as much difference to my productivity as adopting git did because I no longer need to keep (and often misplace) post it notes reminding me of things to do. I even coded up a GUI for it for the TortoiseXXX family of revision tracking GUIs which you can find at http://www.nedprod.com/programs/Win32/BEurtle/. This lets you mark off BE issue fixes with GIT/whatever commits. HTH, Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909. From talljimbo at gmail.com Sun Aug 28 20:14:53 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Sun, 28 Aug 2011 11:14:53 -0700 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E595CB3.1020802@seefeld.name> References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> <4E595CB3.1020802@seefeld.name> Message-ID: <4E5A859D.8000400@gmail.com> On 08/27/2011 02:08 PM, Stefan Seefeld wrote: > On 08/27/2011 04:40 PM, Dave Abrahams wrote: >>> Hmm. I'm guessing the global type registry would still be the >>> default, and per-module registries would override these when >>> available? It sounds like Stefan has a clear use case in mind, and >>> I'd be curious to know what it is. >> Me too. > > I believe what we were discussing at the time was a situation where an > extension module would not only define converters for its own types, but > also common types that may are required by the API. This could in > particular include common types from libstdc++. > > If multiple extension modules do this, than a Python program that > happens to use them in the same application may end up with undefined > behavior (does this constitute an ODR violation under the hood ?) > > To make this work, the common type converters need to be factored out > and shared. This in itself is impractical (since the original authors > may not be aware of this need). Furthermore, a converter may require > module-specific behavior, i.e. converting a std::string in the context > of one module may be different from the desired conversion in another. > > Binding converters to particular modules (and requiring to explicitly > import / exporting converters) seems like a solution to the above. > That's definitely a problem that needs to be addressed. I've encountered it too. I hope to provide more libstd++ conversions in Boost.Python itself, which should alleviate the need for this a bit, but it does need a proper solution. 
To solve it, I think you'd want anything registered by a specific module to appear both in that module's registry and the global registry, with the module's registry taking precedence. Are there any cases where you'd want something only to appear in the module-specific registry? Jim From dave at boostpro.com Sun Aug 28 20:39:06 2011 From: dave at boostpro.com (Dave Abrahams) Date: Sun, 28 Aug 2011 10:39:06 -0800 Subject: [C++-sig] [Boost.Python v3] Features and Scope References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> <4E595CB3.1020802@seefeld.name> Message-ID: on Sat Aug 27 2011, Stefan Seefeld wrote: > On 08/27/2011 04:40 PM, Dave Abrahams wrote: >>> Hmm. I'm guessing the global type registry would still be the >>> default, and per-module registries would override these when >>> available? It sounds like Stefan has a clear use case in mind, and >>> I'd be curious to know what it is. >> Me too. > > I believe what we were discussing at the time was a situation where an > extension module would not only define converters for its own types, but > also common types that may are required by the API. That's currently supported by the global registry. > This could in particular include common types from libstdc++. > > If multiple extension modules do this, than a Python program that > happens to use them in the same application may end up with undefined > behavior (does this constitute an ODR violation under the hood ?) Essentially, yes. > To make this work, the common type converters need to be factored out > and shared. This in itself is impractical (since the original authors > may not be aware of this need). Furthermore, a converter may require > module-specific behavior, i.e. converting a std::string in the context > of one module may be different from the desired conversion in another. > > Binding converters to particular modules (and requiring to explicitly > import / exporting converters) seems like a solution to the above. It might be, but your description of what you're actually proposing is pretty vague, still, so it's hard to tell. -- Dave Abrahams BoostPro Computing http://www.boostpro.com From stefan at seefeld.name Mon Aug 29 02:13:50 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Sun, 28 Aug 2011 20:13:50 -0400 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> <4E595CB3.1020802@seefeld.name> Message-ID: <4E5AD9BE.6050202@seefeld.name> On 08/28/2011 02:39 PM, Dave Abrahams wrote: > on Sat Aug 27 2011, Stefan Seefeld wrote: > > >> Binding converters to particular modules (and requiring to explicitly >> import / exporting converters) seems like a solution to the above. > It might be, but your description of what you're actually proposing is > pretty vague, still, so it's hard to tell. I haven't fully thought this through myself yet. In terms of UI I imagine this to work like this: import ext_mod would still keep the converters private to ext_mod by default, so only functions and methods from ext_mod itself would have access to it. To make those converters accessible globally would require an extra step, for example: from ext_mod import __converters__ Thus each python module can decide itself whether or not it wants to access them. Implementation-wise this would imply that a single global registry would be replaced by a linked list of per-module registries. The obvious disadvantage is that converter lookup will be relatively slow. 
There could still be ways to keep the current behavior, both for backward compatibility and as an optimization, if the author knows that there is no danger of symbol collision. Stefan -- ...ich hab' noch einen Koffer in Berlin... From strattonbrazil at gmail.com Mon Aug 29 05:22:49 2011 From: strattonbrazil at gmail.com (Josh Stratton) Date: Sun, 28 Aug 2011 20:22:49 -0700 Subject: [C++-sig] sending a c++ class to a python function Message-ID: I'm getting an error when I try to pass down my object that results in a seg fault. I've registered my class I'm sending down, but when I actually send it, my program exits at this line in the library right after I call the importFile() function... return call(get_managed_object(self, tag), BOOST_PP_ENUM_PARAMS_Z(1, N, a)); // here's the class I'm trying to send down class Scene { public: MeshP mesh(int key); void clearScene(); CameraP createCamera(QString name); MeshP createMesh(QString name); void setMesh(int meshKey, MeshP mesh) { _meshes[meshKey] = mesh; } QHashIterator meshes() { return QHashIterator(_meshes); } QHashIterator cameras() { return QHashIterator(_cameras); } CameraP fetchCamera(QString name); QList importExtensions(); void importFile(QString fileName); void evalPythonFile(QString fileName); Scene(); protected: int uniqueCameraKey(); int uniqueMeshKey(); QString uniqueName(QString prefix); private: QHash _meshes; QHash _cameras; //QHash _lights; QSet _names; // PythonQtObjectPtr _context; object _pyMainModule; object _pyMainNamespace; public slots: void pythonStdOut(const QString &s) { std::cout << s.toStdString() << std::flush; } void pythonStdErr(const QString &s) { std::cout << s.toStdString() << std::flush; } }; // first I create the mapping, which I'm not sure is correct, trying to follow: http://misspent.wordpress.com/2009/09/27/how-to-write-boost-python-converters/ struct SceneToPython { static PyObject* convert(Scene const& scene) { return boost::python::incref(boost::python::object(scene).ptr()); } }; // then I register it boost::python::to_python_converter(); // then I send it down from inside my Scene object try { object processFileFunc = _pyMainModule.attr("MeshImporter").attr("processFile"); processFileFunc(this, fileName); // seems to error here } catch (boost::python::error_already_set const &) { QString perror = parse_python_exception(); std::cerr << "Error in Python: " << perror.toStdString() << std::endl; } I'm not really sure what actually is wrong besides something being set up incorrectly. Do I need to make a python-to-C++ converter as well even if I'm not sending it back to C++? Or is my convert() function just improperly implemented? I wasn't sure how much I need to actually get it to map correctly. Thanks. From hans_meine at gmx.net Mon Aug 29 10:03:29 2011 From: hans_meine at gmx.net (Hans Meine) Date: Mon, 29 Aug 2011 10:03:29 +0200 Subject: [C++-sig] New Major-Release Boost.Python Development Message-ID: <201108291003.29102.hans_meine@gmx.net> On 26.08.2011 at 20:25, Jim Bosch wrote: > - ... the rvalue converters in particular don't seem to have been intended as part of the public API originally, and I think they're an important part of the library. Correct, great! > - Automatic conversions for newer boost libraries (Fusion, Pointer Container, and Filesystem are at the top of my list), and more for the STL and iostreams standard libraries. I'd like to integrate the indexing suite (v2) into Boost.Python proper. Probably makes sense, in particular the STL part is an often (understandably) expected feature.
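For context on the STL point: today each container instantiation has to be exported by hand, typically with the indexing suite, which is exactly the boilerplate that built-in STL conversions would remove. A minimal sketch, with the module name and element type chosen arbitrarily:

#include <boost/python.hpp>
#include <boost/python/suite/indexing/vector_indexing_suite.hpp>
#include <vector>

namespace bp = boost::python;

BOOST_PYTHON_MODULE(containers)
{
    // Exposes std::vector<double> as a list-like Python class supporting
    // indexing, slicing, append, and iteration.
    bp::class_<std::vector<double> >("DoubleVector")
        .def(bp::vector_indexing_suite<std::vector<double> >());
}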
> - Allow Boost.Python wrapped classes to inherit from Python classes. Yes yes yes! (list/dict inheritence in particular is /so/ useful.) > - Some limited degree of priority-based overload matching. Not sure how best to approach this one yet, though. A related goal (also template-related) is better support for error messages in the face of overloaded functions. In our VIGRA library [1], we have a lot of overloaded functions performing image analysis tasks on NumPy arrays. Currently, we face the following problems: - The automatically generated signatures in the docstrings are hard to read. Don?t know if there is a better way of presenting signatures of heavily templated code (improvements from the error messages of modern compilers or STLfilt come to my mind), or if there should simply be a hook for custom postprocessing. (Maybe just __doc__ being writeable from Python?) - The same goes for error messages when calling with the wrong arguments. - Furthermore, we have custom converters that check certain properties of the passed arrays, e.g. dimensionality, memory layout, or dtype. With overloading, all variants are tried in turn, but there?s no good way to "collect" error messages, and tell the user why no registered overload would match. (The C++ signatures don?t necessarily give the user enough information.) My feeling is that all this is strongly related to the features you already have in mind, but hopefully my notes help in steering this into an even better, more general direction. Oh, and there?s another missing feature: - Better support for introspection, e.g. by the ?inspect? module or documentation tools. Looking forward to a "more accessible" BPL, Hans^ [1] http://hci.iwr.uni-heidelberg.de/vigra/doc/vigranumpy/index.html From stefan at seefeld.name Mon Aug 29 18:41:22 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Mon, 29 Aug 2011 12:41:22 -0400 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E5A859D.8000400@gmail.com> References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> <4E595CB3.1020802@seefeld.name> <4E5A859D.8000400@gmail.com> Message-ID: <4E5BC132.3000503@seefeld.name> On 08/28/2011 02:14 PM, Jim Bosch wrote: > > To solve it, I think you'd want anything registered by a specific > module to appear both in that module's registry and the global > registry, with the module's registry taking precedence. Are there any > cases where you'd want something only to appear in the module-specific > registry? Anything that gets injected into the global registry is prone to violate the ODR. Of course, also adding it into a local registry and letting that have precedence may mask the ODR violation issue. But in that case, it isn't clear why we'd add it to the global registry at all. It may make sense to define a policy that types exported explicitly by an extension module may be added to the global repository. In contrast, prerequisite types that are only by way of dependency added (e.g., common libstdc++ types) should be stored in a local / private registry. Stefan -- ...ich hab' noch einen Koffer in Berlin... From talljimbo at gmail.com Mon Aug 29 20:08:23 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Mon, 29 Aug 2011 11:08:23 -0700 Subject: [C++-sig] sending a c++ class to a python function In-Reply-To: References: Message-ID: <4E5BD597.3070709@gmail.com> Normally a to-python converter is needed when you have a function that returns a C++ object, and you want to wrap that function so the returned thing can be used in Python. 
I don't see any functions that return a Scene object. They will also enable expressions of the form "object(scene)", but you can't use that sort of expression to define the converter itself. When you're writing the conversion function, remember that Boost.Python really doesn't know anything about your class unless you tell it about it using a boost::python::class_ definition, and doing that will already define a to-python converter for it. The instructions you're following are for things like strings that already have a Python type they can be converted to; you probably want to use class_ for something like Scene. Now, if you just wanted to convert Scene to a dict instead of having a Scene type in Python, a custom conversion would indeed be the way to go, but you'd need to create a dict and fill it in the convert function. Anyhow, my best guess for the segfault is that you have an infinite recursion - when you call object(scene), it needs to look for a to-python converter in the registry. So it finds yours, which calls object(scene). Repeat. Jim On 08/28/2011 08:22 PM, Josh Stratton wrote: > I'm getting an error when I try to pass down my object that results in > a seg fault. I've registered my class I'm sending down, but when I > actually send it, my program exits at this line in the library right > after I call the importFile() function... > > return call(get_managed_object(self, tag), > BOOST_PP_ENUM_PARAMS_Z(1, N, a)); > > // here's the class I'm trying to send down > class Scene > { > public: > MeshP mesh(int key); > void clearScene(); > CameraP createCamera(QString name); > MeshP createMesh(QString name); > void setMesh(int meshKey, MeshP mesh) { > _meshes[meshKey] = mesh; } > QHashIterator meshes() { return > QHashIterator(_meshes); } > QHashIterator cameras() { return > QHashIterator(_cameras); } > CameraP fetchCamera(QString name); > QList importExtensions(); > void importFile(QString fileName); > void evalPythonFile(QString fileName); > Scene(); > protected: > int uniqueCameraKey(); > int uniqueMeshKey(); > QString uniqueName(QString prefix); > > private: > QHash _meshes; > QHash _cameras; > //QHash _lights; > QSet _names; > // PythonQtObjectPtr _context; > object _pyMainModule; > object _pyMainNamespace; > public slots: > void pythonStdOut(const QString&s) > { std::cout<< s.toStdString()<< std::flush; } > void pythonStdErr(const QString&s) > { std::cout<< s.toStdString()<< std::flush; } > }; > > // first I create the mapping, which I'm not sure is correct, trying > to follow: http://misspent.wordpress.com/2009/09/27/how-to-write-boost-python-converters/ > struct SceneToPython > { > static PyObject* convert(Scene const& scene) > { > return boost::python::incref(boost::python::object(scene).ptr()); > } > }; > > // then I register it > boost::python::to_python_converter(); > > // then I send it down from inside my Scene object > try { > object processFileFunc = > _pyMainModule.attr("MeshImporter").attr("processFile"); > processFileFunc(this, fileName); // seems to error here > } catch (boost::python::error_already_set const&) { > QString perror = parse_python_exception(); > std::cerr<< "Error in Python: "<< perror.toStdString()<< std::endl; > } > > I'm not really sure what actually is wrong besides something being > setup incorrectly. Do I need to make a python-to-C++ converter as > well even if I'm not sending it back to C++? Or is my convert() > function just improperly implemented? I wasn't sure how much I need > to actually get it to map correctly. Thanks. 
> _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From talljimbo at gmail.com Tue Aug 30 08:42:53 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Mon, 29 Aug 2011 23:42:53 -0700 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E5BC132.3000503@seefeld.name> References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> <4E595CB3.1020802@seefeld.name> <4E5A859D.8000400@gmail.com> <4E5BC132.3000503@seefeld.name> Message-ID: <4E5C866D.6060809@gmail.com> On 08/29/2011 09:41 AM, Stefan Seefeld wrote: > On 08/28/2011 02:14 PM, Jim Bosch wrote: >> >> To solve it, I think you'd want anything registered by a specific >> module to appear both in that module's registry and the global >> registry, with the module's registry taking precedence. Are there any >> cases where you'd want something only to appear in the module-specific >> registry? > > Anything that gets injected into the global registry is prone to violate > the ODR. Of course, also adding it into a local registry and letting > that have precedence may mask the ODR violation issue. But in that case, > it isn't clear why we'd add it to the global registry at all. > The reason to add it to the global registry is so if you know one of the modules you depend on registered a converter, you don't have to do it yourself. > It may make sense to define a policy that types exported explicitly by > an extension module may be added to the global repository. In contrast, > prerequisite types that are only by way of dependency added (e.g., > common libstdc++ types) should be stored in a local / private registry. > I don't see how having a global registry makes things any worse from an ODR perspective. It seems like it's mostly just the same "where do templates get instantiated" problem that compilers and linkers always have to deal with. In other words, instantiating something like: class_< std::vector > in two shared libraries doesn't seem any different from instantiating something like: std::vector in two shared libraries, and we basically always have to leave it up to the compiler/linker to solve the latter. Is the problem the fact that the global registry stores function pointers to template instantiations? I can see how that would appear to make the multiple (template) definitions more problematic, but it also seems like that's a problem compilers and linkers would have to have already solved. Of course, this may just be wishful thinking on my part; I admit I'm not very familiar with how these problems are handled in practice. Jim From hans_meine at gmx.net Tue Aug 30 10:26:49 2011 From: hans_meine at gmx.net (Hans Meine) Date: Tue, 30 Aug 2011 10:26:49 +0200 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E5C866D.6060809@gmail.com> References: <4E56B7A6.2030008@gmail.com> <4E5BC132.3000503@seefeld.name> <4E5C866D.6060809@gmail.com> Message-ID: <201108301026.49897.hans_meine@gmx.net> Am Dienstag, 30. August 2011, 08:42:53 schrieb Jim Bosch: > I don't see how having a global registry makes things any worse from an > ODR perspective. It seems like it's mostly just the same "where do > templates get instantiated" problem that compilers and linkers always > have to deal with. There is a big difference in the case that two separate extensions (want to) use different converters, though. 
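To make the problem concrete, a contrived sketch (type, names and include paths are only illustrative, not our actual code): one extension wraps the type as a full Python class, the other only registers a lightweight conversion, and both registrations land in the same process-wide registry.

// extension_a.cpp - exposes TinyVector as a Python extension class
#include <boost/python.hpp>
#include <vigra/tinyvector.hxx>

BOOST_PYTHON_MODULE(extension_a)
{
    boost::python::class_< vigra::TinyVector<float, 3> >("TinyVector3");
}

// extension_b.cpp - prefers converting the very same type to a plain tuple
#include <boost/python.hpp>
#include <vigra/tinyvector.hxx>

struct TinyVectorToTuple
{
    static PyObject* convert(vigra::TinyVector<float, 3> const& v)
    {
        return boost::python::incref(
            boost::python::make_tuple(v[0], v[1], v[2]).ptr());
    }
};

BOOST_PYTHON_MODULE(extension_b)
{
    boost::python::to_python_converter<
        vigra::TinyVector<float, 3>, TinyVectorToTuple >();
}

Which of the two to-python conversions actually ends up being used then depends on which module happens to be imported first.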
> In other words, instantiating something like: > > class_< std::vector > > > in two shared libraries doesn't seem any different from instantiating > something like: > > std::vector > > in two shared libraries, and we basically always have to leave it up to > the compiler/linker to solve the latter. Here you assume that class_< std::vector >(?) will generate the same code in both cases. For std::vector, this holds, but for class_< std::vector >(?), the (?) part?which you left out?is crucial. In practice, it happened to me with two extensions which did not even agree on the /type/ of converter (class_ vs. rvalue) for the same class (vigra::TinyVector, a small fixed-size vector). Although it would be nice to fix/allow this (local registries, which can still be imported/reused as a dependency), it /may/ be an even better fix to change the extensions and make them agree on the "optimal" converter. (The obvious problem is of course the definition of "optimal" which may be different depending on the application.) The problem with this is the very reason for the variety of std::vector converters out there (e.g. focusing on conversion to native Python types, conversion speed, or in-place writability/modifications). HTH, Hans From stefan at seefeld.name Tue Aug 30 13:04:21 2011 From: stefan at seefeld.name (Stefan Seefeld) Date: Tue, 30 Aug 2011 07:04:21 -0400 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E5C866D.6060809@gmail.com> References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> <4E595CB3.1020802@seefeld.name> <4E5A859D.8000400@gmail.com> <4E5BC132.3000503@seefeld.name> <4E5C866D.6060809@gmail.com> Message-ID: <4E5CC3B5.1010202@seefeld.name> On 08/30/2011 02:42 AM, Jim Bosch wrote: > On 08/29/2011 09:41 AM, Stefan Seefeld wrote: >> On 08/28/2011 02:14 PM, Jim Bosch wrote: >>> >>> To solve it, I think you'd want anything registered by a specific >>> module to appear both in that module's registry and the global >>> registry, with the module's registry taking precedence. Are there any >>> cases where you'd want something only to appear in the module-specific >>> registry? >> >> Anything that gets injected into the global registry is prone to violate >> the ODR. Of course, also adding it into a local registry and letting >> that have precedence may mask the ODR violation issue. But in that case, >> it isn't clear why we'd add it to the global registry at all. >> > > The reason to add it to the global registry is so if you know one of > the modules you depend on registered a converter, you don't have to do > it yourself. As I suggested in reply to Dave, I think it would be better to require modules that depend on external converters to explicitly import them. Stefan -- ...ich hab' noch einen Koffer in Berlin... From strattonbrazil at gmail.com Tue Aug 30 16:45:21 2011 From: strattonbrazil at gmail.com (Josh Stratton) Date: Tue, 30 Aug 2011 07:45:21 -0700 Subject: [C++-sig] sending a c++ class to a python function In-Reply-To: <4E5BD597.3070709@gmail.com> References: <4E5BD597.3070709@gmail.com> Message-ID: Oh, okay. So I can create a module... #include "scene.h" BOOST_PYTHON_MODULE(scene) { class_("Scene"); } and then import it (even though) in my python I normally don't import things I'm not creating. I'm assuming this is a boost-python thing to get the class into scope, which gets rid of the "converter found for C++ type: Scene" error. from scene import Scene # in my python code In my terminal I get... 
Error in Python: : Python argument types in Mesh.buildByIndex(Scene, PrimitiveParts) did not match C++ signature: buildByIndex(QSharedPointer, PrimitiveParts): File "", line 21, in processFile for this function: static void buildByIndex(SceneP scene, PrimitiveParts parts); The function I'm calling has SceneP typedeffed as a QSharedPointer and I'm assuming this error is because I haven't made a Scene to QSharedPointer converter, which should just wrap the Scene object when it comes in requiring a custom conversion function. struct QSceneFromPythonScene { static PyObject* convert(Scene const& s) { return boost::python::incref(boost::python::object(SceneP(&s))); } }; But I'm not converting that properly, I guess. "invalid conversion of const Scene* to Scene*. On Mon, Aug 29, 2011 at 11:08 AM, Jim Bosch wrote: > Normally a to-python converter is needed when you have a function that > returns a C++ object, and you want to wrap that function so the returned > thing can be used in Python. ?I don't see any functions that return a Scene > object. ?They will also enable expressions of the form "object(scene)", but > you can't use that sort of expression to define the converter itself. > > When you're writing the conversion function, remember that Boost.Python > really doesn't know anything about your class unless you tell it about it > using a boost::python::class_ definition, and doing that will already define > a to-python converter for it. ?The instructions you're following are for > things like strings that already have a Python type they can be converted > to; you probably want to use class_ for something like Scene. ?Now, if you > just wanted to convert Scene to a dict instead of having a Scene type in > Python, a custom conversion would indeed be the way to go, but you'd need to > create a dict and fill it in the convert function. > > Anyhow, my best guess for the segfault is that you have an infinite > recursion - when you call object(scene), it needs to look for a to-python > converter in the registry. ?So it finds yours, which calls object(scene). > ?Repeat. > > Jim > > > > > On 08/28/2011 08:22 PM, Josh Stratton wrote: >> >> I'm getting an error when I try to pass down my object that results in >> a seg fault. ?I've registered my class I'm sending down, but when I >> actually send it, my program exits at this line in the library right >> after I call the importFile() function... >> >> ? ? ? ? return call(get_managed_object(self, tag), >> BOOST_PP_ENUM_PARAMS_Z(1, N, a)); >> >> // here's the class I'm trying to send down >> class Scene >> { >> public: >> ? ? MeshP ? ? ? ? ? ? ? ? ? ? ? mesh(int key); >> ? ? void ? ? ? ? ? ? ? ? ? ? ? ?clearScene(); >> ? ? CameraP ? ? ? ? ? ? ? ? ? ? createCamera(QString name); >> ? ? MeshP ? ? ? ? ? ? ? ? ? ? ? createMesh(QString name); >> ? ? void ? ? ? ? ? ? ? ? ? ? ? ?setMesh(int meshKey, MeshP mesh) { >> _meshes[meshKey] = mesh; } >> ? ? QHashIterator ? ?meshes() { return >> QHashIterator(_meshes); } >> ? ? QHashIterator ?cameras() { return >> QHashIterator(_cameras); } >> ? ? CameraP ? ? ? ? ? ? ? ? ? ? fetchCamera(QString name); >> ? ? QList ? ? ? ? ? ? ? importExtensions(); >> ? ? void ? ? ? ? ? ? ? ? ? ? ? ?importFile(QString fileName); >> ? ? void ? ? ? ? ? ? ? ? ? ? ? ?evalPythonFile(QString fileName); >> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? Scene(); >> protected: >> ? ? int ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?uniqueCameraKey(); >> ? ? int ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?uniqueMeshKey(); >> ? ? QString ? ? ? ? ? ? ? ? ? ? ? ? ? 
?uniqueName(QString prefix); >> >> private: >> ? ? QHash ? ? ? ? ? ? ? ? ? ?_meshes; >> ? ? QHash ? ? ? ? ? ? ? ? ?_cameras; >> ? ? //QHash ? ? ? _lights; >> ? ? QSet ? ? ? ? ? ? ? ? ? ? ? _names; >> // ? ?PythonQtObjectPtr ? ? ? ? ? ? ? ? ?_context; >> ? ? object ? ? ? ? ? ? ? ? ? ? ? ? ? ? _pyMainModule; >> ? ? object ? ? ? ? ? ? ? ? ? ? ? ? ? ? _pyMainNamespace; >> public slots: >> ? ? void ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? pythonStdOut(const QString&s) >> { std::cout<< ?s.toStdString()<< ?std::flush; } >> ? ? void ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? pythonStdErr(const QString&s) >> { std::cout<< ?s.toStdString()<< ?std::flush; } >> }; >> >> // first I create the mapping, which I'm not sure is correct, trying >> to follow: >> http://misspent.wordpress.com/2009/09/27/how-to-write-boost-python-converters/ >> struct SceneToPython >> { >> ? ? static PyObject* convert(Scene const& ?scene) >> ? ? { >> ? ? ? ? return boost::python::incref(boost::python::object(scene).ptr()); >> ? ? } >> }; >> >> // then I register it >> ? ? boost::python::to_python_converter(); >> >> // then I send it down from inside my Scene object >> ? ? try { >> ? ? ? ? object processFileFunc = >> _pyMainModule.attr("MeshImporter").attr("processFile"); >> ? ? ? ? processFileFunc(this, fileName); // seems to error here >> ? ? } catch (boost::python::error_already_set const&) { >> ? ? ? ? QString perror = parse_python_exception(); >> ? ? ? ? std::cerr<< ?"Error in Python: "<< ?perror.toStdString()<< >> ?std::endl; >> ? ? } >> >> I'm not really sure what actually is wrong besides something being >> setup incorrectly. ?Do I need to make a python-to-C++ converter as >> well even if I'm not sending it back to C++? ?Or is my convert() >> function just improperly implemented? ?I wasn't sure how much I need >> to actually get it to map correctly. ?Thanks. >> _______________________________________________ >> Cplusplus-sig mailing list >> Cplusplus-sig at python.org >> http://mail.python.org/mailman/listinfo/cplusplus-sig > > From super24bitsound at hotmail.com Tue Aug 30 16:58:04 2011 From: super24bitsound at hotmail.com (Jay Riley) Date: Tue, 30 Aug 2011 10:58:04 -0400 Subject: [C++-sig] Boost Python loss of values In-Reply-To: <4E57FB4B.6080404@gmail.com> References: , , , <4E56ADFC.3030701@gmail.com>, , <4E57FB4B.6080404@gmail.com> Message-ID: Self is indeed a PyObject* this is a bit confusing > Date: Fri, 26 Aug 2011 13:00:11 -0700 > From: talljimbo at gmail.com > To: cplusplus-sig at python.org > Subject: Re: [C++-sig] Boost Python loss of values > > On 08/26/2011 08:27 AM, Jay Riley wrote: > > Hi Jim, > > > > Thanks for the suggestion, unfortunately it didn't work. It really feels > > like it's making a copy for some reason as once I return to the > > > > int AttackWrapper::CalculateDamage(const > > std::vector& users, > > Game::Battles::BattleCharacter* target, const > > std::vector& targets, Game::Battles::BattleField > > *field) > > { > > return call_method(self, "CalculateDamage", users, ptr(target), > > targets, ptr(field)); > > } > > > > function, the value are back to their expected value. Slicing wouldn't > > be a problem here would it, since Hit is a member of the base class anyways? > > > > Yes, that's right. What exactly is "self", above? I assume it's a data > member of AttackWrapper of type PyObject *, which shouldn't produce a > copy, but if it's something else, well, that could be a factor. 
> > Jim > > > > > > Date: Thu, 25 Aug 2011 13:18:04 -0700 > > > From: talljimbo at gmail.com > > > To: cplusplus-sig at python.org > > > CC: super24bitsound at hotmail.com > > > Subject: Re: [C++-sig] Boost Python loss of values > > > > > > On 08/25/2011 04:17 AM, Jay Riley wrote: > > > > > > > > > > > And the python exposing is done as follows: > > > > > > > > class_, bases > > > > >("Attack") > > > > .def("CalculateDamage", &AttackWrapper::CalculateDamageDefault); > > > > > > > > > > This bit looks a little suspect, and I'm surprised that it compiles - > > > class_ should only take 4 arguments if one of them is boost::noncopyable. > > > > > > I think you mean: > > > > > > class_< Attack, boost::shared_ptr, bases > > > > (...) > > > > > > See > > > > > > http://www.boost.org/doc/libs/1_47_0/libs/python/doc/v2/class.html > > > > > > for details of the arguments to class_. > > > > > > I don't have a good idea as to why this would cause the problem you're > > > seeing (maybe you're slicing your AttackWrapper instances into Attack > > > instances?) but I'd recommend fixing it first. > > > > > > Good Luck! > > > > > > Jim Bosch > > > > > > _______________________________________________ > > Cplusplus-sig mailing list > > Cplusplus-sig at python.org > > http://mail.python.org/mailman/listinfo/cplusplus-sig > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig -------------- next part -------------- An HTML attachment was scrubbed... URL: From talljimbo at gmail.com Tue Aug 30 19:42:49 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 30 Aug 2011 10:42:49 -0700 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <201108301026.49897.hans_meine@gmx.net> References: <4E56B7A6.2030008@gmail.com> <4E5BC132.3000503@seefeld.name> <4E5C866D.6060809@gmail.com> <201108301026.49897.hans_meine@gmx.net> Message-ID: <4E5D2119.20307@gmail.com> On 08/30/2011 01:26 AM, Hans Meine wrote: > Am Dienstag, 30. August 2011, 08:42:53 schrieb Jim Bosch: >> I don't see how having a global registry makes things any worse from an >> ODR perspective. It seems like it's mostly just the same "where do >> templates get instantiated" problem that compilers and linkers always >> have to deal with. > > There is a big difference in the case that two separate extensions (want to) > use different converters, though. > >> In other words, instantiating something like: >> >> class_< std::vector > >> >> in two shared libraries doesn't seem any different from instantiating >> something like: >> >> std::vector >> >> in two shared libraries, and we basically always have to leave it up to >> the compiler/linker to solve the latter. > > Here you assume that class_< std::vector >(?) will generate the same code > in both cases. For std::vector, this holds, but for > class_< std::vector >(?), the (?) part?which you left out?is crucial. > They aren't different as far as the One Definition Rule goes. You're just passing different arguments to the constructor, not providing a different definition. And I think that's true in both cases; sure, the class_ code is much more complex, but it's still perfectly legal C++, even across shared library boundaries. > In practice, it happened to me with two extensions which did not even agree on > the /type/ of converter (class_ vs. rvalue) for the same class > (vigra::TinyVector, a small fixed-size vector). 
> > Although it would be nice to fix/allow this (local registries, which can still > be imported/reused as a dependency), it /may/ be an even better fix to change > the extensions and make them agree on the "optimal" converter. (The obvious > problem is of course the definition of "optimal" which may be different > depending on the application.) The problem with this is the very reason for > the variety of std::vector converters out there (e.g. focusing on conversion > to native Python types, conversion speed, or in-place > writability/modifications). > I agree with all of the above, and these could all be solved by my proposal of having per-module registrations take precedence over gloal registrations. Having a single optimal converter is clearly the best solution when such a thing exists, and we can anticipate the type, so adding more built-in converters too Boost.Python is part of the solution. But this doesn't have anything to do with the One Definition Rule, and I still don't see that we're having any more problems in that regard than template libraries usually do. Jim From talljimbo at gmail.com Tue Aug 30 19:57:12 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 30 Aug 2011 10:57:12 -0700 Subject: [C++-sig] sending a c++ class to a python function In-Reply-To: References: <4E5BD597.3070709@gmail.com> Message-ID: <4E5D2478.2010706@gmail.com> On 08/30/2011 07:45 AM, Josh Stratton wrote: > Oh, okay. So I can create a module... > > #include "scene.h" > BOOST_PYTHON_MODULE(scene) > { > class_("Scene"); > } > > and then import it (even though) in my python I normally don't import > things I'm not creating. I'm assuming this is a boost-python thing to > get the class into scope, which gets rid of the "converter found for > C++ type: Scene" error. I don't really understand what you mean; if you want to use code that was defined in another Python module, you always have to import it. It's just that in this case the module happens to be written in C++. > from scene import Scene # in my python code > > In my terminal I get... > > Error in Python:: Python argument types in > Mesh.buildByIndex(Scene, PrimitiveParts) > did not match C++ signature: > buildByIndex(QSharedPointer, PrimitiveParts): File > "", line 21, in processFile > > for this function: > > static void buildByIndex(SceneP scene, > PrimitiveParts parts); > > The function I'm calling has SceneP typedeffed as a > QSharedPointer and I'm assuming this error is because I haven't > made a Scene to QSharedPointer converter, which should just > wrap the Scene object when it comes in requiring a custom conversion > function. This is another case where you probably want to use something other than a custom converter (and if you did use a custom converter, you'd want a from-python lvalue converter, not a to-python converter, anyways, and those are defined differently). What you probably want to do is tell Boost.Python that QSharedPointer is a smart pointer, and tell it to wrap your Scene objects inside one (at least if you're going to have to deal with them a lot). To do that, you'll want to specialize boost::python::pointee and provide a get_pointer function: namespace boost { namespace python { template struct pointee< QSharedPointer > { typedef T type; }; }} // in some namespace where ADL will find it... template inline Scene * get_pointer(QSharedPointer const & p) { return p.get(); // or whatever } Then, when you define the class, use: class_< Scene, QSharedPointer >(...) 
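Putting the pieces together, roughly (untested; also double-check the accessor for the raw pointer on QSharedPointer, I believe it is data() rather than get(), but use whatever your Qt version provides):

#include <boost/python.hpp>
#include <QSharedPointer>
#include "scene.h"

namespace boost { namespace python {

// Tell Boost.Python what a QSharedPointer<T> points at.
template <typename T>
struct pointee< QSharedPointer<T> >
{
    typedef T type;
};

}} // namespace boost::python

// Needs to be visible to argument-dependent lookup, i.e. live in the same
// namespace as QSharedPointer (the global namespace here).
template <typename T>
inline T * get_pointer(QSharedPointer<T> const & p)
{
    return p.data();   // the raw pointer held by the shared pointer
}

BOOST_PYTHON_MODULE(scene)
{
    using namespace boost::python;

    // Holding Scene by QSharedPointer<Scene> makes arguments declared as
    // SceneP (i.e. QSharedPointer<Scene>) convertible from Python.
    class_< Scene, QSharedPointer<Scene> >("Scene")
        /* .def(...) whatever methods you want to expose */ ;
}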
You can find more information in the reference documentation: http://www.boost.org/doc/libs/1_47_0/libs/python/doc/v2/class.html#classes Good Luck! Jim From talljimbo at gmail.com Tue Aug 30 20:06:50 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 30 Aug 2011 11:06:50 -0700 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E5CC3B5.1010202@seefeld.name> References: <4E56B7A6.2030008@gmail.com> <4E582E9F.6030808@gmail.com> <4E595CB3.1020802@seefeld.name> <4E5A859D.8000400@gmail.com> <4E5BC132.3000503@seefeld.name> <4E5C866D.6060809@gmail.com> <4E5CC3B5.1010202@seefeld.name> Message-ID: <4E5D26BA.2030601@gmail.com> On 08/30/2011 04:04 AM, Stefan Seefeld wrote: > On 08/30/2011 02:42 AM, Jim Bosch wrote: >> On 08/29/2011 09:41 AM, Stefan Seefeld wrote: >>> On 08/28/2011 02:14 PM, Jim Bosch wrote: >>>> >>>> To solve it, I think you'd want anything registered by a specific >>>> module to appear both in that module's registry and the global >>>> registry, with the module's registry taking precedence. Are there any >>>> cases where you'd want something only to appear in the module-specific >>>> registry? >>> >>> Anything that gets injected into the global registry is prone to violate >>> the ODR. Of course, also adding it into a local registry and letting >>> that have precedence may mask the ODR violation issue. But in that case, >>> it isn't clear why we'd add it to the global registry at all. >>> >> >> The reason to add it to the global registry is so if you know one of >> the modules you depend on registered a converter, you don't have to do >> it yourself. > > As I suggested in reply to Dave, I think it would be better to require > modules that depend on external converters to explicitly import them. > I can understand the motivation for requiring manual imports of conversions, as a sort of "explicit is better than implicit" argument. But I don't think it saves us anything in terms of ODR violations. If I'm understanding the ODR issue (or lack thereof) correctly, we can make everyone mostly happy by providing an option whether conversion registrations should go into a per-module registry, a global one, or both. We'd then allow modules to import the per-module registry of another module into their own if they like. In all cases, we'd look up things in the per-module registry first, and then go to the global registry (where things registered by class_ would typically go). I don't like the idea of Python users being able to tell all modules to use a different set of conversions than the ones their authors intended. That seems fraught with peril, and unnecessary if module builders can pull converters from modules they depend on. Jim From talljimbo at gmail.com Tue Aug 30 20:09:09 2011 From: talljimbo at gmail.com (Jim Bosch) Date: Tue, 30 Aug 2011 11:09:09 -0700 Subject: [C++-sig] Boost Python loss of values In-Reply-To: References: , , , <4E56ADFC.3030701@gmail.com>, , <4E57FB4B.6080404@gmail.com> Message-ID: <4E5D2745.1010604@gmail.com> On 08/30/2011 07:58 AM, Jay Riley wrote: > Self is indeed a PyObject* > > this is a bit confusing > I'm afraid my next piece of advice is to simplify this like crazy, and see if you can replicate the problem in a very minimal example. If you can get it small enough, I can take a look at the whole thing and see if I can figure out what's going on. 
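Something with roughly this shape is what I have in mind (condensed from the snippets you've posted; names invented, untested):

#include <boost/python.hpp>

struct Character
{
    int Hit;
};

struct Attack
{
    virtual ~Attack() {}
    virtual int CalculateDamage(Character* target) { return target->Hit; }
};

struct AttackWrapper : Attack
{
    AttackWrapper(PyObject* self_) : self(self_) {}

    // Forward the virtual call to the Python override, passing the target
    // by pointer (ptr) so no copy should be made.
    int CalculateDamage(Character* target)
    {
        return boost::python::call_method<int>(self, "CalculateDamage",
                                               boost::python::ptr(target));
    }
    int CalculateDamageDefault(Character* target)
    {
        return Attack::CalculateDamage(target);
    }

    PyObject* self;
};

BOOST_PYTHON_MODULE(minimal)
{
    using namespace boost::python;

    class_<Character>("Character")
        .def_readwrite("Hit", &Character::Hit);

    class_<Attack, AttackWrapper, boost::noncopyable>("Attack")
        .def("CalculateDamage", &Attack::CalculateDamage,
             &AttackWrapper::CalculateDamageDefault);
}

Then set Hit on a Character from C++, call CalculateDamage(&c) through an Attack* that actually points at a Python-derived instance, and have the Python override simply return target.Hit. If the value that comes back is wrong, the bug is isolated; if it is right, the problem lives in whatever got stripped out.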
Jim > > Date: Fri, 26 Aug 2011 13:00:11 -0700 > > From: talljimbo at gmail.com > > To: cplusplus-sig at python.org > > Subject: Re: [C++-sig] Boost Python loss of values > > > > On 08/26/2011 08:27 AM, Jay Riley wrote: > > > Hi Jim, > > > > > > Thanks for the suggestion, unfortunately it didn't work. It really > feels > > > like it's making a copy for some reason as once I return to the > > > > > > int AttackWrapper::CalculateDamage(const > > > std::vector& users, > > > Game::Battles::BattleCharacter* target, const > > > std::vector& targets, Game::Battles::BattleField > > > *field) > > > { > > > return call_method(self, "CalculateDamage", users, ptr(target), > > > targets, ptr(field)); > > > } > > > > > > function, the value are back to their expected value. Slicing wouldn't > > > be a problem here would it, since Hit is a member of the base class > anyways? > > > > > > > Yes, that's right. What exactly is "self", above? I assume it's a data > > member of AttackWrapper of type PyObject *, which shouldn't produce a > > copy, but if it's something else, well, that could be a factor. > > > > Jim > > > > > > > > > > Date: Thu, 25 Aug 2011 13:18:04 -0700 > > > > From: talljimbo at gmail.com > > > > To: cplusplus-sig at python.org > > > > CC: super24bitsound at hotmail.com > > > > Subject: Re: [C++-sig] Boost Python loss of values > > > > > > > > On 08/25/2011 04:17 AM, Jay Riley wrote: > > > > > > > > > > > > > > And the python exposing is done as follows: > > > > > > > > > > class_, > bases > > > > > >("Attack") > > > > > .def("CalculateDamage", &AttackWrapper::CalculateDamageDefault); > > > > > > > > > > > > > This bit looks a little suspect, and I'm surprised that it compiles - > > > > class_ should only take 4 arguments if one of them is > boost::noncopyable. > > > > > > > > I think you mean: > > > > > > > > class_< Attack, boost::shared_ptr, bases > > > > > (...) > > > > > > > > See > > > > > > > > http://www.boost.org/doc/libs/1_47_0/libs/python/doc/v2/class.html > > > > > > > > for details of the arguments to class_. > > > > > > > > I don't have a good idea as to why this would cause the problem > you're > > > > seeing (maybe you're slicing your AttackWrapper instances into Attack > > > > instances?) but I'd recommend fixing it first. > > > > > > > > Good Luck! > > > > > > > > Jim Bosch > > > > > > > > > _______________________________________________ > > > Cplusplus-sig mailing list > > > Cplusplus-sig at python.org > > > http://mail.python.org/mailman/listinfo/cplusplus-sig > > > > _______________________________________________ > > Cplusplus-sig mailing list > > Cplusplus-sig at python.org > > http://mail.python.org/mailman/listinfo/cplusplus-sig > > > _______________________________________________ > Cplusplus-sig mailing list > Cplusplus-sig at python.org > http://mail.python.org/mailman/listinfo/cplusplus-sig From s_sourceforge at nedprod.com Tue Aug 30 20:14:49 2011 From: s_sourceforge at nedprod.com (Niall Douglas) Date: Tue, 30 Aug 2011 19:14:49 +0100 Subject: [C++-sig] [Boost.Python v3] Features and Scope In-Reply-To: <4E5D2119.20307@gmail.com> References: <4E56B7A6.2030008@gmail.com>, <201108301026.49897.hans_meine@gmx.net>, <4E5D2119.20307@gmail.com> Message-ID: <4E5D2899.22894.455204C7@s_sourceforge.nedprod.com> On 30 Aug 2011 at 10:42, Jim Bosch wrote: > I agree with all of the above, and these could all be solved by my > proposal of having per-module registrations take precedence over gloal > registrations. 
Having a single optimal converter is clearly the best > solution when such a thing exists, and we can anticipate the type, so > adding more built-in converters too Boost.Python is part of the solution. > > But this doesn't have anything to do with the One Definition Rule, and I > still don't see that we're having any more problems in that regard than > template libraries usually do. What I have done in my own libraries is to have per-process type registries keep their data on the basis of shared object or DLL. The "current" SO/DLL can be determined by having an inlined thunk template function pass in the address of some guaranteed per-SO/DLL data; e.g. a static char * does just fine. One then looks up the pointer address in a list of loaded SO/DLLs so one knows in which context a particular type resolution is being performed. This sounds like a lot of machinery, and getting the data storage right does take time at the start, but in the long run it saves a huge amount of complexity. Once it's working it's a godsend because third-party libraries - which often break ODR whether intentionally or unintentionally - work as expected. I have a whole load of code implementing this in open source if consulting an implementation would be useful to you, and of course I'm always here by email. Long-time members of this mailing list will know what I'm referring to. HTH, Niall -- Technology & Consulting Services - ned Productions Limited. http://www.nedproductions.biz/. VAT reg: IE 9708311Q. Company no: 472909.
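For reference, a minimal sketch of the idiom being described (illustrative only, not taken from Niall's code; whether the static really is duplicated per SO/DLL depends on symbol visibility and export settings on the platform in question):

#include <map>
#include <string>

// Illustrative stand-in for whatever a real converter record contains.
struct ConverterEntry
{
    void* to_python;
    void* from_python;
};

// Every extension module compiles its own copy of this inline function.
// As long as the symbol stays module-local (hidden visibility on ELF, the
// normal situation in a Windows DLL), the local static is duplicated per
// SO/DLL, so its address identifies the module making the call.
inline const void * this_module_key()
{
    static const char key = 0;
    return &key;
}

// A process-wide registry can then keep one converter table per module key
// and consult the calling module's table before falling back to a shared one.
typedef std::map<std::string, ConverterEntry>  ConverterTable;
typedef std::map<const void *, ConverterTable> PerModuleRegistry;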