From greglandrum at earthlink.net Sat Mar 2 01:08:50 2002 From: greglandrum at earthlink.net (Greg Landrum) Date: Fri, 01 Mar 2002 16:08:50 -0800 Subject: [C++-sig] profiling C++ extensions Message-ID: <5.1.0.14.2.20020301160432.020fc008@mail.earthlink.net> [Linux, python 2.1] Hi All, I'm not sure this is the appropriate forum for this question, but I figured I'd try here first. I've got a Boostified C++ class that I'm using from Python. The class supports pickling, but restoring things from pickles seems to be really, really slow. I figured that I'd profile the code and find out where the time is being spent, but I ran into a wall. I built both my extension modules and python itself with -pg. Running a test case and looking at the gprof output only shows me numbers for the python interpreter itself. This is none too edifying. So, my question is: is there any way to trick the profiler into gathering data for the extension modules as well as the interpreter? Thanks, -greg ---- greg Landrum (greglandrum at earthlink.net) Software Carpenter/Computational Chemist From david.abrahams at rcn.com Sat Mar 2 05:30:51 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Fri, 1 Mar 2002 23:30:51 -0500 Subject: [C++-sig] profiling C++ extensions References: <5.1.0.14.2.20020301160432.020fc008@mail.earthlink.net> Message-ID: <221901c1c1a3$9cbd82a0$0500a8c0@boostconsulting.com> Hi Greg, Sorry, I don't know the answer, but would be interested if you find out. Since this question is totally independent of wrapping technology and this forum is rather Boost.Python-centric at the moment, you'd probably do better to ask on the general python-list. Regards, Dave ----- Original Message ----- From: "Greg Landrum" > > [Linux, python 2.1] > Hi All, > > I'm not sure this is the appropriate forum for this question, but I figured > I'd try here first. > > I've got a Boostified C++ class that I'm using from Python. 
The class > supports pickling, but restoring things from pickles seems to be really, > really slow. I figured that I'd profile the code and find out where the > time is being spent, but I ran into a wall. > > I built both my extension modules and python itself with -pg. Running a > test case and looking at the gprof output only shows me numbers for the > python interpreter itself. This is none too edifying. > > So, my question is: is there any way to trick the profiler into gathering > data for the extension modules as well as the interpreter? > > Thanks, > -greg > ---- > greg Landrum (greglandrum at earthlink.net) > Software Carpenter/Computational Chemist > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig From greglandrum at mindspring.com Sun Mar 3 01:22:20 2002 From: greglandrum at mindspring.com (greg Landrum) Date: Sat, 02 Mar 2002 16:22:20 -0800 Subject: [C++-sig] profiling C++ extensions In-Reply-To: <221901c1c1a3$9cbd82a0$0500a8c0@boostconsulting.com> References: <5.1.0.14.2.20020301160432.020fc008@mail.earthlink.net> Message-ID: <5.1.0.14.2.20020302160208.0243d2c0@mail.mindspring.com> Thanks for the response Dave. I've posted to the python list and will see if I get anything useful. Based upon some googling I did this morning, I'm not hopeful. However, I did a bit of screwing around and I have narrowed my depickling problems down a bit. I'm now pretty sure that the inefficiency is taking place in the Boost layer. Here's why I think this. I've got a C++ library which I've wrapped with both Boost and CXX (I did the CXX wrapper long ago and then switched to Boost because I like it a *lot* better than CXX). I'm storing instances of this class (molecules) in ZODB, which means I have to be able to pickle it. That's no problem... I've got the pickling stuff working just fine. However, depickling molecules is really slow. 
I wanted to do the profiling to track down the source of this slowness. What I did instead is set up some boundary cases. First I tried depickling 1e5 2-tuples, to get a feeling for pickle overhead, this takes about 1.4 seconds on my machine. Then I tried instantiating 1e5 instances of a python class (just an empty class). This takes about 0.3 seconds. Then I tried instantiating 1e5 instances of my molecule class using its default constructor (which doesn't really do any work). If I use the CXX version of the library, this takes 0.9 seconds. Doing the same thing with the Boost wrapped version of the library takes 6.7 seconds. This is a killer for me. I need to be able to, on a regular basis, loop through 1e5 or 1e6 molecules and this object construction overhead is overwhelming the actual processing I'm doing. (the bright side is that I don't need to worry about optimizing my own code any further hahahahah ) Dave, I'm happy to do what I can to figure out how to solve this problem (I'd really rather keep using Boost), but I'd like to get some feeling from you as to whether or not it's even solvable. The timing information here was collected on a Win2K machine using MSVC++6, Boost 1.24 and Python 2.1. Thanks, -greg From david.abrahams at rcn.com Sun Mar 3 01:38:46 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Sat, 2 Mar 2002 19:38:46 -0500 Subject: [C++-sig] profiling C++ extensions References: <5.1.0.14.2.20020301160432.020fc008@mail.earthlink.net> <5.1.0.14.2.20020302160208.0243d2c0@mail.mindspring.com> Message-ID: <24c101c1c24b$c8cedb20$0500a8c0@boostconsulting.com> ----- Original Message ----- From: "greg Landrum" To: Sent: Saturday, March 02, 2002 7:22 PM Subject: Re: [C++-sig] profiling C++ extensions > > Thanks for the response Dave. I've posted to the python list and will see > if I get anything useful. Based upon some googling I did this morning, I'm > not hopeful. 
> > However, I did a bit of screwing around and I have narrowed my depickling > problems down a bit. I'm now pretty sure that the inefficiency is taking > place in the Boost layer. Here's why I think this. > > I've got a C++ library which I've wrapped with both Boost and CXX (I did > the CXX wrapper long ago and then switched to Boost because I like it a > *lot* better than CXX). I'm storing instances of this class (molecules) in > ZODB, which means I have to be able to pickle it. That's no problem... > I've got the pickling stuff working just fine. However, depickling > molecules is really slow. I wanted to do the profiling to track down the > source of this slowness. > > What I did instead is set up some boundary cases. First I tried depickling > 1e5 2-tuples, to get a feeling for pickle overhead, this takes about 1.4 > seconds on my machine. Then I tried instantiating 1e5 instances of > a python class (just an empty class). This takes about 0.3 seconds. Then > I tried instantiating 1e5 instances of my molecule class using its default > constructor (which doesn't really do any work). If I use the CXX version > of the library, this takes 0.9 seconds. Doing the same thing with the > Boost wrapped version of the library takes 6.7 seconds. This is a killer > for me. I need to be able to, on a regular basis, loop through 1e5 or 1e6 > molecules and this object construction overhead is overwhelming the actual > processing I'm doing. (the bright side is that I don't need to worry about > optimizing my own code any further hahahahah ) > > Dave, I'm happy to do what I can to figure out how to solve this problem > (I'd really rather keep using Boost), but I'd like to get some feeling from > you as to whether or not it's even solvable. I don't have a reason to be pessimistic, if that helps, but Ralf Kunstleve is the Boost.Python pickle guy, and he knows a bit more than I. Also I think he's planning to address pickling for Boost.Python V2 in the near future. 
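The boundary cases quoted above are easy to reproduce with a small harness along these lines (modern Python shown for illustration; `Empty` is a hypothetical stand-in for the wrapped molecule class, which is not available here):

```python
import pickle
import time

class Empty:
    """Stand-in for a wrapped extension class with a trivial constructor."""
    pass

def time_depickle_tuples(n):
    # Pickle n 2-tuples up front, then time unpickling all of them.
    blobs = [pickle.dumps((i, i + 1)) for i in range(n)]
    start = time.perf_counter()
    objs = [pickle.loads(b) for b in blobs]
    return time.perf_counter() - start, objs

def time_instantiate(cls, n):
    # Time n bare constructions of cls().
    start = time.perf_counter()
    objs = [cls() for _ in range(n)]
    return time.perf_counter() - start, objs

N = 100000
t_tuples, tuples = time_depickle_tuples(N)
t_empty, empties = time_instantiate(Empty, N)
print("depickling %d 2-tuples:        %.2f sec" % (N, t_tuples))
print("instantiating %d Empty objects: %.2f sec" % (N, t_empty))
```

The absolute numbers on a modern machine will differ from those reported in the thread; it is the ratios between the cases that isolate the wrapper overhead.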
I wonder what your CXX pickling interface looks like as compared to your Boost interface? How many python functions get called with the Boost version? I guess actual profile info would be more helpful. Have you tried Intel's profiling tools? http://www.intel.com/software/products/vtune/index.htm?iid=ipp_home+soft_vtune& I think there's a free beta. From greglandrum at mindspring.com Sun Mar 3 04:22:30 2002 From: greglandrum at mindspring.com (greg Landrum) Date: Sat, 02 Mar 2002 19:22:30 -0800 Subject: [C++-sig] profiling C++ extensions In-Reply-To: <24c101c1c24b$c8cedb20$0500a8c0@boostconsulting.com> References: <5.1.0.14.2.20020301160432.020fc008@mail.earthlink.net> <5.1.0.14.2.20020302160208.0243d2c0@mail.mindspring.com> Message-ID: <5.1.0.14.2.20020302191927.01e1d688@mail.mindspring.com> At 04:38 PM 3/2/2002, David Abrahams wrote: >I don't have a reason to be pessimistic, if that helps, but Ralf >Kunstleve is the Boost.Python pickle guy, and he knows a bit more than >I. Also I think he's planning to address pickling for Boost.Python V2 in >the near future. > >I wonder what your CXX pickling interface looks like as compared to your >Boost interface? How many python functions get called with the Boost >version? I haven't looked into this (at the moment I don't have pickling set up with the CXX wrapper, so I can't compare timings). I guess maybe I wasn't super clear in my last message because of my focus on pickling/depickling. The timing information I gave (.9 seconds with CXX, 6.7 with Boost) was for simple object instantiation, not pickling/depickling. >I guess actual profile info would be more helpful. Have you tried >Intel's profiling tools? >http://www.intel.com/software/products/vtune/index.htm?iid=ipp_home+soft_vtune& That's a good idea. I'll see what I can find. I'll also spend a bit more time and see if I can figure out the huge timing differences in the CXX and boost object instantiations. 
-greg From david.abrahams at rcn.com Sun Mar 3 05:39:48 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Sat, 2 Mar 2002 23:39:48 -0500 Subject: [C++-sig] profiling C++ extensions References: <5.1.0.14.2.20020301160432.020fc008@mail.earthlink.net> <5.1.0.14.2.20020302160208.0243d2c0@mail.mindspring.com> <5.1.0.14.2.20020302191927.01e1d688@mail.mindspring.com> Message-ID: <24d601c1c26d$e077b590$0500a8c0@boostconsulting.com> ----- Original Message ----- From: "greg Landrum" > At 04:38 PM 3/2/2002, David Abrahams wrote: > > >I don't have a reason to be pessimistic, if that helps, but Ralf > >Kunstleve is the Boost.Python pickle guy, and he knows a bit more than > >I. Also I think he's planning to address pickling for Boost.Python V2 in > >the near future. > > > >I wonder what your CXX pickling interface looks like as compared to your > >Boost interface? How many python functions get called with the Boost > >version? > > I haven't looked into this (at the moment I don't have pickling set up with > the CXX wrapper, so I can't compare timings). > > I guess maybe I wasn't super clear in my last message because of my focus > on pickling/depickling. The timing information I gave (.9 seconds with > CXX, 6.7 with Boost) was for simple object instantiation, not > pickling/depickling. Oh. In that case, the structure of these instances may be quite significant. If you look at http://www.boost.org/libs/python/doc/data_structures.txt you can see that each Boost.Python extension instance contains both a Python dictionary and a vector with at least one element, which element points to an additional dynamically-allocated object that embeds your exported C++ object. That's four dynamic allocations per object. If your CXX object does not have an attribute dictionary, it can take as few as one dynamic allocation per object. 
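The per-instance attribute dictionary Dave describes has a rough pure-Python analogy: a `__slots__` class allocates no instance `__dict__`, much like a dict-less extension type. This is an analogy for the allocation difference only, not a measurement of Boost.Python or CXX, and the class names are invented for illustration:

```python
import timeit

class WithDict:
    # Ordinary class: every instance carries its own attribute dictionary.
    def __init__(self):
        self.x = 1

class WithSlots:
    # __slots__ suppresses the per-instance __dict__ entirely.
    __slots__ = ("x",)
    def __init__(self):
        self.x = 1

# The __slots__ instance really has no attribute dictionary.
assert hasattr(WithDict(), "__dict__")
assert not hasattr(WithSlots(), "__dict__")

t_dict = timeit.timeit(WithDict, number=100000)
t_slots = timeit.timeit(WithSlots, number=100000)
print("1e5 constructions, with __dict__: %.3f sec" % t_dict)
print("1e5 constructions, __slots__:     %.3f sec" % t_slots)
```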
Since you're instantiating 1e5 of these, I imagine it could make a difference that the same memory pool is being used over and over (as opposed to the Boost.Python case, where some of the allocations are happening with operator new and some with Python's allocator). As likely as this seems, this sort of speculation is always dangerous. Profiling is always better. > >I guess actual profile info would be more helpful. Have you tried > >Intel's profiling tools? > >http://www.intel.com/software/products/vtune/index.htm?iid=ipp_home+soft_vtune& > > That's a good idea. I'll see what I can find. I'll also spend a bit more > time and see if I can figure out the huge timing differences in the CXX and > boost object instantiations. If you find out that it is in fact an allocation issue, you might be interested in using Boost.Python v2 instead. For classes exposed in the normal way, it already takes one fewer allocation. Furthermore, it can work with "traditional" extension types of the kind generated by CXX, so you could easily avoid the extra allocations by creating an ordinary extension type. I intend at some point to make it possible to expose C++ classes as fully-subclassable new-style classes but which use only a single dynamic allocation (and consequently have no dictionary), and would be happy to work with you if you want to attack that problem earlier. -Dave From rwgk at yahoo.com Sun Mar 3 08:08:39 2002 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Sat, 2 Mar 2002 23:08:39 -0800 (PST) Subject: [C++-sig] profiling C++ extensions In-Reply-To: <5.1.0.14.2.20020302191927.01e1d688@mail.mindspring.com> Message-ID: <20020303070839.53337.qmail@web20202.mail.yahoo.com> --- greg Landrum wrote: > I haven't looked into this (at the moment I don't have pickling set up with > the CXX wrapper, so I can't compare timings). > > I guess maybe I wasn't super clear in my last message because of my focus > on pickling/depickling. 
The timing information I gave (.9 seconds with > CXX, 6.7 with Boost) was for simple object instantiation, not > pickling/depickling. Crossing the language boundary 100000 times to do a trivial operation does not seem to be the best way of getting the most out of combining Python and C++. Have you considered a design where you pass (and pickle) C++ arrays of your objects? Ralf __________________________________________________ Do You Yahoo!? Yahoo! Sports - sign up for Fantasy Baseball http://sports.yahoo.com From david.abrahams at rcn.com Mon Mar 4 20:13:28 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Mon, 4 Mar 2002 14:13:28 -0500 Subject: [C++-sig] Virtual Functions Message-ID: <025201c1c3b0$ad98b090$0500a8c0@boostconsulting.com> Hi Min Xu, Last week I promised you that virtual function support would be implemented in a few days. It is still not (some more-pressing matters came up). I expect to have it by the middle of this week. Apologies, Dave +---------------------------------------------------------------+ David Abrahams C++ Booster (http://www.boost.org) O__ == Pythonista (http://www.python.org) c/ /'_ == resume: http://users.rcn.com/abrahams/resume.html (*) \(*) == email: david.abrahams at rcn.com +---------------------------------------------------------------+ From minxu at sci.ccny.cuny.edu Mon Mar 4 20:58:58 2002 From: minxu at sci.ccny.cuny.edu (Min Xu) Date: 04 Mar 2002 14:58:58 -0500 Subject: [C++-sig] Virtual Functions In-Reply-To: <025201c1c3b0$ad98b090$0500a8c0@boostconsulting.com> References: <025201c1c3b0$ad98b090$0500a8c0@boostconsulting.com> Message-ID: <1015271938.13582.8.camel@lax6> Thank you very much for your concerns. I am kept busy last week and I will look into the size of generated .so library this week. 
By the way, when I try to compile using the target "release" it complains about: inspiron:/usr/local/src/boost/libs/python# jam -sBOOST_ROOT=/usr/local/src/boost -sTOOLS=gcc -sBUILD=release -sPYTHON_VERSION=2.2 -sPYTHON_ROOT=/usr/ libbpl.so: required property full incompatible with on However, a "debug" target is OK. thank you again. On Mon, 2002-03-04 at 14:13, David Abrahams wrote: > Hi Min Xu, > > Last week I promised you that virtual function support would be > implemented in a few days. It is still not (some more-pressing matters > came up). I expect to have it by the middle of this week. > > Apologies, > Dave > > +---------------------------------------------------------------+ > David Abrahams > C++ Booster (http://www.boost.org) O__ == > Pythonista (http://www.python.org) c/ /'_ == > resume: http://users.rcn.com/abrahams/resume.html (*) \(*) == > email: david.abrahams at rcn.com > +---------------------------------------------------------------+ > > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig > From david.abrahams at rcn.com Mon Mar 4 21:14:03 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Mon, 4 Mar 2002 15:14:03 -0500 Subject: [C++-sig] Virtual Functions References: <025201c1c3b0$ad98b090$0500a8c0@boostconsulting.com> <1015271938.13582.8.camel@lax6> Message-ID: <029401c1c3b9$32d8f550$0500a8c0@boostconsulting.com> ----- Original Message ----- From: "Min Xu" To: Sent: Monday, March 04, 2002 2:58 PM Subject: Re: [C++-sig] Virtual Functions > Thank you very much for your concerns. I am kept busy last week and I > will look into the size of generated .so library this week. 
> > By the way, when I try to compile using the target "release" it > complains about: > > inspiron:/usr/local/src/boost/libs/python# jam > -sBOOST_ROOT=/usr/local/src/boost -sTOOLS=gcc -sBUILD=release > -sPYTHON_VERSION=2.2 -sPYTHON_ROOT=/usr/ > libbpl.so: required property full incompatible > with on Good catch. Please update tools/build/features.jam and try again. From david.abrahams at rcn.com Tue Mar 5 23:24:17 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Tue, 5 Mar 2002 17:24:17 -0500 Subject: [C++-sig] Callbacks and virtual functions for Boost.Python v2 Message-ID: <073101c1c494$8066cff0$0500a8c0@boostconsulting.com> Here's the plan: I aim to provide an interface similar to that of Boost.Python v1's callback<>::call(...) for dealing with callbacks. The interface will look like: returning::call("method_name", self_object, a1, a2...); or returning::call(callable_object, a1, a2...); ARGUMENT HANDLING There is an issue concerning how to make Python objects from the arguments a1...aN. A new Python object must be created; should the C++ object be copied into that Python object, or should the Python object simply hold a reference/pointer to the C++ object? In general, the latter approach is unsafe, since the called function may store a reference to the Python object somewhere. If the Python object is used after the C++ object is destroyed, we'll crash Python. I plan to make the copying behavior the default, and to allow a non-copying behavior if the user writes boost::ref(a1) instead of a1 directly. At least this way, the user doesn't get dangerous behavior "by accident". 
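The hazard motivating the copy-by-default rule can be illustrated with a pure-Python analogy (the names here are invented, and this is not the proposed C++ interface): if the callee stores its argument, a copied argument is insulated from later changes to the original, while a by-reference argument is not.

```python
import copy

stored = []

def callee(obj):
    # The called Python function keeps a reference to its argument,
    # as a callback invoked from C++ might.
    stored.append(obj)

class Value:
    def __init__(self, n):
        self.n = n

v = Value(1)
callee(copy.deepcopy(v))   # "copying" behavior: the proposed safe default
callee(v)                  # "boost::ref-like" behavior: aliases the original
v.n = 99

assert stored[0].n == 1    # the copy never sees the later mutation
assert stored[1].n == 99   # the reference does
```

Copying by default trades speed for safety; writing `boost::ref(a1)` is the explicit opt-in to the faster but riskier aliasing behavior.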
It's also worth noting that the non-copying ("by-reference") behavior is in general only available for class types, and will fail at runtime with a Python exception if used otherwise** However, pointer types present a problem: My first thought is to refuse to compile if any aN has pointer type: after all, a user can always pass *aN to pass "by-value" or ref(*aN) to indicate a pass-by-reference behavior. However, this creates a problem for the expected NULL pointer => None conversion: it's illegal to dereference a null pointer value. We could use another construct, say "ptr(aN)", to deal with null pointers, but then what does it mean? We know what it does when aN is NULL, but it might either have by-value or by-reference behavior when aN is non-null. A reasonable compromise might be to give ptr(aN) reference behavior, and to ask people to write an expression such as aN ? to_python_value(*aN) : py::none() if they want the copying behavior. Of course, we'll want copying behavior (into a Python String object) by default when aN is char const*, but this exception should be easy enough to implement. I'm interested in hearing opinions about this compromise approach. RESULT HANDLING As for results, we have a similar problem: if ResultType is allowed to be a pointer or reference type, the lifetime of the object it refers to is probably being managed by a Python object. When that Python object is destroyed, our pointer dangles. The problem is particularly bad when the ResultType is char const* - the corresponding Python String object is typically uniquely-referenced, meaning that the pointer dangles as soon as returning::call() returns. Boost.Python v1 deals with this issue by refusing to compile any uses of callback::call(), but IMO this goes both too far and not far enough. 
It goes too far because there are cases where the owning String object survives beyond the call (just for instance when it's the name of a Python class), and it goes not far enough because we might just as well have the same problem with any returned pointer or reference. I propose to address this in Boost.Python v2 by 1. lifting the compile-time restriction on const char* callback returns 2. detecting the case when the reference count on the result Python object is 1 (meaning it could reference an object which is going to be destroyed) and throwing an exception inside of returning::call() when U is a pointer or reference type. I think this is acceptably safe because users have to explicitly specify a pointer/reference for U in returning, and they will be protected against dangles at runtime, at least long enough to get out of the returning::call() invocation. -Dave **It would be possible to make it fail at compile-time for non-class types such as int and char, but I'm not sure it's a good idea to impose this restriction yet. +---------------------------------------------------------------+ David Abrahams C++ Booster (http://www.boost.org) O__ == Pythonista (http://www.python.org) c/ /'_ == resume: http://users.rcn.com/abrahams/resume.html (*) \(*) == email: david.abrahams at rcn.com +---------------------------------------------------------------+ From greglandrum at earthlink.net Thu Mar 7 04:50:18 2002 From: greglandrum at earthlink.net (Greg Landrum) Date: Wed, 06 Mar 2002 19:50:18 -0800 Subject: [C++-sig] Profiling object instantiation... a preliminary report Message-ID: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> Re-establishing context: I have an extension class that I've wrapped using both Boost and Cxx. I need to be able to pickle and depickle large numbers (1e5-1e6) of these objects quickly for my target application (storing them in a ZODB and querying the resulting database). 
I discovered that simple instantiation of 1e5 objects took dramatically longer in Boost than in Cxx (6.7 seconds vs 0.9 seconds). I pulled down a copy of Intel's vtune profiler[1] and profiled my Boost and Cxx wrapped libraries to find the origin of the huge performance difference. After fiddling around for a while figuring out the appropriate magic to make vtune actually work, I found something surprising. I had exposed 4 different constructors for my Mol class. What I failed to realize is that when you overload functions (at least constructors) in BPL, this is handled using exceptions. This was at least partially responsible for the tremendous slowdown in the Boost wrapper of my code. When I commented out some constructors, rebuilt, and reprofiled my code I got a run time of 1.9 seconds, only twice the Cxx run time. I'm a lot happier with this, but I'll continue to do a bit of exploration to see if I can bring that down more. I'm rather enjoying vtune now that I've figured out what can be done with it. If anyone wants more detailed information, I'd be happy to provide it. -greg [1] The beta of vtune is available as a free download for linux and windows. These results are under Win2K using MSVC++ v6. ---- greg Landrum (greglandrum at earthlink.net) Software Carpenter/Computational Chemist From david.abrahams at rcn.com Thu Mar 7 15:41:16 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Thu, 7 Mar 2002 09:41:16 -0500 Subject: [C++-sig] Profiling object instantiation... a preliminary report References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> Message-ID: <02cd01c1c5e6$33135720$0500a8c0@boostconsulting.com> ----- Original Message ----- From: "Greg Landrum" > I had exposed 4 different constructors for my Mol class. What I failed to > realize is that when you overload functions (at least constructors) in BPL, > this is handled using exceptions. Ah, well, that's been changed for Boost.Python v2. 
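The pattern Greg tracked down — trying each overload in turn and using an exception to fall through to the next candidate — can be mimicked in pure Python to show why it is costly when early candidates usually fail. This is a sketch of the dispatch pattern only, with invented names, not Boost.Python's actual implementation:

```python
import timeit

def ctor_int(x):
    # Hypothetical overload taking something convertible to int.
    return ("int", int(x))

def ctor_str(x):
    # Hypothetical overload taking something convertible to str.
    return ("str", str(x))

OVERLOADS = [ctor_int, ctor_str]

def dispatch_by_exception(x):
    # Try each overload in turn; a failed argument conversion raises,
    # and we fall through to the next candidate.
    for f in OVERLOADS:
        try:
            return f(x)
        except (TypeError, ValueError):
            continue
    raise TypeError("no matching overload")

assert dispatch_by_exception("7") == ("int", 7)
assert dispatch_by_exception("abc") == ("str", "abc")

# When the first overload usually fails, every call pays for a
# raised-and-caught exception on top of the dispatch itself.
t = timeit.timeit(lambda: dispatch_by_exception("abc"), number=100000)
print("1e5 exception-dispatched calls: %.3f sec" % t)
```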
> This was at least partially responsible > for the tremendous slow down in the Boost wrapper of my code. Not surprising. > When I > commented out some constructors, rebuilt, and reprofiled my code I got a > run time of 1.9 seconds, only twice the Cxx run time. I'm a lot happier > with this, but I'll continue to do a bit of exploration to see if I can > bring that down more. Interesting. I'd be very interested to hear how Boost.Python v2 performs on your code, with or without overloads. -Dave From david.abrahams at rcn.com Fri Mar 8 16:51:07 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Fri, 8 Mar 2002 10:51:07 -0500 Subject: [C++-sig] V2 Tests have moved Message-ID: <069f01c1c6b9$109b30e0$0500a8c0@boostconsulting.com> FYI: The Jamfile for tests of Boost.Python v2 has moved from libs/python to libs/python/test. -Dave +---------------------------------------------------------------+ David Abrahams C++ Booster (http://www.boost.org) O__ == Pythonista (http://www.python.org) c/ /'_ == resume: http://users.rcn.com/abrahams/resume.html (*) \(*) == email: david.abrahams at rcn.com +---------------------------------------------------------------+ From rwgk at yahoo.com Wed Mar 13 03:04:23 2002 From: rwgk at yahoo.com (Ralf W. 
Grosse-Kunstleve) Date: Tue, 12 Mar 2002 18:04:23 -0800 (PST) Subject: [C++-sig] Visual C++ 6 error Message-ID: <20020313020423.11070.qmail@web20204.mail.yahoo.com> Using the boost cvs snapshot from a few hours ago I am experiencing problems with Visual C++ 6: cl.exe /nologo /MD /GR /GX /Zm350 -I"r:\cctbx\include" -I"r:\boost" -I"c:\Python21\include" -c uctbxmodule.cpp r:\boost\boost/python/detail/init_function.hpp(178) : error C2899: typename cannot be used outside a template declaration This is the code from init_function.hpp: template <class A1> static init* create(signature1<A1>) { return new init1<typename detail::parameter_traits<A1>::const_reference>; } I think this will work with VC6 (and any other compiler): template <class A1> static init* create(signature1<A1>) { typedef typename detail::parameter_traits<A1>::const_reference cr; return new init1<cr>; } Ralf __________________________________________________ Do You Yahoo!? Try FREE Yahoo! Mail - the world's greatest free email! http://mail.yahoo.com/ From david.abrahams at rcn.com Wed Mar 13 04:07:36 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Tue, 12 Mar 2002 22:07:36 -0500 Subject: [C++-sig] Visual C++ 6 error References: <20020313020423.11070.qmail@web20204.mail.yahoo.com> Message-ID: <00b001c1ca3c$b8b08830$0202a8c0@boostconsulting.com> Works for me, here. Which service pack you got? If you want to check in that change, though, be my guest. -Dave ----- Original Message ----- From: "Ralf W. 
Grosse-Kunstleve" To: Cc: Sent: Tuesday, March 12, 2002 9:04 PM Subject: [C++-sig] Visual C++ 6 error > Using the boost cvs snapshot from a few hours ago I am experiencing > problems with Visual C++ 6: > > cl.exe /nologo /MD /GR /GX /Zm350 -I"r:\cctbx\include" -I"r:\boost" > -I"c:\Python21\include" -c uctbxmodule.cpp > > r:\boost\boost/python/detail/init_function.hpp(178) : error C2899: > typename cannot be used outside a template declaration > > This is the code from init_function.hpp: > > template <class A1> > static init* create(signature1<A1>) { > return new init1<typename detail::parameter_traits<A1>::const_reference>; > } > > I think this will work with VC6 (and any other compiler): > > template <class A1> > static init* create(signature1<A1>) { > typedef typename detail::parameter_traits<A1>::const_reference cr; > return new init1<cr>; > } > > Ralf > > > __________________________________________________ > Do You Yahoo!? > Try FREE Yahoo! Mail - the world's greatest free email! > http://mail.yahoo.com/ > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig From Peter.Bienstman at rug.ac.be Sat Mar 16 21:58:07 2002 From: Peter.Bienstman at rug.ac.be (Peter Bienstman) Date: Sat, 16 Mar 2002 15:58:07 -0500 Subject: [C++-sig] runtime error with Intel compiler 6.0 beta under windows Message-ID: <000801c1cd2d$4618fe30$4b745b18@castor3141592> Hi, I recently converted my big project from gcc under windows to the beta of the intel 6.0 compiler. Compiling goes without problems, but now all my python scripts fail with: SystemError: error return without exception set. I did compile boost V1 and my C++ extensions with exception support (/GX). Any ideas? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.abrahams at rcn.com Sun Mar 17 04:22:44 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Sat, 16 Mar 2002 22:22:44 -0500 Subject: [C++-sig] runtime error with Intel compiler 6.0 beta under windows References: <000801c1cd2d$4618fe30$4b745b18@castor3141592> Message-ID: <017201c1cd63$68a69ad0$6f99accf@boostconsulting.com> This has nothing to do with C++ exceptions. It's Python telling you that some C/C++ function it called returned a NULL PyObject* without first calling PyErr_SetString() or one of the other related Python error reporting functions. Whether or not this has anything to do with Boost.Python I cannot say, but if the behavior only happens with the Intel compiler I'd be inclined to think it's a compiler bug. HTH, Dave ----- Original Message ----- From: "Peter Bienstman" To: Sent: Saturday, March 16, 2002 3:58 PM Subject: [C++-sig] runtime error with Intel compiler 6.0 beta under windows > Hi, > > I recently converted my big project from gcc under windows to the beta > of the intel 6.0 compiler. Compiling goes without problems, but now all > my python scripts fail with: > > SystemError: error return without exception set. > > I did compile boost V1 and my C++ extensions with exception support > (/GX). > > Any ideas? > From Peter.Bienstman at rug.ac.be Sun Mar 17 19:14:44 2002 From: Peter.Bienstman at rug.ac.be (Peter Bienstman) Date: Sun, 17 Mar 2002 13:14:44 -0500 Subject: [C++-sig] Re: runtime error with Intel compiler 6.0 beta under windows In-Reply-To: Message-ID: <003501c1cddf$9ecc8850$4b745b18@castor3141592> > From: "David Abrahams" > To: > Subject: Re: [C++-sig] runtime error with Intel compiler 6.0 > beta under windows > Date: Sat, 16 Mar 2002 22:22:44 -0500 > Reply-To: c++-sig at python.org > > This has nothing to do with C++ exceptions. 
It's Python > telling you that some C/C++ function it called returned a > NULL PyObject* without first calling PyErr_SetString() or one > of the other related Python error reporting function. Whether > or not this has anything to do with Boost.Python I cannot > say, but if the behavior only happens with the Intel compiler > I'd be inclined to think it's a compiler bug. > > HTH, > Dave For what it's worth, the problem also occurs with some of the Boost.Python testcases (e.g. abstract.cpp). The simple workaround is to compile boost_python.lib with the Microsoft compiler, and then the extensions themselves with the Intel compiler. Unfortunately, I don't have the knowledge of the inner workings of Boost.Python to distill a small testcase for Intel tech support. Peter From david.abrahams at rcn.com Sun Mar 17 20:13:47 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Sun, 17 Mar 2002 14:13:47 -0500 Subject: [C++-sig] Re: runtime error with Intel compiler 6.0 beta under windows References: <003501c1cddf$9ecc8850$4b745b18@castor3141592> Message-ID: <02a301c1cde7$df84b400$6f99accf@boostconsulting.com> V1 (the supported DLL version) doesn't even compile with the intel 6 beta for me. I'm fixing that up to see if I can reproduce your problems... ----- Original Message ----- From: "Peter Bienstman" To: Sent: Sunday, March 17, 2002 1:14 PM Subject: [C++-sig] Re: runtime error with Intel compiler 6.0 beta under windows > > From: "David Abrahams" > > To: > > Subject: Re: [C++-sig] runtime error with Intel compiler 6.0 > > beta under windows > > Date: Sat, 16 Mar 2002 22:22:44 -0500 > > Reply-To: c++-sig at python.org > > > > This has nothing to do with C++ exceptions. It's Python > > telling you that some C/C++ function it called returned a > > NULL PyObject* without first calling PyErr_SetString() or one > > of the other related Python error reporting function. 
Whether > > or not this has anything to do with Boost.Python I cannot > > say, but if the behavior only happens with the Intel compiler > > I'd be inclined to think it's a compiler bug. > > > > HTH, > > Dave > > For what it's worth, the problem also occurs with some of the > Boost.Python testcases (e.g. abstract.cpp). The simple workaround is to > compile boost_python.lib with the Microsoft compiler, and then the > extensions themselves with the Intel compiler. > > Unfortunately, I don't have the knowledge of the inner workings of > Boost.Python to distill a small testcase for Intel tech support. > > Peter > > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig > From greglandrum at mindspring.com Wed Mar 20 03:00:31 2002 From: greglandrum at mindspring.com (greg Landrum) Date: Tue, 19 Mar 2002 18:00:31 -0800 Subject: [C++-sig] Profiling object instantiation... a preliminary report In-Reply-To: <02cd01c1c5e6$33135720$0500a8c0@boostconsulting.com> References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> Message-ID: <5.1.0.14.2.20020319172850.033026c0@mail.mindspring.com> Hi, [Sorry for the long delay, a variety of obstacles arose which prevented me from looking into this any further until today.] Here's a bit of context from before: I'm concerned about the amount of time required to create objects in Boost (it takes 2-3 times as long to create a Boost object as to create a more-or-less equivalent CXX object). David has suggested that this is probably due to the attribute dictionary. (Which immediately raises the question: if I were willing to live with objects with read-only attributes could I avoid this speed hit?). Playing around a bit more with Vtune (Intel's performance analyzer), I've tracked down where time is being spent in my code. Both these profiles are for creation of 100,000 objects.
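As an illustrative aside, the attribute-dictionary hypothesis is easy to sketch in pure Python: a class declared with __slots__ allocates no per-instance dictionary, so timing it against an ordinary class isolates roughly that one allocation per object. (The classes below are invented stand-ins for this sketch, not the actual wrapped classes.)

```python
import timeit

class WithDict:
    """Ordinary class: every instance carries an attribute dictionary."""
    def __init__(self, x):
        self.x = x

class WithSlots:
    """Slotted class: no per-instance __dict__ is allocated."""
    __slots__ = ("x",)
    def __init__(self, x):
        self.x = x

# Sanity check: only the plain class has a per-instance dict.
assert hasattr(WithDict(1), "__dict__")
assert not hasattr(WithSlots(1), "__dict__")

n = 100_000  # same object count as the profiles discussed here
t_dict = timeit.timeit(lambda: WithDict(1), number=n)
t_slots = timeit.timeit(lambda: WithSlots(1), number=n)
print(f"with dict: {t_dict:.3f}s   with __slots__: {t_slots:.3f}s")
```

The absolute numbers depend on the machine; the point is only that the per-instance dictionary is a measurable share of bare object-creation cost.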
Because the big differences appear to be in the amount of time spent in Python code, I've included a bit of detail about time spent in Python21.dll.

**** CXX Wrapper: (runtime under profiler 3.2 sec)
OEChem.pyd 1.48 sec (this is my extension module)
Python21.dll 0.93 sec
  eval_code2 0.21 sec
ntdll.dll 0.32 sec

**** Boost.Python Wrapper: (runtime under profiler 5.2 sec)
Python21.dll 2.48 sec
  call_object 0.38 sec
  PyObject_GetAttrString 0.34 sec
  eval_code2 0.22 sec
rdchem.pyd 1.61 sec (this is my extension module)
ntdll.dll 0.59 sec

The similarities in the amount of time spent in the extension code itself reassure me that nothing scary is going on behind the scenes; the slowdown is happening in the wrapper layer. Hopefully I've provided enough information here to let others understand the problem and its cause. I guess the next step would be to inspire someone to take this issue seriously enough to help solve it. :-) At 06:41 AM 3/7/2002, David Abrahams wrote: >Interesting. I'd be very interested to hear how Boost.Python v2 performs >on your code, with or without overloads. Unfortunately, at the moment I don't have the time required to do this just for timing information. The wrapper itself is in "production" use, so I don't feel comfortable completely switching to the new version until it's reasonably stable. I can't afford to be constantly updating my wrapper layer. -greg From david.abrahams at rcn.com Wed Mar 20 06:21:03 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Wed, 20 Mar 2002 00:21:03 -0500 Subject: [C++-sig] Profiling object instantiation...
a preliminary report References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> <5.1.0.14.2.20020319172850.033026c0@mail.mindspring.com> Message-ID: <082301c1cfd3$cdc0fab0$6f99accf@boostconsulting.com> ----- Original Message ----- From: "greg Landrum" > Here's a bit of context from before: > I'm concerned about the amount of time required to create objects in Boost > (it takes 2-3 times as long to create a Boost object as to create a > more-or-less equivalent CXX object). > > David has suggested that this is probably due to the attribute > dictionary. I suggested that it might be due to a difference in the raw number of dynamic allocations being performed; the attribute dictionary would account for 1/3 of those. > (Which immediately raises the question: if I were willing to > live with objects with read-only attributes could I avoid this speed hit?). Not in Boost.Python v1. V2 will provide this capability, eventually. > Playing around a bit more with Vtune (Intel's performance analyzer), I've > tracked down where time is being spent in my code. > > Both these profiles are for creation of 100,000 objects. Because the big > differences appear to be in the amount of time spent in Python code, I've > included a bit of detail about time spent in Python21.dll Err, you didn't provide much detail at all, though... > **** CXX Wrapper: (runtime under profiler 3.2 sec) > OEChem.pyd 1.48 sec (this is my extension module) > Python21.dll 0.93 sec > eval_code2 0.21 sec > ntdll.dll 0.32 sec > > **** Boost.Python Wrapper: (runtime under profiler 5.2 seconds) > Python21.dll 2.48 sec > call_object 0.38 sec > PyObject_GetAttrString 0.34 sec This part could be due to attribute lookups. > eval_code2 0.22 sec > rdchem.pyd 1.61 sec (this is my extension module) > ntdll.dll 0.59 sec > > The similarities in the amount of time spent in the extension code itself > reassure me that nothing scary is going on behind the scenes; the > slowdown is happening in the wrapper layer.
> > Hopefully I've provided enough information here to let others understand > the problem and its cause. I guess the next step would be to inspire > someone to take this issue seriously enough to help solve it. :-) I'm not planning on making any major changes to Boost.Python v1. If someone wants to look at this, of course, I wouldn't stop them, but since I'm planning to retire that codebase and stop maintaining it I really don't want to make that kind of an investment. > At 06:41 AM 3/7/2002, David Abrahams wrote: > > >Interesting. I'd be very interested to hear how Boost.Python v2 performs > >on your code, with or without overloads. > > Unfortunately, at the moment I don't have the time required to do this just > for timing information. The wrapper itself is in "production" use, so I > don't feel comfortable completely switching to the new version until it's > reasonably stable. I can't afford to be constantly updating my wrapper layer. Understood. FWIW, I don't expect the interface to change much during the course of development; most of the important changes involve adding new features. -Dave From koethe at informatik.uni-hamburg.de Wed Mar 20 11:24:03 2002 From: koethe at informatik.uni-hamburg.de (Ullrich Koethe) Date: Wed, 20 Mar 2002 11:24:03 +0100 Subject: [C++-sig] Profiling object instantiation... a preliminary report References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> <5.1.0.14.2.20020319172850.033026c0@mail.mindspring.com> <082301c1cfd3$cdc0fab0$6f99accf@boostconsulting.com> Message-ID: <3C986343.8884EB31@informatik.uni-hamburg.de> greg Landrum wrote: > Hi, > David has suggested that this is probably due to the attribute > dictionary. (Which immediately raises the question: if I were willing to > live with objects with read-only attributes could I avoid this speed hit?).
I don't know about the inner workings of CXX, but one possibility immediately springs to mind: In BPL wrapped objects are like class instances, and any access to a special function (such as "__getattr__" or "__add__") involves at least one dictionary lookup. If, on the other hand, CXX used normal Python objects, the special functions would be stored in the appropriate slots of the objects' function tables (such as tp_getattr, nb_add), and the lookup would be much faster. David, could BPL v2 arrange it so that special functions are accessed via the function table rather than a dictionary lookup, if appropriate? This would be good for speed as __getattr__ and arithmetic functions tend to be used in inner loops. Regards Ulli -- ________________________________________________________________ | | | Ullrich Koethe Universität Hamburg / University of Hamburg | | FB Informatik / Dept. of Informatics | | AB Kognitive Systeme / Cognitive Systems Group | | | | Phone: +49 (0)40 42883-2573 Vogt-Koelln-Str. 30 | | Fax: +49 (0)40 42883-2572 D - 22527 Hamburg | | Email: u.koethe at computer.org Germany | | koethe at informatik.uni-hamburg.de | | WWW: http://kogs-www.informatik.uni-hamburg.de/~koethe/ | |________________________________________________________________| From david.abrahams at rcn.com Wed Mar 20 18:10:27 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Wed, 20 Mar 2002 12:10:27 -0500 Subject: [C++-sig] Profiling object instantiation... a preliminary report References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> <5.1.0.14.2.20020319172850.033026c0@mail.mindspring.com> <082301c1cfd3$cdc0fab0$6f99accf@boostconsulting.com> <3C986343.8884EB31@informatik.uni-hamburg.de> Message-ID: <08e001c1d032$d2ffb480$6f99accf@boostconsulting.com> V2 wrapped objects are based on new-style classes.
Python's new-style classes do fill in the function table each time an attribute matching a special function name is added, but the function placed in the table is just a function which does the attribute lookup. Since there's no way to generate an extern "C" linkage function with a template, I couldn't actually fill these in with direct calls in most cases anyway. However, the use of Python's new descriptor mechanism already cuts the number of attribute lookups by a factor of 2. By the way, Ralf's suggestion from many weeks ago that the best approach is probably to avoid crossing the language boundary for each of these unpicklings still sounds like the best way to speed the process up to me. -Dave ----- Original Message ----- From: "Ullrich Koethe" To: Sent: Wednesday, March 20, 2002 5:24 AM Subject: Re: [C++-sig] Profiling object instantiation... a preliminary report greg Landrum wrote: > Hi, > David has suggested that this is probably due to the attribute > dictionary. (Which immediately raises the question: if I were willing to > live with objects with read-only attributes could I avoid this speed hit?). I don't know about the inner workings of CXX, but one possibility immediately springs to mind: In BPL wrapped objects are like class instances, and any access to a special function (such as "__getattr__" or "__add__") involves at least one dictionary lookup. If, on the other hand, CXX used normal Python objects, the special functions would be stored in the appropriate slots of the objects' function tables (such as tp_getattr, nb_add), and the lookup would be much faster. David, could BPL v2 arrange it so that special functions are accessed via the function table rather than a dictionary lookup, if appropriate? This would be good for speed as __getattr__ and arithmetic functions tend to be used in inner loops.
Regards Ulli -- ________________________________________________________________ | | | Ullrich Koethe Universität Hamburg / University of Hamburg | | FB Informatik / Dept. of Informatics | | AB Kognitive Systeme / Cognitive Systems Group | | | | Phone: +49 (0)40 42883-2573 Vogt-Koelln-Str. 30 | | Fax: +49 (0)40 42883-2572 D - 22527 Hamburg | | Email: u.koethe at computer.org Germany | | koethe at informatik.uni-hamburg.de | | WWW: http://kogs-www.informatik.uni-hamburg.de/~koethe/ | |________________________________________________________________| _______________________________________________ C++-sig mailing list C++-sig at python.org http://mail.python.org/mailman/listinfo/c++-sig From koethe at informatik.uni-hamburg.de Thu Mar 21 14:54:20 2002 From: koethe at informatik.uni-hamburg.de (Ullrich Koethe) Date: Thu, 21 Mar 2002 14:54:20 +0100 Subject: [C++-sig] Profiling object instantiation... a preliminary report References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> <5.1.0.14.2.20020319172850.033026c0@mail.mindspring.com> <082301c1cfd3$cdc0fab0$6f99accf@boostconsulting.com> <3C986343.8884EB31@informatik.uni-hamburg.de> <08e001c1d032$d2ffb480$6f99accf@boostconsulting.com> Message-ID: <3C99E60C.133962E6@informatik.uni-hamburg.de> Hi, David Abrahams wrote: > > V2 wrapped objects are based on new-style classes. Python's new-style > classes do fill in the function table each time an attribute matching a > special function name is added, but the function placed in the table is > just a function which does the attribute lookup. > But then, why wouldn't it be possible to place a function into the table that does the lookup in another table (that just contains time-critical functions with C++ linkage) instead of a dictionary? > By the way, Ralf's suggestion from many weeks ago that the best approach > is probably to avoid crossing the language boundary for each of these > unpicklings still sounds like the best way to speed the process up to > me.
In theory I agree, but in practice it turned out that I like to prototype algorithms in Python, so that 50% faster execution of the inner loop would still make a difference to me. Regards Ulli -- ________________________________________________________________ | | | Ullrich Koethe Universität Hamburg / University of Hamburg | | FB Informatik / Dept. of Informatics | | AB Kognitive Systeme / Cognitive Systems Group | | | | Phone: +49 (0)40 42883-2573 Vogt-Koelln-Str. 30 | | Fax: +49 (0)40 42883-2572 D - 22527 Hamburg | | Email: u.koethe at computer.org Germany | | koethe at informatik.uni-hamburg.de | | WWW: http://kogs-www.informatik.uni-hamburg.de/~koethe/ | |________________________________________________________________| From david.abrahams at rcn.com Thu Mar 21 19:48:40 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Thu, 21 Mar 2002 13:48:40 -0500 Subject: [C++-sig] Profiling object instantiation... a preliminary report References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> <5.1.0.14.2.20020319172850.033026c0@mail.mindspring.com> <082301c1cfd3$cdc0fab0$6f99accf@boostconsulting.com> <3C986343.8884EB31@informatik.uni-hamburg.de> <08e001c1d032$d2ffb480$6f99accf@boostconsulting.com> <3C99E60C.133962E6@informatik.uni-hamburg.de> Message-ID: <01a401c1d109$2ad864a0$8a99accf@boostconsulting.com> ----- Original Message ----- From: "Ullrich Koethe" << But then, why wouldn't it be possible to place a function into the table that does the lookup in another table (that just contains time-critical functions with C++ linkage) instead of a dictionary? >> Yes. What makes you think you can do better than Python's highly-tuned dictionaries for this purpose? -Dave From koethe at informatik.uni-hamburg.de Thu Mar 21 20:46:32 2002 From: koethe at informatik.uni-hamburg.de (Ullrich Koethe) Date: Thu, 21 Mar 2002 20:46:32 +0100 Subject: [C++-sig] Profiling object instantiation...
a preliminary report References: <5.1.0.14.2.20020306193539.0315fc38@mail.earthlink.net> <5.1.0.14.2.20020319172850.033026c0@mail.mindspring.com> <082301c1cfd3$cdc0fab0$6f99accf@boostconsulting.com> <3C986343.8884EB31@informatik.uni-hamburg.de> <08e001c1d032$d2ffb480$6f99accf@boostconsulting.com> <3C99E60C.133962E6@informatik.uni-hamburg.de> <01a401c1d109$2ad864a0$8a99accf@boostconsulting.com> Message-ID: <3C9A3898.5EDE38DC@informatik.uni-hamburg.de> David Abrahams wrote: > > << But then, why wouldn't it be possible to place a function into the > > table that does the lookup in another table (that just contains time-critical > > functions with C++ linkage) instead of a dictionary? >> > > Yes. What makes you think you can do better than Python's highly-tuned > dictionaries for this purpose? > I don't want to replace Python's mechanism for all functions - only a fixed set of special functions will get special treatment. Since the set is fixed, those function pointers can be stored in an array rather than a hash table, so that lookup will be faster, no matter how cleverly the hash is implemented.

slot for special function => forward to function array
slot for not-so-special function => forward to function dictionary

Whether the gain is worth the trouble is another question, though... Ulli -- ________________________________________________________________ | | | Ullrich Koethe Universität Hamburg / University of Hamburg | | FB Informatik / Dept. of Informatics | | AB Kognitive Systeme / Cognitive Systems Group | | | | Phone: +49 (0)40 42883-2573 Vogt-Koelln-Str.
30 | | Fax: +49 (0)40 42883-2572 D - 22527 Hamburg | | Email: u.koethe at computer.org Germany | | koethe at informatik.uni-hamburg.de | | WWW: http://kogs-www.informatik.uni-hamburg.de/~koethe/ | |________________________________________________________________| From david.abrahams at rcn.com Mon Mar 25 20:00:28 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Mon, 25 Mar 2002 14:00:28 -0500 Subject: [C++-sig] Boost.Python V2: Virtual function support/changes to class_<> interface Message-ID: <015501c1d42f$e6828770$0500a8c0@boostconsulting.com> Virtual function support is now implemented, including support for abstract base classes. See libs/python/test/virtual_functions.* in the CVS for examples. Some (mostly-backward-compatible) changes were made to the class_ interface: * The order of the optional arguments is arbitrary. The meaning of the arguments is deduced by the library * You can't use an arbitrary MPL sequence to specify base classes anymore. Instead, you must use an instantiation of boost::python::bases. * To expose a class T with no publicly-available copy constructor (e.g. an abstract base class), pass boost::noncopyable (from ) as one of the optional arguments. You can also use this option if you intend never to wrap a function returning T objects by-value or by const reference. * To specify how a C++ instance is held by a Python class instance, you no longer pass a generator as a template argument to class_<>. The rules are: 1. if you don't specify, or you pass T as an optional argument, the instance is held by-value 2. if you pass a class U derived from T as an optional argument, the actual type of the C++ object held is U. U's constructor(s) must accept an initial additional PyObject* which stores a back-reference to the owning Python object. This is the old familiar virtual function callback interface. 3. If you pass a smart pointer to T as an optional argument, the T instance is held by one of these smart pointers 3a. 
The traits class boost::python::detail::pointee is used to extract the element_type of the smart pointer. This works by default with any smart pointer that defines an element_type (boost::shared_ptr, std::auto_ptr, etc), but may need to be specialized for other smart pointers. 4. If you pass a smart pointer to U derived from T as an optional argument, the actual type of the C++ object is U. Similar constraints apply as in case 2. In cases 1 and 3, you can optionally specify that T itself gets initialized with an optional, hidden back-reference argument by specializing the boost::python::has_back_reference traits class: struct MyType { MyType(PyObject* self_, char* name_) : self(self_), name(name_) {} PyObject* self; std::string name; }; namespace boost { namespace python { template struct has_back_reference { BOOST_STATIC_CONSTANT(bool, value = true); }; }} ... .add( class .def_init(args()) ) ... Some people want this information stored in the class instance in order to do special reference-counting tricks, etc. +---------------------------------------------------------------+ David Abrahams C++ Booster (http://www.boost.org) O__ == Pythonista (http://www.python.org) c/ /'_ == resume: http://users.rcn.com/abrahams/resume.html (*) \(*) == email: david.abrahams at rcn.com +---------------------------------------------------------------+ From rwgk at yahoo.com Mon Mar 25 23:01:28 2002 From: rwgk at yahoo.com (Ralf W. Grosse-Kunstleve) Date: Mon, 25 Mar 2002 14:01:28 -0800 (PST) Subject: [C++-sig] Boost.Python V2: Virtual function support/changes to class_<> interface In-Reply-To: <015501c1d42f$e6828770$0500a8c0@boostconsulting.com> Message-ID: <20020325220128.56902.qmail@web20203.mail.yahoo.com> --- David Abrahams wrote: David, thanks for the update! > namespace boost { namespace python { > template > struct has_back_reference > { > BOOST_STATIC_CONSTANT(bool, value = true); > }; > }} I am confused. Is this correct? 
Or should it be template<> struct has_back_reference<MyType> Thanks, Ralf __________________________________________________ Do You Yahoo!? Yahoo! Movies - coverage of the 74th Academy Awards? http://movies.yahoo.com/ From david.abrahams at rcn.com Mon Mar 25 23:32:00 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Mon, 25 Mar 2002 17:32:00 -0500 Subject: [C++-sig] Boost.Python V2: Virtual function support/changes to class_<> interface References: <20020325220128.56902.qmail@web20203.mail.yahoo.com> Message-ID: <025601c1d44d$6a96e9d0$0500a8c0@boostconsulting.com> "My bad"; Ralf's right! ----- Original Message ----- From: "Ralf W. Grosse-Kunstleve" > I am confused. Is this correct? Or should it be > > template<> > struct has_back_reference<MyType> > > Thanks, > Ralf From david.abrahams at rcn.com Tue Mar 26 18:10:18 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Tue, 26 Mar 2002 12:10:18 -0500 Subject: [C++-sig] Boost.Python v2: implicit conversion support Message-ID: <016f01c1d4e9$1c141070$0202a8c0@boostconsulting.com> Support for C++ implicit conversions has now been added to the v2 codebase. Please see libs/python/test/implicit.* for examples. The facility looks like: boost::python::implicitly_convertible<SourceType, DestType>(); which declares that SourceType is implicitly convertible to DestType. This declaration allows a Python object which can be converted to SourceType to be passed to a wrapped C++ function in place of a DestType or DestType const& parameter. -Dave +---------------------------------------------------------------+ David Abrahams C++ Booster (http://www.boost.org) O__ == Pythonista (http://www.python.org) c/ /'_ == resume: http://users.rcn.com/abrahams/resume.html (*) \(*) == email: david.abrahams at rcn.com +---------------------------------------------------------------+ From rwgk at yahoo.com Tue Mar 26 19:23:39 2002 From: rwgk at yahoo.com (Ralf W.
Grosse-Kunstleve) Date: Tue, 26 Mar 2002 10:23:39 -0800 (PST) Subject: [C++-sig] Boost.Python v2: implicit conversion support In-Reply-To: <016f01c1d4e9$1c141070$0202a8c0@boostconsulting.com> Message-ID: <20020326182339.28550.qmail@web20203.mail.yahoo.com> --- David Abrahams wrote: > Support for C++ implicit conversions has now been added to the v2 > codebase. Please see libs/python/test/implicit.* for examples. Cool! Could this mechanism also be used to convert, e.g., python tuples to std::vector? implicitly_convertible<boost::python::tuple, std::vector>(); I guess not since there is no appropriate C++ implicit conversion. What about something like this: struct my_converter_type { void operator()(const boost::python::tuple& t, std::vector& v) const { // convert the elements of the tuple and store in v } }; implicitly_convertible< boost::python::tuple, std::vector, my_converter_type>(); Thanks, Ralf From ponderor at lycos.com Thu Mar 28 05:20:26 2002 From: ponderor at lycos.com (Dean Goodmanson) Date: Wed, 27 Mar 2002 20:20:26 -0800 Subject: [C++-sig] newbie boost questions Message-ID: Greetings, The boost.org site (http://www.boost.org/libs/python/doc/) leaves me lost on how to download & install boost's Python wrapping tools. Am I missing something, or is this not the site to start with... Can you recommend some references for using Python as a C++ developer's testing ground? I apologize for my idiosyncrasies... python gets my reserve brain power. I wouldn't have asked except that these issues may have some timely relevance. (About me: Visual C++ 6 & 7, and Mac CodeWarrior C++ application developer. Python lurker & hopeful.)
-Dean From david.abrahams at rcn.com Fri Mar 29 01:56:23 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Thu, 28 Mar 2002 19:56:23 -0500 Subject: [C++-sig] newbie boost questions References: Message-ID: <0dcd01c1d6bc$e5918700$0202a8c0@boostconsulting.com> ----- Original Message ----- From: "Dean Goodmanson" > Greetings, > > The boost.org site (http://www.boost.org/libs/python/doc/) leaves me lost on how to download & install boost's Python wrapping tools. Am I missing something, or is this not the site to start with... You need to download all of boost. See www.boost.org for instructions. To build Boost.Python, use Boost.Jam in the libs/python/build directory. > Can you recommend some references for using Python as a C++ developer's testing ground? Not I; maybe others can. HTH, Dave From casado2 at llnl.gov Fri Mar 29 17:42:26 2002 From: casado2 at llnl.gov (Martin Casado) Date: Fri, 29 Mar 2002 08:42:26 -0800 Subject: [C++-sig] Boost.Python v2: implicit conversion support In-Reply-To: <20020326182339.28550.qmail@web20203.mail.yahoo.com> References: <20020326182339.28550.qmail@web20203.mail.yahoo.com> Message-ID: <02032908422601.10158@avalanche.llnl.gov> > --- David Abrahams wrote: > > Support for C++ implicit conversions has now been added to the v2 > > codebase. Please see libs/python/test/implicit.* for examples. > > Cool! > Could this mechanism also be used to convert, e.g., python tuples > to std::vector? Ralf, I wrote a conversion mechanism for python sequence types to c++ container types using boost type coercion. It works pretty well; however, I was having trouble with recursive types, i.e. lists of lists, and haven't had the time to look back into that. If you are interested I can forward you my code.
> What about something like this: > > struct my_converter_type { > operator()(const boost::python::tuple& t, std::vector& v) const > { > // convert the elements of the tuple and store in v > }; > }; > > implicitly_convertible< > boost::python::tuple, > std::vector, > my_converter_type>(); > > Thanks, > Ralf > > > __________________________________________________ > Do You Yahoo!? > Yahoo! Movies - coverage of the 74th Academy Awards? > http://movies.yahoo.com/ > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig From david.abrahams at rcn.com Fri Mar 29 17:48:17 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Fri, 29 Mar 2002 11:48:17 -0500 Subject: [C++-sig] Boost.Python v2: implicit conversion support References: <20020326182339.28550.qmail@web20203.mail.yahoo.com> <02032908422601.10158@avalanche.llnl.gov> Message-ID: <0efc01c1d742$07c8fb70$0202a8c0@boostconsulting.com> Martin, Why don't you post it? I was planning on doing this too, and it sure wouldn't hurt to have a head start. Probably just a couple of hints from me will set you in the right direction... -Dave ----- Original Message ----- From: "Martin Casado" To: Sent: Friday, March 29, 2002 11:42 AM Subject: Re: [C++-sig] Boost.Python v2: implicit conversion support > --- David Abrahams wrote: > > Support for C++ implicit conversions has now been added to the v2 > > codebase. Please see libs/python/test/implicit.* for examples. > > Cool! > Could this mechanism also be used to convert, e.g., python tuples > to std::vector? Ralf, I wrote a conversion mechanism for python sequence types to c++ container types using boost type coersion. It works pretty well, however I was having troubles with recursive types, i.e. lists of lists and haven't had the time to look back into that. If you are interested I can forward you my code. 
~~m > implicitly_convertible >(); > > I guess not since there is no appropriate C++ implicit conversion. > What about something like this: > > struct my_converter_type { > operator()(const boost::python::tuple& t, std::vector& v) const > { > // convert the elements of the tuple and store in v > }; > }; > > implicitly_convertible< > boost::python::tuple, > std::vector, > my_converter_type>(); > > Thanks, > Ralf > > > __________________________________________________ > Do You Yahoo!? > Yahoo! Movies - coverage of the 74th Academy Awards? > http://movies.yahoo.com/ > > _______________________________________________ > C++-sig mailing list > C++-sig at python.org > http://mail.python.org/mailman/listinfo/c++-sig _______________________________________________ C++-sig mailing list C++-sig at python.org http://mail.python.org/mailman/listinfo/c++-sig From david.abrahams at rcn.com Sat Mar 30 14:24:00 2002 From: david.abrahams at rcn.com (David Abrahams) Date: Sat, 30 Mar 2002 08:24:00 -0500 Subject: [C++-sig] Boost.Python V2: direct access to data members Message-ID: <00eb01c1d7ee$641831a0$0202a8c0@boostconsulting.com> Direct access to data members is now implemented in Boost.Python v2, using the same def_readonly()/def_readwrite() interface as in v1. See libs/python/test/data_members.* for examples. -Dave +---------------------------------------------------------------+ David Abrahams C++ Booster (http://www.boost.org) O__ == Pythonista (http://www.python.org) c/ /'_ == resume: http://users.rcn.com/abrahams/resume.html (*) \(*) == email: david.abrahams at rcn.com +---------------------------------------------------------------+
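From the Python side, the difference between def_readonly() and def_readwrite() can be sketched in pure Python with properties (the class and attribute names below are invented for illustration; real wrapped attributes are backed by C++ data members rather than a Python-level property):

```python
class Wrapped:
    """Hypothetical stand-in for a class_<>-wrapped C++ type."""
    def __init__(self, x):
        self._x = x

    # Analogue of def_readwrite("x", ...): reads and writes both allowed.
    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        self._x = value

    # Analogue of def_readonly("x_squared", ...): reads only; assignment
    # raises AttributeError, as it would for a read-only wrapped member.
    @property
    def x_squared(self):
        return self._x * self._x

w = Wrapped(3)
w.x = 4                      # read-write attribute: assignment allowed
assert w.x == 4 and w.x_squared == 16
try:
    w.x_squared = 0          # read-only attribute: rejected
except AttributeError:
    pass                     # expected
else:
    raise AssertionError("read-only attribute accepted a write")
```

This only mirrors the observable behavior; the actual wrapper dispatches to getter/setter functions generated from the C++ member pointers.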